K8S-5 -- Cloud Native Foundation / k8s Foundation and Components / Deploying a k8s Cluster from Binaries

Day 5 tasks:
1. Summarize the function of each k8s component
2. Summarize the scheduling process for creating a pod in k8s
3. Deploy a k8s cluster from binaries
Network components: calico, coredns, dashboard

1, Cloud native Foundation:

CNCF cloud native container ecosystem overview:
http://dockone.io/article/3006

In 2013, the Docker project was officially released.
In 2014, the Kubernetes project was officially released.
In 2015, Google, Red Hat and Microsoft led the establishment of the CNCF (Cloud Native Computing Foundation).
By 2018, its third anniversary, the CNCF had 195 members, 19 foundation projects and 11 incubation projects.

Definition of cloud native:
https://github.com/cncf/toc/blob/main/DEFINITION.md#%E4%B8%AD%E6
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably.
The Cloud Native Computing Foundation (CNCF) seeks to drive adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects. We democratize state-of-the-art patterns to make these innovations accessible for everyone.

Cloud native technology stack:
Containers: container runtime technology, represented by Docker.
Service mesh: service-to-service communication infrastructure, i.e. Service Mesh implementations.
Microservices: in a microservice architecture, a project is composed of several loosely coupled, independently deployable smaller components or services.
Immutable infrastructure: the basic runtime environment an application needs; "immutable" means the servers (e.g. images) running the service are never modified after deployment -- they are replaced instead.
Declarative API: describes the desired state of the application and leaves it to the system to decide how to reach it; for example, you declare a pod (or a set of replicas) and k8s creates and maintains it -- see the sketch below.
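As a minimal illustration of the declarative style (a sketch only -- the deployment name and image below are placeholders, not part of this course's environment), you declare the desired state and let k8s reconcile it:
cat <<'EOF' > nginx-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2                 # desired number of pod copies; k8s keeps this count
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.20     # placeholder image
EOF
kubectl apply -f nginx-demo.yaml          # declare the desired state
kubectl get deploy,pod -l app=nginx-demo  # k8s has created and now maintains the replicas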

Cloud native features:
Follows the twelve-factor methodology for building applications:
1. Codebase: one codebase tracked in version control, many deploys.
2. Dependencies: explicitly declare and isolate dependencies.
3. Config: store configuration in the environment.
4. Backing services: treat backing services as attached resources.
5. Build, release, run: strictly separate the build and run stages.
6. Processes: execute the app as one or more stateless processes.
7. Port binding: export services via port binding.
8. Concurrency: scale out via the process model.
9. Disposability: maximize robustness with fast startup and graceful shutdown.
10. Dev/prod parity: keep development, staging, and production as similar as possible.
11. Logs: collect, store and display the output streams of all running processes and backing services as time-ordered event streams.
12. Admin processes: run one-off administrative tasks (data backups, etc.) in the same environment as the regular long-running processes.
Microservice-oriented architecture
Self-service agile architecture
API-based collaboration
Antifragility

Cloud native CNCF official website:
https://www.cncf.io/

Cloud native landscape:
https://landscape.cncf.io/
There are 16 graduated projects, including Kubernetes, Prometheus, Harbor, etcd, Envoy, CoreDNS, Rook and Helm
24 incubating projects
Multiple sandbox projects

2, Introduction to k8s foundation and components

Official website: https://kubernetes.io/zh/
github: https://github.com/kubernetes/kubernetes
k8s design architecture: (architecture diagram, each node)
https://www.kubernetes.org.cn/kubernetes%E8%AE%BE%E8%AE%A1%E6%9E%B6%E6%9E%84
k8s official Chinese documents:
http://docs.kubernetes.org.cn/

REST-API (network interface):
https://github.com/Arachni/arachni/wiki/REST-API

Component introduction:
Introduction to components on the official website: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/

1,kube-apiserver

Provides the single entry point for resource operations, and supplies mechanisms such as authentication, authorization, admission control, API registration and discovery.
The default port is 6443; the configuration file is /etc/systemd/system/kube-apiserver.service.
Access flow: authenticate the token -> check authorization/permissions (create, delete, update, get -- read and write) -> validate that the data/instruction/parameters are legal -> execute the operation -> return the result.
Operators interact with the apiserver through the kubectl command line or the dashboard. When data is queried, the apiserver reads it from etcd; when a container is created, kube-scheduler performs the scheduling, and kube-controller-manager maintains the desired number of container replicas.
kubelet accepts instructions and manages the lifecycle of the containers on its node, including creating containers, deleting containers, initializing containers, running container probes, etc. Once created, containers are wrapped in a pod, which is the smallest unit k8s operates on.
kube-proxy is the network component that maintains the network rules (iptables or ipvs) on the current node. User traffic reaches the pods through kube-proxy, including URLs, image services, etc.
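A quick way to see this interaction path in practice (a sketch; run on a host with a working kubeconfig, e.g. a master node after the cluster in part 3 is deployed):
kubectl cluster-info               # shows the apiserver endpoint (https://<ip>:6443)
kubectl get --raw='/healthz'       # queries the apiserver health endpoint through its REST API
kubectl get --raw='/version'       # returns the apiserver version information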

2,kube-controller-manager

Maintains cluster state, e.g. fault detection, automatic scaling, rolling updates, etc.
Responsible for managing Nodes, Pod replicas, service Endpoints, Namespaces, service accounts and resource quotas in the cluster.
When a Node goes down unexpectedly, the Controller Manager discovers it in time and runs the automatic repair process to keep the pod replicas in the cluster in the expected working state.
pod high availability mechanism:
node-monitor-period: node monitoring period (e.g. 5s)
node-monitor-grace-period: grace period of node monitoring (e.g. 40s)
pod-eviction-timeout: timeout before pod eviction (e.g. 5min)
The node status is checked every 5s. If no heartbeat is received, the controller waits 40s and then marks the node as unreachable; if the node has not recovered after 5 minutes, the pods on it are evicted and rebuilt on other nodes (see the flag check below).
kubectl get pod
kubectl get ep -A
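These intervals are kube-controller-manager flags; a hedged way to check them on a binary install (the service file path below is what kubeasz typically generates, adjust if yours differs):
grep -E 'node-monitor-period|node-monitor-grace-period|pod-eviction-timeout' \
    /etc/systemd/system/kube-controller-manager.service
# Example flag values matching the mechanism above:
#   --node-monitor-period=5s
#   --node-monitor-grace-period=40s
#   --pod-eviction-timeout=5m0s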

The master's kube-controller-manager and kube-scheduler do not communicate with the nodes directly; instead they query data from etcd through the apiserver to learn the status of nodes/pods.
See the configuration files for how the master components communicate with each other.

3,kube-scheduler

Responsible for resource scheduling: schedules Pods onto the appropriate machines according to predefined scheduling policies.
Selects a suitable node for the pod via the scheduling algorithm and writes the binding information to etcd.
kubelet watches the pod binding information through the kube-apiserver, obtains the list of pods assigned to it, pulls the images and starts the containers.
Policies:
LeastRequestedPriority: prefer the node with the lowest resource consumption (CPU + memory) in the candidate node list.
CalculateNodeLabelPriority: prefer nodes that carry the specified label.
BalancedResourceAllocation: prefer the node with the most balanced resource utilization in the candidate node list.
Scheduling process for creating a pod (see the sketch below):
Step 1: pods are created and scheduled one by one
Step 2: filter out nodes with insufficient resources (predicates)
Step 3: rank the remaining candidate nodes (priorities/scoring)
Step 4: select a node and bind the pod to it
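To observe the result of this process, inspect a pod's events after it is created (a sketch; <pod-name> is a placeholder):
kubectl describe pod <pod-name> | grep -A 5 Events
# a successful scheduling decision shows an event like:
#   Scheduled ... Successfully assigned default/<pod-name> to <node-ip>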

4,kubelet

Responsible for maintaining the life cycle of the container and managing Volume (CVI) and network (CNI);
Runs on the worker nodes and monitors the pods.
Reports node status information to the master; accepts instructions and creates docker containers inside pods; prepares the data volumes a pod requires; reports the running status of pods; performs container health checks on the node.
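To check kubelet on a worker node (a sketch; kubeasz installs kubelet as a systemd service):
systemctl status kubelet --no-pager        # kubelet service state on the node
journalctl -u kubelet -n 20 --no-pager     # recent kubelet logs (pod lifecycle, probes, etc.)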

5,kube-proxy

Responsible for providing Service discovery and load balancing within the cluster for services;
Relationship between kube-proxy and Service: kube-proxy --watch--> k8s-apiserver
kube-proxy watches the k8s apiserver; whenever a Service resource changes (the k8s API is called to modify Service information), kube-proxy adjusts its load-balancing rules accordingly so that the Service always reflects the latest state.
kube-proxy runs on every node, watches for changes to Service objects in the API server, and implements network forwarding by managing iptables or IPVS rules.
It watches Service objects and Endpoints.
Working modes supported by different kube-proxy versions:
userspace: the earliest mode, used before v1.1; phased out from v1.2 onwards
iptables: supported since v1.1; the default since v1.2
IPVS: introduced in v1.9, GA since v1.11; requires the ipvsadm and ipset packages and the ip_vs kernel module
Difference:
IPVS is more efficient and offers more load-balancing options: rr (round robin), lc (least connections), dh (destination hashing), sh (source hashing), sed (shortest expected delay), nq (never queue), etc.
Configuring IPVS and the scheduling algorithm (see also the mode check below):
https://kubernetes.io/zh/docs/reference/config-api/kube-proxy-config.v1alpha1/#ClientConnectionConfiguration
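To confirm which mode kube-proxy is actually using on a node (a sketch; 10249 is kube-proxy's default metrics port, and ipvsadm must be installed for the second command):
curl 127.0.0.1:10249/proxyMode    # prints "ipvs" or "iptables"
ipvsadm -Ln                       # when IPVS is enabled: list virtual servers and their scheduler (rr, lc, ...)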

6. Others:

Container runtime:
Responsible for image management and for actually running Pods and containers (CRI);

Ingress Controller:
Provide Internet access for services

Heapster:
Provide resource monitoring

Federation:
Provide clusters across availability zones

Fluentd-elasticsearch:
Provide cluster log collection, storage and query

3, Deploying a highly available k8s cluster from binaries

Automated binary-based k8s deployment with ansible (kubeasz):
https://github.com/easzlab/kubeasz

Resource recommendations:

System: Ubuntu 20.04.3, kubeasz v3.1.0, k8s v1.21.0, docker v19.03.15
node:   48C 256G RAM, SSD/2T disk, 10G/25G network card
master: 16C 16G RAM, 200G disk
etcd:   8C 16G RAM, 150G SSD

Single-master environment:
1 master, 2+ nodes, 1 etcd

Multi-master environment:
2 masters, 2+ nodes, 3 etcd, 2 haproxy + keepalived, 2 harbor, 2 ansible (co-located with the masters)

Hands-on environment:
3 masters, 1 kubeasz (co-located with master1), 3 nodes, 3 etcd, 2 haproxy, 2 harbor

Minimum experimental environment:
1 master + kubeasz, 2 nodes, 1 etcd, 1 haproxy + keepalived, 1 harbor
2C / 4G RAM / 40G disk per VM

k8s highly available reverse proxy (haproxy and keepalived production applications):
http://blogs.studylinux.net/?p=4579

Servers prepared for this experiment and their IP addresses (192.168.150.x), each VM 2C / 4G RAM / 40G disk:
master + kubeasz: 151, 152, 153
harbor: 154, 155
etcd: 156, 157, 158
haproxy + keepalived: 159, 160 (VIP: 188)
node: 161, 162

1. Basic environment preparation:

#ubuntu time synchronization:
# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8
# crontab -l
 */5 * * * * /usr/sbin/ntpdate time1.aliyun.com &>/dev/null && hwclock -w

#Configure apt source
[root@k8s-harbor1 ~]#vim /etc/apt/sources.list
# The source image is annotated by default to improve apt update speed. You can cancel the annotation if necessary
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# Pre release software source, not recommended
# deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
[root@k8s-harbor1 ~]#apt update

#Install docker on all master, etcd, harbor and node nodes:
#tar xvf docker-19.03.15-binary-install.tar.gz -C /usr/local/src/
#cd /usr/local/src/
#sh docker-install.sh 
#verification:
#docker version
#docker info
#systemctl status docker

2,harbor:

#1. Install and start harbor:
[root@k8s-harbor1 ~]#mkdir /apps
[root@k8s-harbor1 ~]#tar xvf harbor-offline-installer-v2.3.2.tgz -C /apps/
#Certificate issued:
[root@k8s-harbor1 ~]#cd /apps/harbor/
[root@k8s-harbor1 harbor]#mkdir certs && cd certs
[root@k8s-harbor1 certs]#openssl genrsa -out harbor-ca.key
[root@k8s-harbor1 certs]#openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=harbor.magedu.net" -days 7120 -out harbor-ca.crt 
#Modify the configuration file:
[root@k8s-harbor1 certs]#cd ..
[root@k8s-harbor1 harbor]#cp harbor.yml.tmpl harbor.yml
[root@k8s-harbor1 harbor]#vim harbor.yml
hostname: harbor.magedu.net #must match the "/CN=harbor.magedu.net" in the certificate
http:
  port: 80
https:
  port: 443
  certificate: /apps/harbor/certs/harbor-ca.crt #Certificate path
  private_key: /apps/harbor/certs/harbor-ca.key #key path
harbor_admin_password: 123456  #password
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
data_volume: /data
#Perform the automatic installation and startup of harbor:
[root@k8s-harbor1 harbor]#./install.sh --with-trivy
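#Optionally confirm that all harbor containers came up (a hedged check; install.sh drives docker-compose, so docker-compose must be on the PATH):
[root@k8s-harbor1 harbor]#docker-compose ps        #every harbor service should show Up (healthy)
[root@k8s-harbor1 harbor]#docker ps | grep goharbor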

#2. Verification:
#Client node access authentication:
[root@k8s-harbor2 ~]#mkdir /etc/docker/certs.d/harbor.magedu.net -p #Create the certificate directory on the client
[root@k8s-harbor1 ~]#scp /apps/harbor/certs/harbor-ca.crt 192.168.150.155:/etc/docker/certs.d/harbor.magedu.net
[root@k8s-harbor2 ~]#vim /etc/hosts #Add host file parsing
192.168.150.154 harbor.magedu.net
[root@k8s-harbor2 ~]#systemctl restart docker #Restart docker
#Test login harbor
[root@k8s-harbor2 ~]#docker login harbor.magedu.net
admin
123456
Login Succeeded
#Browser login verification:
#To access from the local Windows browser, add a hosts entry:
C:\Windows\System32\drivers\etc\hosts
192.168.150.154  harbor.magedu.net
#Then open https://harbor.magedu.net/ in the browser

#3. Testing
#Test push image to harbor:
[root@k8s-harbor2 ~]#docker pull alpine
[root@k8s-harbor2 ~]#docker tag alpine harbor.magedu.net/library/alpine
[root@k8s-harbor2 ~]#docker push harbor.magedu.net/library/alpine
#Verify the image in the harbor web UI

3,haproxy+keepalived:

[root@k8s-ha1 ~]#apt install keepalived haproxy -y
#keepalived:
[root@k8s-ha1 ~]#cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
[root@k8s-ha1 ~]#vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
  
global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 1
    priority 100
    advert_int 3
        #unicast_src_ip 192.168.150.151
        #unicast_peer {
        #       192.168.150.160
        #}

    authentication {
        auth_type PASS
        auth_pass 123abc
    }
    virtual_ipaddress {
         192.168.150.188 dev eth0 label eth0:1
    }
}

[root@k8s-ha1 ~]#systemctl restart keepalived
[root@k8s-ha1 ~]#systemctl enable keepalived
#Verify that the VIP has been bound successfully
[root@k8s-ha1 ~]#ifconfig eth0:1
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.150.188  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:34:31:13  txqueuelen 1000  (Ethernet)
[root@k8s-master1 ~]#ping 192.168.150.188 #master1 validation
#haproxy: 
#Allow haproxy to bind IP addresses that are not present on this machine
[root@k8s-ha1 ~]#sysctl -a| grep net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 0
[root@k8s-ha1 ~]#echo "1" > /proc/sys/net/ipv4/ip_nonlocal_bind
[root@k8s-ha1 ~]#vim /etc/haproxy/haproxy.cfg 
listen k8s-6443
  bind 192.168.150.188:6443
  mode tcp
  server k8s1 192.168.150.151:6443 check inter 3s fall 3 rise 5
  #server k8s2 192.168.150.152:6443 check inter 3s fall 3 rise 5
  #server k8s3 192.168.150.153:6443 check inter 3s fall 3 rise 5
[root@k8s-ha1 ~]#systemctl restart haproxy
[root@k8s-ha1 ~]#systemctl enable haproxy
[root@k8s-ha1 ~]#ss -ntl
LISTEN    0     491      192.168.150.188:6443      0.0.0.0:*  

4. kubeasz deployment:

#Install ansible (either option works)
#Option 1: the distro (older) version
[root@k8s-master1 ~]#apt install ansible -y 
#Option 2: a newer version via pip
[root@k8s-master1 ~]#apt install python3-pip git -y
[root@k8s-master1 ~]#pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/
[root@k8s-master1 ~]#ansible --version
ansible [core 2.11.6] 
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0]
  jinja version = 2.10.1
  libyaml = True

#Configure passwordless (key-based) SSH login
[root@k8s-master1 ~]#ssh-keygen 
[root@k8s-master1 ~]#apt install sshpass
[root@k8s-master1 ~]#vim scp.sh
#!/bin/bash
#Target host list
IP="
192.168.150.151
192.168.150.152
192.168.150.153
192.168.150.154
192.168.150.155
192.168.150.156
192.168.150.157
192.168.150.158
192.168.150.159
192.168.150.160 
192.168.150.161
192.168.150.162
192.168.150.163
" 
for node in ${IP};do
  sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
        echo "${node} ssh key copy complete"
  else 
        echo "${node} ssh key copy failed"
  fi
done 

#Run the ssh key distribution script
[root@k8s-master1 ~]#sh scp.sh

#The deployment node downloads the kubeasz project and components
#Use master1 as the deployment node
[root@k8s-master1 ~]#export release=3.1.0
[root@k8s-master1 ~]#curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
[root@k8s-master1 ~]#chmod a+x ./ezdown
[root@k8s-master1 ~]#vim ezdown
DOCKER_VER=19.03.15
KUBEASZ_VER=3.1.0
K8S_BIN_VER=v1.21.0
[root@k8s-master1 ~]#./ezdown -D # Using tool scripts to download
[root@k8s-master1 ~]#ll /etc/kubeasz/down/
total 1245988
drwxr-xr-x  2 root root      4096 Nov 25 09:46 ./
drwxrwxr-x 11 root root       209 Nov 25 09:42 ../
-rw-------  1 root root 451969024 Nov 25 09:44 calico_v3.15.3.tar
-rw-------  1 root root  42592768 Nov 25 09:44 coredns_1.8.0.tar
-rw-------  1 root root 227933696 Nov 25 09:45 dashboard_v2.2.0.tar
-rw-r--r--  1 root root  69158342 Nov 25 09:40 docker-20.10.5.tgz
-rw-------  1 root root  58150912 Nov 25 09:45 flannel_v0.13.0-amd64.tar
-rw-------  1 root root 124833792 Nov 25 09:44 k8s-dns-node-cache_1.17.0.tar
-rw-------  1 root root 179014144 Nov 25 09:46 kubeasz_3.1.0.tar
-rw-------  1 root root  34566656 Nov 25 09:45 metrics-scraper_v1.0.6.tar
-rw-------  1 root root  41199616 Nov 25 09:45 metrics-server_v0.3.6.tar
-rw-------  1 root root  45063680 Nov 25 09:45 nfs-provisioner_v4.0.1.tar
-rw-------  1 root root    692736 Nov 25 09:45 pause.tar
-rw-------  1 root root    692736 Nov 25 09:45 pause_3.4.1.tar

#Generate the cluster's ansible hosts and config.yml files, then edit them (see the sketch below for typical hosts entries):
[root@k8s-master1 ~]#cd /etc/kubeasz
[root@k8s-master1 kubeasz]#./ezctl new k8s-01
[root@k8s-master1 kubeasz]#vim clusters/k8s-01/hosts 
[root@k8s-master1 kubeasz]#vim clusters/k8s-01/config.yml 
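#The exact contents of these two files depend on the kubeasz version; a hedged sketch of the hosts entries that matter for this minimal environment (section and variable names as used by kubeasz 3.x, IPs from the plan above):
[etcd]
192.168.150.156

[kube_master]
192.168.150.151

[kube_node]
192.168.150.161
192.168.150.162

[all:vars]
CONTAINER_RUNTIME="docker"
CLUSTER_NETWORK="calico"
SERVICE_CIDR="10.100.0.0/16"
CLUSTER_CIDR="10.200.0.0/16"
CLUSTER_DNS_DOMAIN="magedu.local"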

#Deploy k8s cluster
[root@k8s-master1 kubeasz]#./ezctl help 
Usage: ezctl COMMAND [args]
-------------------------------------------------------------------------------------
Cluster setups:
    list		             to list all of the managed clusters
    checkout    <cluster>            to switch default kubeconfig of the cluster
    new         <cluster>            to start a new k8s deploy with name 'cluster'
    setup       <cluster>  <step>    to setup a cluster, also supporting a step-by-step way
    start       <cluster>            to start all of the k8s services stopped by 'ezctl stop'
    stop        <cluster>            to stop all of the k8s services temporarily
    upgrade     <cluster>            to upgrade the k8s cluster
    destroy     <cluster>            to destroy the k8s cluster
    backup      <cluster>            to backup the cluster state (etcd snapshot)
    restore     <cluster>            to restore the cluster state from backups
    start-aio		             to quickly setup an all-in-one cluster with 'default' settings

Cluster ops:
    add-etcd    <cluster>  <ip>      to add a etcd-node to the etcd cluster
    add-master  <cluster>  <ip>      to add a master node to the k8s cluster
    add-node    <cluster>  <ip>      to add a work node to the k8s cluster
    del-etcd    <cluster>  <ip>      to delete a etcd-node from the etcd cluster
    del-master  <cluster>  <ip>      to delete a master node from the k8s cluster
    del-node    <cluster>  <ip>      to delete a work node from the k8s cluster

Extra operation:
    kcfg-adm    <cluster>  <args>    to manage client kubeconfig of the k8s cluster

Use "ezctl help <command>" for more information about a given command.
[root@k8s-master1 kubeasz]#./ezctl help setup   #View step-by-step installation help
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings 
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
	  ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master

[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 01 #Prepare CA and basic system settings
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 02 #Deploy etcd cluster
#Verify etcd node service status:
[root@k8s-etcd1 ~]#export NODE_IPS="192.168.150.156"
[root@k8s-etcd1 ~]#for ip in ${NODE_IPS}; do
   ETCDCTL_API=3 etcdctl \
   --endpoints=https://${ip}:2379  \
   --cacert=/etc/kubernetes/ssl/ca.pem \
   --cert=/etc/kubernetes/ssl/etcd.pem \
   --key=/etc/kubernetes/ssl/etcd-key.pem \
   endpoint health; done
https://192.168.150.156:2379 is healthy: successfully committed proposal: took = 6.343588ms


#Prepare for the docker deployment step
#Configure the harbor client certificate
[root@k8s-master1 ~]#mkdir /etc/docker/certs.d/harbor.magedu.net -p
[root@k8s-harbor1 ~]#scp /apps/harbor/certs/harbor-ca.crt 192.168.150.151:/etc/docker/certs.d/harbor.magedu.net/
[root@k8s-master1 ~]#echo "192.168.150.154 harbor.magedu.net" >> /etc/hosts #Add harbor domain name resolution
[root@k8s-master1 ~]#systemctl restart docker #Restart docker
[root@k8s-master1 ~]#docker login harbor.magedu.net #Verify login to harbor
admin/123456
#!! Error response from daemon: Get https://harbor.magedu.net/v2/: dial tcp 192.168.150.154:443: connect: connection refused
#Cause: harbor was not running. After running the command below, logging in again succeeds
[root@k8s-harbor1 harbor]#./install.sh --with-trivy

#Script to synchronize the harbor certificate and hosts entry to all docker nodes
[root@k8s-master1 ~]#vim scp_harbor.sh 
#!/bin/bash
#Target host list
IP="
192.168.150.151
192.168.150.152
192.168.150.153
192.168.150.154
192.168.150.155
192.168.150.156
192.168.150.157
192.168.150.158
192.168.150.159
192.168.150.160
192.168.150.161
192.168.150.162
192.168.150.163
"
for node in ${IP};do
  sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
        echo "${node} ssh key copy complete, ready to initialize....."
           ssh ${node} "mkdir -p /etc/docker/certs.d/harbor.magedu.net"
           echo "${node} harbor certificate directory created successfully!"
           scp /etc/docker/certs.d/harbor.magedu.net/harbor-ca.crt ${node}:/etc/docker/certs.d/harbor.magedu.net/harbor-ca.crt
           echo "${node} harbor certificate copy succeeded!"
           ssh ${node} "echo '192.168.150.154 harbor.magedu.net' >> /etc/hosts"
           echo "${node} hosts entry added"
           #scp -r /root/.docker ${node}:/root/
           #echo "Harbor authentication file copy completed!"
  else
        echo "${node} secret key copy fail"
  fi
done
[root@k8s-master1 ~]#sh scp_harbor.sh 
#Test upload pause image to harbor warehouse:
[root@k8s-master1 kubeasz]#docker pull easzlab/pause-amd64:3.4.1
[root@k8s-master1 kubeasz]#docker tag easzlab/pause-amd64:3.4.1 harbor.magedu.net/baseimages/pause-amd64:3.4.1
#Create project baseimages on harbor
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/pause-amd64:3.4.1

[root@k8s-master1 kubeasz]#vim clusters/k8s-01/config.yml 
48 # [containerd] base container image
49 SANDBOX_IMAGE: "harbor.magedu.net/baseimages/pause-amd64:3.4.1"

#Install docker
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 03 
#node validation docker:
[root@k8s-node2 ~]#docker version

#Deploy master node
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 04  
#Verify on the master:
[root@k8s-master1 ~]#kubectl get node
NAME              STATUS                     ROLES    AGE     VERSION
192.168.150.151   Ready,SchedulingDisabled   master   7m29s   v1.21.0

#Deploy node node
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 05  
#Verify on the master:
[root@k8s-master1 ~]#kubectl get node
NAME              STATUS                     ROLES    AGE     VERSION
192.168.150.151   Ready,SchedulingDisabled   master   9m30s   v1.21.0
192.168.150.161   Ready                      node     32s     v1.21.0
192.168.150.162   Ready                      node     32s     v1.21.0

#Deploy network services
[root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 
          image: calico/cni:v3.15.3
          image: calico/pod2daemon-flexvol:v3.15.3
          image: calico/node:v3.15.3
          image: calico/kube-controllers:v3.15.3
[root@k8s-master1 kubeasz]#docker pull calico/cni:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/cni:v3.15.3 harbor.magedu.net/baseimages/calico-cni:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-cni:v3.15.3

[root@k8s-master1 kubeasz]#docker pull calico/pod2daemon-flexvol:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/pod2daemon-flexvol:v3.15.3 harbor.magedu.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-pod2daemon-flexvol:v3.15.3

[root@k8s-master1 kubeasz]#docker pull calico/node:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/node:v3.15.3 harbor.magedu.net/baseimages/calico-node:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-node:v3.15.3

[root@k8s-master1 kubeasz]#docker pull calico/kube-controllers:v3.15.3
[root@k8s-master1 kubeasz]#docker tag calico/kube-controllers:v3.15.3 harbor.magedu.net/baseimages/calico-kube-controllers:v3.15.3
[root@k8s-master1 kubeasz]#docker push harbor.magedu.net/baseimages/calico-kube-controllers:v3.15.3

#Modify the image addresses in the calico template
[root@k8s-master1 kubeasz]#vim roles/calico/templates/calico-v3.15.yaml.j2 
[root@k8s-master1 kubeasz]#grep image roles/calico/templates/calico-v3.15.yaml.j2 
          image: harbor.magedu.net/baseimages/calico-cni:v3.15.3
          image: harbor.magedu.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
          image: harbor.magedu.net/baseimages/calico-node:v3.15.3
          image: harbor.magedu.net/baseimages/calico-kube-controllers:v3.15.3

[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 06  
#!! Error report: fatal: [192.168.150.162]: failed! = > {"attempts": 15, "changed": true, "cmd": "/usr/local/bin/kubectl get pod -n kube-system -o wide|grep 'flannel'|grep ' 192.168.150.162 '|awk '{print $3}'", "delta": "0:00:00.094963", "end": "2021-12-15 16:53:27.424853", "msg": "", "rc": 0, "start": "2021-12-15 16:53:27.329890",  "stderr": "", "stderr_lines": [], "stdout": "Init:0/1",  "stdout_lines": ["Init:0/1"]}
#Resolution: the locally tagged images and the image addresses in roles/calico/templates/calico-v3.15.yaml.j2 were inconsistent; after correcting the image addresses, the step completed successfully

#verification
[root@k8s-master1 kubeasz]#calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.150.161 | node-to-node mesh | up    | 12:15:03 | Established |
| 192.168.150.162 | node-to-node mesh | up    | 12:15:04 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

#Verify node routing
[root@k8s-node1 ~]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.150.2   0.0.0.0         UG    0      0        0 eth0
10.200.36.64    0.0.0.0         255.255.255.192 U     0      0        0 *
10.200.159.128  192.168.150.151 255.255.255.192 UG    0      0        0 tunl0
10.200.169.128  192.168.150.162 255.255.255.192 UG    0      0        0 tunl0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

#Create container to test network communication
[root@k8s-master1 ~]#docker pull alpine
[root@k8s-master1 ~]#docker tag alpine  harbor.magedu.net/baseimages/alpine
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/alpine
[root@k8s-master1 ~]#kubectl run net-test1 --image=harbor.magedu.net/baseimages/alpine sleep 360000
pod/net-test1 created
[root@k8s-master1 ~]#kubectl run net-test2 --image=harbor.magedu.net/baseimages/alpine sleep 360000
pod/net-test2 created
[root@k8s-master1 ~]#kubectl run net-test3 --image=harbor.magedu.net/baseimages/alpine sleep 360000
pod/net-test3 created
[root@k8s-master1 ~]#kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          15s   10.200.36.65     192.168.150.161   <none>           <none>
net-test2   1/1     Running   0          9s    10.200.36.66     192.168.150.161   <none>           <none>
net-test3   1/1     Running   0          3s    10.200.169.129   192.168.150.162   <none>           <none>
[root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.200.36.66
PING 10.200.36.66 (10.200.36.66): 56 data bytes
64 bytes from 10.200.36.66: seq=0 ttl=63 time=0.182 ms
64 bytes from 10.200.36.66: seq=1 ttl=63 time=0.101 ms
/ # ping 10.200.169.129
PING 10.200.169.129 (10.200.169.129): 56 data bytes
64 bytes from 10.200.169.129: seq=0 ttl=62 time=0.423 ms
64 bytes from 10.200.169.129: seq=1 ttl=62 time=0.632 ms

#Cluster add-ons and cluster management
[root@k8s-master1 kubeasz]#./ezctl setup k8s-01 07 #step 07 (cluster-addon) installs the optional plugins enabled in config.yml
#Cluster management covers operations such as adding a master, adding a node, deleting a master and deleting a node, as shown below
#Current cluster status:
[root@k8s-master1 ~]#kubectl get node
NAME              STATUS                     ROLES    AGE   VERSION
192.168.150.151   Ready,SchedulingDisabled   master   13h   v1.21.0
192.168.150.161   Ready                      node     13h   v1.21.0
192.168.150.162   Ready                      node     13h   v1.21.0

#Configure passwordless login for the new nodes and synchronize the docker/harbor certificate to them
[root@k8s-master1 ~]#sh scp.sh 
[root@k8s-master1 ~]#sh scp_harbor.sh 

#Add master
[root@k8s-master1 kubeasz]#./ezctl add-master k8s-01 192.168.150.152
#On the nodes, the local kube-lb config automatically picks up the new master:
[root@k8s-node2 ~]#cat /etc/kube-lb/conf/kube-lb.conf

#Add node
[root@k8s-master1 kubeasz]#./ezctl add-node k8s-01 192.168.150.163

#Verify cluster status
[root@k8s-master1 kubeasz]#kubectl get node
NAME              STATUS                     ROLES    AGE     VERSION
192.168.150.151   Ready,SchedulingDisabled   master   14h     v1.21.0
192.168.150.152   Ready,SchedulingDisabled   master   7m22s   v1.21.0
192.168.150.161   Ready                      node     14h     v1.21.0
192.168.150.162   Ready                      node     14h     v1.21.0
192.168.150.163   Ready                      node     2m3s    v1.21.0
#Verify network component calico status
[root@k8s-master1 ~]#calicoctl node status
Calico process is running.

IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.150.152 | node-to-node mesh | up    | 03:20:49 | Established |
| 192.168.150.161 | node-to-node mesh | up    | 03:21:33 | Established |
| 192.168.150.162 | node-to-node mesh | up    | 03:22:24 | Established |
| 192.168.150.163 | node-to-node mesh | up    | 03:23:28 | Established |
+-----------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
#Verify node routing
[root@k8s-node3 ~]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.150.2   0.0.0.0         UG    0      0        0 eth0
10.200.36.64    192.168.150.161 255.255.255.192 UG    0      0        0 tunl0
10.200.107.192  0.0.0.0         255.255.255.192 U     0      0        0 *
10.200.159.128  192.168.150.151 255.255.255.192 UG    0      0        0 tunl0
10.200.169.128  192.168.150.162 255.255.255.192 UG    0      0        0 tunl0
10.200.224.0    192.168.150.152 255.255.255.192 UG    0      0        0 tunl0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0

#If the deployment went wrong, destroy the cluster and start over:
[root@k8s-master1 kubeasz]#./ezctl destroy k8s-01

5,dns:

Custom DNS Service:
https://kubernetes.io/zh/docs/tasks/administer-cluster/dns-custom-nameservers/
Provides DNS service for the whole cluster so that services can reach each other by name.
Used to resolve a service name in the k8s cluster to its corresponding IP address.
SkyDNS: the first generation, used before v1.2
kube-dns: the second generation, no longer used after v1.18
CoreDNS: used in v1.18 and later

coredns:

https://github.com/coredns/coredns #Older version
https://github.com/coredns/deployment/tree/master/kubernetes #Historical version + latest version

#Where to download the k8s release components:
https://github.com/kubernetes/kubernetes -> releases -> CHANGELOG -> Downloads for v1.21.0:
kubernetes.tar.gz
kubernetes-client-linux-amd64.tar.gz
kubernetes-node-linux-amd64.tar.gz
kubernetes-server-linux-amd64.tar.gz

[root@k8s-master1 ~]#tar xf kubernetes-client-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#tar xf kubernetes-server-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#tar xf kubernetes.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#tar xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/src/
[root@k8s-master1 ~]#cd /usr/local/src/
[root@k8s-master1 src]#ll kubernetes/cluster/addons/dashboard/
total 24
drwxr-xr-x  2 root root   81 Dec  9  2020 ./
drwxr-xr-x 20 root root 4096 Dec  9  2020 ../
-rw-r--r--  1 root root  242 Dec  9  2020 MAINTAINERS.md
-rw-r--r--  1 root root  147 Dec  9  2020 OWNERS
-rw-r--r--  1 root root  281 Dec  9  2020 README.md
-rw-r--r--  1 root root 6878 Dec  9  2020 dashboard.yaml
[root@k8s-master1 src]#ll kubernetes/cluster/addons/dns
total 8
drwxr-xr-x  5 root root   71 Dec  9  2020 ./
drwxr-xr-x 20 root root 4096 Dec  9  2020 ../
-rw-r--r--  1 root root  129 Dec  9  2020 OWNERS
drwxr-xr-x  2 root root  147 Dec  9  2020 coredns/
drwxr-xr-x  2 root root  167 Dec  9  2020 kube-dns/
drwxr-xr-x  2 root root   48 Dec  9  2020 nodelocaldns/
[root@k8s-master1 src]#cd kubernetes/cluster/addons/dns/coredns/
[root@k8s-master1 coredns]#cp coredns.yaml.base /root/
[root@k8s-master1 coredns]#cd
[root@k8s-master1 ~]#mv coredns.yaml.base coredns-n56.yaml

[root@k8s-master1 ~]#vim coredns-n56.yaml 
70  kubernetes magedu.local in-addr.arpa ip6.arpa  #"__DNS__DOMAIN__" must match CLUSTER_DNS_DOMAIN="magedu.local" in /etc/kubeasz/clusters/k8s-01/hosts
135   image: harbor.magedu.net/baseimages/coredns:v1.8.3
139    memory: 256Mi  #memory limit; set it generously
205   clusterIP: 10.100.0.2  #SERVICE_CIDR="10.100.0.0/16" in /etc/kubeasz/clusters/k8s-01/hosts, so the cluster DNS IP here is 10.100.0.2
#Configure Prometheus port exposure:
spec:
  type: NodePort
  ports:
    targetPort: 9153
    nodePort: 30009

#Verify that the pod's DNS server is 10.100.0.2:
[root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # cat /etc/resolv.conf 
nameserver 10.100.0.2
search default.svc.magedu.local svc.magedu.local magedu.local
options ndots:5

#To check how to write a field in a yaml file, use kubectl explain:
[root@k8s-master1 ~]#kubectl explain service.spec.ports

#On a machine with Internet access, download the image:
#docker pull k8s.gcr.io/coredns/coredns:v1.8.3
#docker save k8s.gcr.io/coredns/coredns:v1.8.3 > coredns-v1.8.3.tar.gz
#Upload to master node
[root@k8s-master1 ~]#docker load -i coredns-image-v1.8.3.tar.gz 
85c53e1bd74e: Loading layer [==================================================>]  43.29MB/43.29MB
Loaded image: k8s.gcr.io/coredns/coredns:v1.8.3
[root@k8s-master1 ~]#docker tag k8s.gcr.io/coredns/coredns:v1.8.3 harbor.magedu.net/baseimages/coredns:v1.8.3
#Upload to harbor image warehouse
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/coredns:v1.8.3
#Modify the image address in coredns-n56.yaml to the harbor address:
135   image: harbor.magedu.net/baseimages/coredns:v1.8.3

#Error demonstration
[root@k8s-master1 ~]#kubectl apply -f coredns-n56.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@k8s-master1 ~]#kubectl get pod -A #The official yaml template targets coredns 1.8.0, but the image used here is 1.8.3
kube-system   coredns-7578b5687d-xbzsp                  0/1     Running            0          2m47s
[root@k8s-master1 ~]#kubectl delete -f coredns-n56.yaml 

#Correct demonstration
#Use a configuration file that matches coredns 1.8.3
[root@k8s-master1 ~]#kubectl apply -f coredns-n56.yaml 
serviceaccount/coredns configured
clusterrole.rbac.authorization.k8s.io/system:coredns configured
clusterrolebinding.rbac.authorization.k8s.io/system:coredns configured
configmap/coredns configured
deployment.apps/coredns configured
service/kube-dns configured
[root@k8s-master1 ~]#kubectl get pod -A
kube-system   coredns-6b68dbb944-9cpxp                  1/1     Running            0          13s

#Test external network communication
[root@k8s-master1 ~]#kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping www.baidu.com
PING www.baidu.com (103.235.46.39): 56 data bytes
64 bytes from 103.235.46.39: seq=0 ttl=127 time=176.776 ms
64 bytes from 103.235.46.39: seq=1 ttl=127 time=173.360 ms
/ # ping kubernetes
PING kubernetes (10.100.0.1): 56 data bytes
64 bytes from 10.100.0.1: seq=0 ttl=64 time=1.079 ms
64 bytes from 10.100.0.1: seq=1 ttl=64 time=0.104 ms
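#DNS resolution of service names can also be checked explicitly from the test pod (a sketch using the busybox nslookup inside the alpine image):
/ # cat /etc/resolv.conf                                      # nameserver should be 10.100.0.2
/ # nslookup kubernetes                                       # short name resolves via the search domains
/ # nslookup kubernetes.default.svc.magedu.local 10.100.0.2   # fully qualified lookup against the cluster DNS IP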

[root@k8s-master1 ~]#kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                    AGE
default       kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP                                    37h
kube-system   kube-dns     NodePort    10.100.0.2   <none>        53:54724/UDP,53:54724/TCP,9153:30009/TCP   25m

#Verify the coredns metrics data:
http://192.168.150.161:30009/metrics 
http://192.168.150.162:30009/metrics 
http://192.168.150.163:30009/metrics 
Main configuration parameters (Corefile plugins) in coredns.yaml -- see below for how to inspect them on a running cluster:
errors: #error logs are written to stdout.
health: #the health of CoreDNS is reported at http://localhost:8080/health
cache: #enables the CoreDNS cache.
reload: #automatically reloads the configuration; changes to the ConfigMap take effect within about two minutes.
loadbalance: #round-robins the order of records when a domain name resolves to multiple records.
cache 30 #cache TTL in seconds
kubernetes: #CoreDNS resolves names within the specified cluster domain against Kubernetes Services.
forward: #queries that are not within the Kubernetes cluster domain are forwarded to the specified upstream resolver (/etc/resolv.conf).
prometheus: #CoreDNS metrics are exposed for Prometheus at the coredns service, port 9153, path /metrics.
ready: #once CoreDNS is up, the /ready URL path returns a 200 status code; otherwise an error is returned.
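#These parameters live in the Corefile held in the coredns ConfigMap; to inspect or change them on the running cluster (the ConfigMap name/namespace follow the standard coredns.yaml manifest):
kubectl get configmap coredns -n kube-system -o yaml    #view the Corefile and the plugins configured above
kubectl edit configmap coredns -n kube-system           #edits are picked up by the reload plugin within ~2 minutes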

6,dashboard:

git address: https://github.com/kubernetes/dashboard
Official website: https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/
Dashboard is a web-based Kubernetes user interface. You can use it to get an overview of the applications running in the cluster, create or modify Kubernetes resources (such as Deployments, Jobs, DaemonSets, etc.), scale a Deployment, initiate a rolling upgrade, restart a Pod, or use the wizard to create new applications.

Make sure the dashboard version is compatible with the k8s version:
https://github.com/kubernetes/dashboard -> releases

#Two images are required:
kubernetesui/dashboard:v2.4.0
kubernetesui/metrics-scraper:v1.0.7
[root@k8s-master1 ~]#wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
[root@k8s-master1 ~]#docker pull kubernetesui/dashboard:v2.4.0
[root@k8s-master1 ~]#docker tag kubernetesui/dashboard:v2.4.0 harbor.magedu.net/baseimages/dashboard:v2.4.0
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/dashboard:v2.4.0
[root@k8s-master1 ~]#docker pull kubernetesui/metrics-scraper:v1.0.7
[root@k8s-master1 ~]#docker tag kubernetesui/metrics-scraper:v1.0.7 harbor.magedu.net/baseimages/metrics-scraper:v1.0.7
[root@k8s-master1 ~]#docker push harbor.magedu.net/baseimages/metrics-scraper:v1.0.7

[root@k8s-master1 ~]#vim recommended.yaml
190    image: harbor.magedu.net/baseimages/dashboard:v2.4.0
274    image: harbor.magedu.net/baseimages/metrics-scraper:v1.0.7
#Exposed port
spec:
  type: NodePort
  ports:
    targetPort: 8443
    nodePort: 30002

[root@k8s-master1 ~]#kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

#Visit the dashboard web page:
https://192.168.150.161:30002/

#Create a super-administrator ServiceAccount and obtain its token to log in to the dashboard (a sketch of admin-user.yml follows):
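#admin-user.yml is not reproduced in these notes; a minimal sketch of what it usually contains (the standard dashboard admin-user example: a ServiceAccount bound to the cluster-admin ClusterRole):
[root@k8s-master1 ~]#cat admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard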
[root@k8s-master1 ~]#kubectl apply -f admin-user.yml 
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[root@k8s-master1 ~]#kubectl get secrets -A |grep admin
kubernetes-dashboard   admin-user-token-2w2wj                           kubernetes.io/service-account-token   3      61s
[root@k8s-master1 ~]#kubectl describe secrets admin-user-token-2w2wj -n kubernetes-dashboard
Name:         admin-user-token-2w2wj
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 9f60fa54-916c-49fb-a00f-96303eb3af88

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlpGeHU5ZDVtbGFMTG03bkl1UGxYaVVtWjBtcXgtTVA0Z0NLT1c3UWVvX0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJ3MndqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5ZjYwZmE1NC05MTZjLTQ5ZmItYTAwZi05NjMwM2ViM2FmODgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.SKrxRP5IZ21nwsIML3Ay8DEnewJiHihuxndqE1Z3-Dmx7Rk4r6uD-qH6vspCsbZkD87T75FOOdbSIu-LdBwUR9RSjj_ck2Yt8A_7zloWcBMg3rQ3zKcuGcf1vQpu8OpwNtXmHA3u0BYLXcBP4jk1VWBOXJrQbZ47lx-OSRjbc-W2MAmaP9fNvZZseg_ckzKWfpVFJEr0l4PE2IeIG37RNeJOMzDGUJlCg2zMmjXcbYTvuZdWl9c0Zi1RdXP4AA4IaH9ZVvURIAr39xzkKLqqDh3AVM_duqg-T7HNKOildRvx03scBpk87mh5IFkO1ImeRQfGy2kGfsfI3p4gp1ef2w
#Log in with this token
#Set the lifetime of the token login session (see the sketch below), then re-apply the dashboard yaml:
[root@k8s-master1 ~]#kubectl apply -f dashboard-v2.3.1.yaml
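#The change itself is not shown above; the usual approach (a hedged sketch; --token-ttl is the upstream dashboard argument for this) is to add the flag to the kubernetes-dashboard container args in the deployment yaml before re-applying:
      args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        - --token-ttl=3600        #session token lifetime in seconds (0 = never expire)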

#Log in using kubeconfig
