k8s deployment -- node component deployment

Introduction to the kubelet component

  • Kubernetes is a distributed cluster management system. Every node needs to run an agent that manages the life cycle of the containers on that node; this agent is the kubelet.
  • The kubelet's main function is to obtain the desired state of the pods/containers on its node (which containers to run, how many replicas, how the network and storage should be configured, etc.), normally from the apiserver, and to call the corresponding container runtime interface to bring the node to that state.

kubelet component features

  • Regularly reports the current node's status to the apiserver so it can be used for scheduling
  • Garbage-collects images and containers so that images do not fill the node's disk and exited containers do not hold on to too many resources
  • Runs an HTTP server that exposes node and pod information to the outside; in debug mode it also exposes debugging information (see the example below)
  • Etc.
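  • As an example of that HTTP interface, the read-only port (10255 in the kubelet.config used later in this article) can be queried from any host that can reach the node. This is only a sketch for verification; which endpoints are exposed depends on the kubelet version and configuration:
    curl -s http://192.168.80.13:10255/pods | head -c 300        # JSON description of the pods running on the node
    curl -s http://192.168.80.13:10255/metrics | head            # kubelet metrics in Prometheus format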

Main functions of kubelet

  • Pod management
  • Container health check
  • Container monitoring

Introduction to the kube-proxy component

  • Implements the Pod network proxy on each node, maintaining the service network rules and layer-4 (TCP/UDP) load balancing (a quick way to inspect these rules is shown below)
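  • This deployment runs kube-proxy in ipvs mode (see the kube-proxy options further down), so once the service is running the layer-4 forwarding rules it maintains can be inspected directly on a node. A minimal sketch, assuming the ipvsadm tool is installed:
    yum install -y ipvsadm           # install the IPVS inspection tool if it is missing
    ipvsadm -Ln                      # list IPVS virtual servers (service addresses) and their backends
    ip addr show kube-ipvs0          # dummy interface on which kube-proxy binds the service IPs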

Experimental deployment

Experimental environment

  • Master01:192.168.80.12
  • Node01:192.168.80.13
  • Node02:192.168.80.14
  • This experiment continues from the previous article (master node deployment); the environment is unchanged. This article mainly deploys the kubelet and kube-proxy components on the node servers

kubelet component deployment

  • master01 server operation
    [root@master01 k8s]# cd /root/k8s/kubernetes/server/bin        //Enter the bin directory of the unpacked kubernetes software package
    [root@master01 bin]# ls
    apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
    cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
    cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
    cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
    hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
    kubeadm                              kubectl                             kube-scheduler.tar
    kube-apiserver                       kubelet                             mounter
    [root@master01 bin]# scp kubelet kube-proxy root@192.168.80.13:/opt/kubernetes/bin/        //Copy kubelet and kube-proxy to the node
    root@192.168.80.13's password:
    kubelet                                                                    100%  168MB  91.4MB/s   00:01
    kube-proxy                                                                 100%   48MB  71.8MB/s   00:00
    [root@master01 bin]# scp kubelet kube-proxy root@192.168.80.14:/opt/kubernetes/bin/
    root@192.168.80.14's password:
    kubelet                                                                    100%  168MB 122.5MB/s   00:01
    kube-proxy                                                                 100%   48MB  95.2MB/s   00:00
    [root@master01 bin]# scp /mnt/node.zip root@192.168.80.13:/root        //Copy the archive mounted from the host to node01
    root@192.168.80.13's password:
    node.zip                                                                   100% 1240     4.1KB/s     00:00
  • node01 node operation
    [root@node01 ~]# ls
    anaconda-ks.cfg  flannel.sh  flannel-v0.10.0-linux-amd64.tar.gz  node.zip  README.md
    [root@node01 ~]# unzip node.zip        //Unpack the archive
    Archive:  node.zip
    inflating: proxy.sh
    inflating: kubelet.sh
  • master01 node operation

    [root@master01 bin]# cd /root/k8s/
    [root@master01 k8s]# mkdir kubeconfig        //Create the configuration file directory
    [root@master01 k8s]# cd kubeconfig
    [root@master01 kubeconfig]# cp /mnt/kubeconfig.sh /root/k8s/kubeconfig        //Copy the script into the configuration file directory
    [root@master01 kubeconfig]# mv kubeconfig.sh kubeconfig        //Rename it
    [root@master01 kubeconfig]# vim kubeconfig        //Edit the file
    # Create TLS Bootstrapping Token
    #BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
    
    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
    //Delete this token-generation section; token.csv was already created during the master deployment
    ...
    :wq
    [root@master01 kubeconfig]# cat /opt/kubernetes/cfg/token.csv        //Check the existing token file to obtain the token value
    c37758077defd4033bfe95a071689272,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    [root@master01 kubeconfig]# vim kubeconfig
    ...
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
    --token=c37758077defd4033bfe95a071689272 \            //Replace the ${BOOTSTRAP_TOKEN} variable with the token value obtained from token.csv above
    --kubeconfig=bootstrap.kubeconfig
    ...
    :wq
    [root@master01 kubeconfig]# vim /etc/profile        //Edit the file to set the environment variable
    ...
    export PATH=$PATH:/opt/kubernetes/bin/
    :wq
    [root@master01 kubeconfig]# source /etc/profile        //Reload the profile so the new PATH takes effect
    [root@master01 kubeconfig]# kubectl get cs        //Check the component status and make sure the cluster is healthy
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    etcd-1               Healthy   {"health":"true"}
    etcd-2               Healthy   {"health":"true"}
    [root@master01 kubeconfig]# bash kubeconfig 192.168.80.12 /root/k8s/k8s-cert/        //Run the script to generate the kubeconfig files
    Cluster "kubernetes" set.
    User "kubelet-bootstrap" set.
    Context "default" created.
    Switched to context "default".
    Cluster "kubernetes" set.
    User "kube-proxy" set.
    Context "default" created.
    Switched to context "default".
    [root@master01 kubeconfig]# ls
    bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig         //The two kubeconfig files have been generated
    [root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.80.13:/opt/kubernetes/cfg        //Copy the generated configuration files to the node
    root@192.168.80.13's password:
    bootstrap.kubeconfig                                                       100% 2167     1.1MB/s   00:00
    kube-proxy.kubeconfig                                                      100% 6269     7.1MB/s   00:00
    [root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.80.14:/opt/kubernetes/cfg/
    root@192.168.80.14's password:
    bootstrap.kubeconfig                                                       100% 2167     1.6MB/s   00:00
    kube-proxy.kubeconfig                                                      100% 6269     4.5MB/s   00:00
    [root@master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap        //Create a binding that grants the kubelet-bootstrap user permission to connect to the apiserver and request certificate signing (key step)
    clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
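  • The body of the kubeconfig script is not reproduced above, but its output shows that it builds the two files with kubectl config. Below is a minimal sketch of the bootstrap.kubeconfig half, assuming the apiserver listens on port 6443 and the CA certificate sits in the directory passed as the second argument; the variable names are illustrative. kube-proxy.kubeconfig is built the same way, using the kube-proxy client certificate and key instead of the token.
    APISERVER=$1                                       # e.g. 192.168.80.12
    SSL_DIR=$2                                         # directory holding ca.pem from the master deployment
    KUBE_APISERVER="https://${APISERVER}:6443"

    # point the file at the cluster, trusting the cluster CA
    kubectl config set-cluster kubernetes \
      --certificate-authority=${SSL_DIR}/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig

    # authenticate as kubelet-bootstrap with the token from token.csv
    kubectl config set-credentials kubelet-bootstrap \
      --token=c37758077defd4033bfe95a071689272 \
      --kubeconfig=bootstrap.kubeconfig

    # tie the cluster and the user together and make it the default context
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig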
  • node01 node operation
    [root@node01 ~]# ls /opt/kubernetes/cfg        //Check that the files were copied successfully
    bootstrap.kubeconfig  flanneld  kube-proxy.kubeconfig
    [root@node01 ~]# bash kubelet.sh 192.168.80.13        //Run the script to generate kubelet's configuration file and systemd unit, and start the service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@node01 ~]# systemctl status kubelet.service        //Check whether the service started
    ● kubelet.service - Kubernetes Kubelet
    Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2020-02-10 14:17:12 CST; 1min 45s ago      //Running successfully
    Main PID: 79678 (kubelet)
    Memory: 14.2M
    ...
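  • The content of kubelet.sh is not listed in this article. Judging from the systemctl output above and the option file edited later on node02, the unit it installs at /usr/lib/systemd/system/kubelet.service presumably looks roughly like the sketch below; beyond the paths already seen in this deployment, the details are assumptions:
    # /usr/lib/systemd/system/kubelet.service (approximate content)
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service

    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet           # the KUBELET_OPTS file shown in the node02 section
    ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target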
  • master01 server operation
    [root@master01 kubeconfig]# kubectl get csr        //Check whether node01 has submitted a certificate signing request
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-WQGufSR06MTCWv0Neu0AexyqBZ1UgFDM1qdSziNEq_w   3m16s   kubelet-bootstrap   Pending
    [root@master01 kubeconfig]# kubectl certificate approve node-csr-WQGufSR06MTCWv0Neu0AexyqBZ1UgFDM1qdSziNEq_w        //Approve node01's certificate signing request
    certificatesigningrequest.certificates.k8s.io/node-csr-WQGufSR06MTCWv0Neu0AexyqBZ1UgFDM1qdSziNEq_w approved
    [root@master01 kubeconfig]# kubectl get csr        //Check the request status again after approval
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-WQGufSR06MTCWv0Neu0AexyqBZ1UgFDM1qdSziNEq_w   4m40s   kubelet-bootstrap   Approved,Issued   //Has been allowed to join the cluster
    [root@master01 kubeconfig]# kubectl get node        //List the cluster nodes; node01 has joined successfully
    NAME            STATUS   ROLES    AGE   VERSION
    192.168.80.13   Ready    <none>   78s   v1.12.3
  • node01 node operation
    [root@node01 ~]# bash proxy.sh 192.168.80.13        //Run the script to generate the kube-proxy configuration file and start the kube-proxy service
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    [root@node01 ~]# systemctl status kube-proxy.service        //Check whether the service started
    ● kube-proxy.service - Kubernetes Proxy
    Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2020-02-10 14:23:59 CST; 1min 2s ago   //Started successfully
    Main PID: 80889 (kube-proxy)
    ...
    [root@node01 ~]# scp -r /opt/kubernetes/ root@192.168.80.14:/opt        //Copy the existing /opt/kubernetes directory to node02 for later modification
    The authenticity of host '192.168.80.14 (192.168.80.14)' can't be established.
    ECDSA key fingerprint is SHA256:Ih0NpZxfLb+MOEFW8B+ZsQ5R8Il2Sx8dlNov632cFlo.
    ECDSA key fingerprint is MD5:a9:ee:e5:cc:40:c7:9e:24:5b:c1:cd:c1:7b:31:42:0f.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.80.14' (ECDSA) to the list of known hosts.
    root@192.168.80.14's password:
    flanneld                                                                   100%  235   139.5KB/s   00:00
    bootstrap.kubeconfig                                                       100% 2167     4.6MB/s   00:00
    kube-proxy.kubeconfig                                                      100% 6269    14.2MB/s   00:00
    kubelet                                                                    100%  377   430.7KB/s   00:00
    kubelet.config                                                             100%  267   262.3KB/s   00:00
    kubelet.kubeconfig                                                         100% 2296     3.3MB/s   00:00
    kube-proxy                                                                 100%  189   299.2KB/s   00:00
    mk-docker-opts.sh                                                          100% 2139     2.3MB/s   00:00
    scp: /opt//kubernetes/bin/flanneld: Text file busy        //flanneld is already running on node02, so its binary cannot be overwritten; the message can be ignored
    kubelet                                                                    100%  168MB 134.1MB/s   00:01
    kube-proxy                                                                 100%   48MB 129.8MB/s   00:00
    kubelet.crt                                                                100% 2185     3.3MB/s   00:00
    kubelet.key                                                                100% 1675     2.8MB/s   00:00
    kubelet-client-2020-02-10-14-21-18.pem                                     100% 1273   608.4KB/s   00:00
    kubelet-client-current.pem                                                 100% 1273   404.9KB/s   00:00
    [root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.80.14:/usr/lib/systemd/system/        //Copy the kubelet and kube-proxy service unit files to node02
    root@192.168.80.14's password:
    kubelet.service                                                            100%  264   350.1KB/s   00:00
    kube-proxy.service                                                         100%  231   341.5KB/s    00:00
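  • The file list above also shows the client certificate that the TLS bootstrap issued to node01 (kubelet-client-current.pem). If you want to confirm it on the node, a standard openssl check works; the paths match this deployment:
    ls /opt/kubernetes/ssl/        # kubelet-client-current.pem was issued automatically during the TLS bootstrap
    openssl x509 -in /opt/kubernetes/ssl/kubelet-client-current.pem -noout -subject -dates        # print the certificate subject and validity period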
  • Operation on node02
    [root@node02 ~]# cd /opt/kubernetes/ssl        //Enter the certificate directory copied from node01
    [root@node02 ssl]# rm -rf *        //Delete the copied certificates; node02 will apply for its own later
    [root@node02 ssl]# cd ../cfg/        //Enter the configuration file directory
    [root@node02 cfg]# vim kubelet        //Modify the kubelet options file
    KUBELET_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=192.168.80.14 \        //Modify IP address
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --config=/opt/kubernetes/cfg/kubelet.config \
    --cert-dir=/opt/kubernetes/ssl \
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
    :wq
    [root@node02 cfg]# vim kubelet.config        //Modify the kubelet configuration file
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 192.168.80.14                       //Modify IP address
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.0.0.2
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true
    :wq
    [root@node02 cfg]# vim kube-proxy        //Modify the kube-proxy configuration file
    KUBE_PROXY_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=192.168.80.14 \      //Modify IP address
    --cluster-cidr=10.0.0.0/24 \
    --proxy-mode=ipvs \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    :wq
    [root@node02 cfg]# systemctl start kubelet.service        //Start the kubelet service
    [root@node02 cfg]# systemctl enable kubelet.service        //Enable start on boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@node02 cfg]# systemctl start kube-proxy.service        //Start the kube-proxy service
    [root@node02 cfg]# systemctl enable kube-proxy.service        //Enable start on boot
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
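  • Until the master approves node02's certificate request, the node will not register. While waiting, the services on node02 can be watched with the standard systemd tools (nothing specific to this deployment):
    journalctl -u kubelet.service -f         # follow the kubelet log while its bootstrap CSR is pending
    journalctl -u kube-proxy.service -n 20   # show the last 20 lines of the kube-proxy log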
  • master01 node operation
    [root@master01 kubeconfig]# kubectl get csr        //View the node certificate requests
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-WQGufSR06MTCWv0Neu0AexyqBZ1UgFDM1qdSziNEq_w   22m     kubelet-bootstrap   Approved,Issued
    node-csr-jUI3h8Ae2tC5OmihpylXEVlMiJnNO117Z1OgpopxAA0   4m54s   kubelet-bootstrap   Pending    //Wait for the cluster to issue a certificate to the node
    [root@master01 kubeconfig]# kubectl certificate approve node-csr-jUI3h8Ae2tC5OmihpylXEVlMiJnNO117Z1OgpopxAA0        //Approve node02's request to join the cluster
    certificatesigningrequest.certificates.k8s.io/node-csr-jUI3h8Ae2tC5OmihpylXEVlMiJnNO117Z1OgpopxAA0 approved
    [root@master01 kubeconfig]# kubectl get csr        //View the node requests again
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-WQGufSR06MTCWv0Neu0AexyqBZ1UgFDM1qdSziNEq_w   23m     kubelet-bootstrap   Approved,Issued
    node-csr-jUI3h8Ae2tC5OmihpylXEVlMiJnNO117Z1OgpopxAA0   5m58s   kubelet-bootstrap   Approved,Issued   //Successfully joined
    [root@master01 kubeconfig]# kubectl get node        //View the nodes in the cluster
    NAME            STATUS   ROLES    AGE   VERSION
    192.168.80.13   Ready    <none>   20m   v1.12.3
    192.168.80.14   Ready    <none>   76s   v1.12.3   //Successfully joined the node

    Node deployment is complete.
