Installing Kubernetes using kubeadm

k8s installation

kubeadm / kubectl / kubelet installation

  • 1. Update the apt package index and install the packages required to use the Kubernetes apt repository

    sudo apt-get update
    sudo apt-get install -y apt-transport-https ca-certificates curl
    
  • 2. Download the Google Cloud public signing key:

    sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
    

    If the command fails, you can download https://packages.cloud.google.com/apt/doc/apt-key.gpg manually and then copy the downloaded GPG key to /usr/share/keyrings/kubernetes-archive-keyring.gpg

  • 3. Add the Kubernetes apt repository:

    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    

    If you cannot reach the Google package servers directly, use the USTC mirror instead:

    echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    
  • 4. Update the apt package index, install kubelet, kubeadm, and kubectl, and pin their versions

    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
    
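To confirm the installation and the version pin took effect, you can run a quick check (a minimal sketch; the exact version strings printed will depend on your system):

```shell
# Print the installed client versions
kubeadm version -o short
kubectl version --client

# Show packages held back from upgrades; should list kubelet, kubeadm, kubectl
apt-mark showhold
```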

Installing a Kubernetes cluster using kubeadm

Initialize the master node
kubeadm init

Because canal will be used, a network configuration parameter must be added during initialization: set the Kubernetes pod subnet to 10.244.0.0/16. Do not change this value to another address, because it must match the value in the canal YAML applied later; if you do change it, change both together.

The Alibaba Cloud image repository is used here; otherwise the images hosted outside China cannot be pulled.

kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
Problems encountered during init
  • Q1: kubelet isn't running

    It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
    

    Solution: edit /etc/docker/daemon.json and add the following:

    {
        "exec-opts": ["native.cgroupdriver=systemd"]
    }
    

    Then run:

     sudo systemctl daemon-reload
     sudo systemctl restart docker
     sudo systemctl restart kubelet
    
  • Q2: error execution phase preflight: [preflight] Some fatal errors occurred

    [init] Using Kubernetes version: v1.23.2
    [preflight] Running pre-flight checks
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR Port-6443]: Port 6443 is in use
            [ERROR Port-10259]: Port 10259 is in use
            [ERROR Port-10257]: Port 10257 is in use
            [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
            [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
            [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
            [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
            [ERROR Port-10250]: Port 10250 is in use
            [ERROR Port-2379]: Port 2379 is in use
            [ERROR Port-2380]: Port 2380 is in use
            [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    

    Solution: execute the following command

    kubeadm reset     
    # answer y to the confirmation prompts
    

    After the reset, execute kubeadm init again

    kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
    

Output after init succeeds:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.16.4:6443 --token e14627.cbl6ghqr2wdi6vt3 \
        --discovery-token-ca-cert-hash sha256:929611f9888cff770c02888f9d02d7e8a4cf121641885a3a78219567127f9593
Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now you can run kubectl commands

root@VM-16-4-ubuntu:~# kubectl get node
NAME             STATUS     ROLES                  AGE    VERSION
vm-16-4-ubuntu   NotReady   control-plane,master   125m   v1.23.2
Joining worker nodes to the cluster
kubeadm join

Run the join command printed by the successful kubeadm init on each worker node:

kubeadm join 10.0.16.4:6443 --token e14627.cbl6ghqr2wdi6vt3 \
        --discovery-token-ca-cert-hash sha256:929611f9888cff770c02888f9d02d7e8a4cf121641885a3a78219567127f9593
Problems encountered during join
  • [preflight] Some fatal errors occurred

    [preflight] Running pre-flight checks
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
            [ERROR Port-10250]: Port 10250 is in use
            [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    

    This happens because the node has previously joined a cluster or run init. To join again, first execute

    kubeadm reset  
    

    Then run the join command again
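If the original token has expired (bootstrap tokens created by kubeadm init are valid for 24 hours by default), you can print a fresh join command on the master:

```shell
# Run on the master: creates a new bootstrap token and prints
# a complete "kubeadm join ..." command including the CA cert hash
kubeadm token create --print-join-command
```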

Allow the master to also run pods
kubectl taint nodes --all node-role.kubernetes.io/master-
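To confirm the NoSchedule taint was actually removed, you can inspect the node taints (a quick check; the output should no longer list node-role.kubernetes.io/master):

```shell
# List the taints on all nodes; an untainted master shows "Taints: <none>"
kubectl describe nodes | grep -i taint
```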
Install a network plug-in

After the cluster is installed, the master node stays NotReady and the CoreDNS pods are stuck in Pending. This is because CoreDNS cannot be scheduled until a network plug-in is installed.

root@VM-16-4-ubuntu:/usr/local/bin# kubectl get pod -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   coredns-65c54cc984-9x9zs                 0/1     Pending   0          139m
kube-system   coredns-65c54cc984-gj9c7                 0/1     Pending   0          139m
kube-system   etcd-vm-16-4-ubuntu                      1/1     Running   0          139m
kube-system   kube-apiserver-vm-16-4-ubuntu            1/1     Running   0          139m
kube-system   kube-controller-manager-vm-16-4-ubuntu   1/1     Running   0          139m
kube-system   kube-proxy-m4jlm                         1/1     Running   0          139m
kube-system   kube-scheduler-vm-16-4-ubuntu            1/1     Running   0          139m

The following network plug-ins are currently available:

  • Flannel: an overlay network provider for Kubernetes
  • Calico: a secure L3 network and network policy provider
  • Canal: combines Flannel and Calico to provide networking and network policy
  • Weave: provides networking and network policy, works on both sides of a network partition, and does not require an external database

For more information, please visit the official website documentation: https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/

Install Flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

You can see the following feedback:

root@VM-16-4-ubuntu:~# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Then check the node status again; the master node has become Ready

root@VM-16-4-ubuntu:~# kubectl get node
NAME             STATUS   ROLES                  AGE    VERSION
vm-16-4-ubuntu   Ready    control-plane,master   3d1h   v1.23.2
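You can also confirm that the flannel DaemonSet pods are running and that CoreDNS has left the Pending state (a quick check; pod names will differ on your cluster):

```shell
# The kube-flannel-ds pod should be Running, and the two
# coredns pods should move from Pending to Running
kubectl get pods -n kube-system -o wide
```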


Added by greenie2600 on Tue, 25 Jan 2022 23:51:23 +0200