Kubernetes Installation and Deployment Master

Link to the original text: https://blog.csdn.net/jiangbenchu/article/details/90769198

1. Modify the local /etc/hosts file
# Append (>>) to the /etc/hosts file

cat <<EOF >> /etc/hosts
172.26.48.4    k8s-master
172.26.48.5    k8s-node1
172.26.135.94  k8s-node2
EOF
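A quick way to confirm the entries took effect (a verification step not in the original) is to resolve the names:

```shell
# Check that the new host entries resolve to the IPs added above
getent hosts k8s-master k8s-node1 k8s-node2
```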

2. Configure the domestic Aliyun (Alibaba Cloud) mirror source on CentOS 7
# Overwrite (>) the /etc/yum.repos.d/kubernetes.repo file

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
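Optionally, refresh the yum metadata cache so the new repository is picked up right away (a sanity check not in the original):

```shell
# Rebuild the metadata cache; the kubernetes repo should appear in the list
yum makecache fast
yum repolist | grep -i kubernetes
```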

3. Disable SELinux so that containers can access the host file system.

setenforce 0

setenforce: SELinux is disabled

systemctl daemon-reload
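Note that `setenforce 0` only switches SELinux off until the next reboot. A common companion step, assumed here and not shown in the original, is to make the change persistent in /etc/selinux/config:

```shell
# Persist the SELinux setting across reboots (a .bak backup is written)
sed -i.bak 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Confirm the effective setting in the config file
grep '^SELINUX=' /etc/selinux/config
```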

4. Enable bridged network traffic to pass through iptables (required on RHEL/CentOS 7 systems only)

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
sysctl --system

5. Disable swap - leaving it enabled causes problems when configuring either the master or the nodes

swapoff -a
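`swapoff -a` only lasts until the next reboot. To keep swap disabled permanently, a common follow-up (assumed, not in the original) is to comment out the swap entry in /etc/fstab:

```shell
# Comment out any swap line in /etc/fstab (a .bak backup is written)
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
# Confirm that no swap is currently active
free -h | grep -i swap
```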

6. Install ebtables and ethtool; otherwise kubeadm init will fail later

yum install ebtables ethtool -y

# Then update the current kernel setting. This file only appears after Docker has been installed successfully.

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

7. Install kubelet, kubeadm, kubectl

# Either install the latest versions:
yum install -y kubelet kubeadm kubectl
# Or pin a specific version (1.14.2 here, matching the images prepared below):
yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
systemctl enable kubelet && systemctl start kubelet
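To confirm the pinned versions were installed (a sanity check, not part of the original), the component versions can be printed:

```shell
# All three should report v1.14.2 if the pinned install was used
kubeadm version -o short
kubectl version --client --short
kubelet --version
```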

8. Mirror preparation
Starting the kubernetes services depends on many images hosted on k8s.gcr.io, which cannot be downloaded from within China without a proxy. Instead, we can pull substitute images of the specified version from Docker Hub, then use the docker tag command to rename them to the expected names.

kubeadm config images list

I0524 22:03:10.774681   19610 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0524 22:03:10.774766   19610 version.go:97] falling back to the local client version: v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

9. Create the file setup_image.sh with a script that downloads the images in batches and retags them to match Google's k8s image names.

#!/bin/bash
# Define the array of required images
images=(
    kube-apiserver:v1.14.2
    kube-controller-manager:v1.14.2
    kube-scheduler:v1.14.2
    kube-proxy:v1.14.2
    pause:3.1
    etcd:3.3.10
)
# Loop over the images, pulling each one from the domestic mirror on Docker Hub (https://hub.docker.com)
for img in "${images[@]}"; do
    # Download the image from the domestic source
    docker pull mirrorgooglecontainers/$img
    # Rename the image to the expected k8s.gcr.io name
    docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img
    # Delete the source image
    docker rmi  mirrorgooglecontainers/$img
    # Print a separator
    echo '================'
done

# coredns is not published under mirrorgooglecontainers, so pull it from the coredns repository on Docker Hub instead.
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1
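The script can then be made executable and run; a final listing confirms the retagged images (a usage sketch, not in the original):

```shell
chmod +x setup_image.sh
./setup_image.sh
# All required images should now appear under the k8s.gcr.io name
docker images | grep k8s.gcr.io
```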

10. Common commands of kubeadm

# Start a Kubernetes master node
[root@k8s-master deploy]# kubeadm init
# Start a Kubernetes worker node and join it to the cluster
[root@k8s-master deploy]# kubeadm join
# Update a Kubernetes cluster to a new version
[root@k8s-master deploy]# kubeadm upgrade
# Configure the cluster for the kubeadm upgrade command (needed if the cluster was initialized with kubeadm v1.7.x or lower)
[root@k8s-master deploy]# kubeadm config
# Managing tokens used by kubeadm join
[root@k8s-master deploy]# kubeadm token
# Regenerate a join token and print the full join command
[root@k8s-master deploy]# kubeadm token create --print-join-command
# List the tokens that have not yet expired
[root@k8s-master deploy]# kubeadm token list
# Restore any changes made to the host by kubeadm init or kubeadm join
[root@k8s-master deploy]# kubeadm reset
# Query all pods
[root@k8s-master deploy]# kubectl get pod -A -o wide
# Query all nodes
[root@k8s-master deploy]# kubectl get nodes -o wide
# View the k8s Problem Node Log
[root@k8s-master deploy]# journalctl -f -u kubelet
# View namespaces
[root@k8s-master deploy]# kubectl get namespace

11. Initialize master (normal)
kubeadm init --pod-network-cidr=<pod network CIDR> --kubernetes-version=<k8s version>

kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.2
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.48.4:6443 --token yx9yza.rcb08m1giup70y63 \
    --discovery-token-ca-cert-hash sha256:f6548aa3508014ac5dab129231b54f5085f37fe8e6fc5d362f787be70a1a8a6e
[root@k8s-master deploy]#

11.1. Errors in initializing master

[root@alimaster k8s]# kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.2
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.0-beta5. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

This is because there are not enough CPU cores; in a test environment the check can be ignored:

kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.2 --ignore-preflight-errors=NumCPU

12. After the master has been initialized, use the command kubectl get node to view cluster node information - but you will find there is no node information, only the following error:

[root@k8s-master deploy]# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

13. This happens because the steps printed at the end of kubeadm init were not executed:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
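Alternatively, for the root user, kubeadm's output suggests pointing KUBECONFIG directly at the admin kubeconfig (session-local only, not persisted across logins):

```shell
# Use the admin kubeconfig for the current shell session
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get node
```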

[root@k8s-master deploy]# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   9m8s   v1.14.2
[root@k8s-master deploy]#
[root@k8s-master deploy]# kubectl get pod -A
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-8xbcf   0/1     Pending   0          10s
kube-system   coredns-fb8b8dccf-ztxxg   0/1     Pending   0          10s
kube-system   kube-proxy-kcvph          1/1     Running   0          9s
[root@k8s-master deploy]#

14. Installing pod network add-ons
kubernetes offers a choice of network add-ons, including Calico, Canal, Flannel, Kube-router, Romana, and Weave Net; see the official pod network add-on installation documentation for details. Here we choose Flannel as the network component.
Note: for Flannel to work properly, kubeadm init must have been run with the --pod-network-cidr=10.244.0.0/16 parameter. Flannel works on amd64, arm, arm64 and ppc64le, but for platforms other than amd64 you have to manually download the manifest and replace amd64 with your platform's architecture.

# View the distribution information of the current system
[root@k8s-master deploy]# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.5.1804 (Core)
Release:        7.5.1804
Codename:       Core
[root@k8s-master deploy]#
[root@k8s-master deploy]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
[root@k8s-master deploy]#
[root@k8s-master deploy]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master deploy]#
# You need to wait a little while; keep checking until every pod's status is Running.
[root@k8s-master deploy]# kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-8xbcf              1/1     Running   0          2m8s
kube-system   coredns-fb8b8dccf-ztxxg              1/1     Running   0          2m8s
kube-system   etcd-k8s-master                      1/1     Running   0          81s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          81s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          74s
kube-system   kube-flannel-ds-amd64-hk4wt          1/1     Running   0          51s
kube-system   kube-proxy-kcvph                     1/1     Running   0          2m7s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          69s
[root@k8s-master deploy]#
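Once the flannel pod is Running, the master should shortly transition from NotReady to Ready (the label selector below matches the stock kube-flannel.yml manifest):

```shell
# The flannel daemonset pod and the node status can be checked with:
kubectl get pods -n kube-system -l app=flannel
kubectl get nodes
```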


Added by Kathy on Sun, 08 Sep 2019 15:35:59 +0300