I. Introduction
Kubernetes is one of the hottest technologies at the moment and has become the de facto standard PaaS management platform in the open source community. Most existing articles build a Kubernetes platform on x86; here, the author builds an open-source Kubernetes platform on IBM LinuxONE (s390x).
There are two main ways to build a K8S platform:
- Deploying from binaries, which deepens the understanding of the K8S services step by step.
- kubeadm, the officially recommended automatic deployment tool.
This time the official kubeadm method is used. kubeadm runs K8S's own components as pods managed by K8S itself, while the basic services set up beforehand run as local system services (see the quick check after the component list below).
Components on the master node:
- docker, kubelet, and kubeadm run as local system services
- kube-proxy runs as a pod managed by k8s
- kube-apiserver, kube-controller-manager, etcd, and the other control-plane components are hosted in pods

Components on the node:
- docker and kubelet run as local system services
- kube-proxy runs as a pod managed by k8s
- flannel runs as a pod managed by k8s
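This split is easy to see once the cluster is up (the full procedure follows): the base services appear as systemd units, while kube-proxy, flannel, and the control-plane components appear as pods in the kube-system namespace. A quick optional check, run on the master after installation:

systemctl status docker kubelet --no-pager   # base services run as local system services
kubectl -n kube-system get pods              # control-plane components, kube-proxy and flannel run as pods
ls /etc/kubernetes/manifests/                # static pod manifests written by kubeadm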
II. Installation

1. Environment
The environment can be either a virtual machine or an LPAR. Here I used virtual machines in an OpenStack environment; each virtual machine has 4 vCPUs, 10 GB of memory, and 50 GB of disk.
System version | IP address | host name | K8s version |
---|---|---|---|
Red Hat Enterprise Linux Server release 7.4 | 172.16.35.141 | rhel7-master | 1.17.4 |
Red Hat Enterprise Linux Server release 7.4 | 172.16.35.138 | rhel7-node-1 | 1.17.4 |
Environment preparation:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0 && sysctl -w net.ipv4.ip_forward=1
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
[root@rhel7-master ~]# yum install rng-tools -y
[root@rhel7-master ~]# systemctl enable rngd;systemctl start rngd
2. Install docker package
[root@rhel7-master tmp]# wget ftp://ftp.unicamp.br/pub/linuxpatch/s390x/redhat/rhel7.3/docker-17.05.0-ce-rhel7.3-20170523.tar.gz
[root@rhel7-master tmp]# tar -xvf docker-17.05.0-ce-rhel7.3-20170523.tar.gz
[root@rhel7-master tmp]# cd docker-17.05.0-ce-rhel7.3-20170523
[root@rhel7-master docker-17.05.0-ce-rhel7.3-20170523]# cp docker* /usr/local/bin
[root@rhel7-master docker-17.05.0-ce-rhel7.3-20170523]# cat > /etc/systemd/system/docker.service << EOF
[Unit]
Description=docker
[Service]
User=root
#ExecStart=/usr/bin/docker daemon -s overlay
ExecStart=/usr/local/bin/dockerd
EnvironmentFile=-/etc/sysconfig/docker
[Install]
WantedBy=multi-user.target
EOF
[root@rhel7-master docker-17.05.0-ce-rhel7.3-20170523]# cat > /etc/sysconfig/docker <<EOF
OPTIONS="daemon -s overlay"
EOF
# Start the service
systemctl daemon-reload
systemctl restart docker
[root@rhel7-master tmp]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@rhel7-master tmp]#
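At this point the daemon should be up. As an optional sanity check (not part of the original steps), you can confirm the server version and the overlay storage driver configured above:

docker version --format 'server: {{.Server.Version}}'
docker info 2>/dev/null | grep -i 'storage driver'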
3. Install kubelet, kubeadm, and kubectl
Add the yum repository (both the master and the node need to execute this):
[root@rhel7-master ~]# cat > /etc/yum.repos.d/os.repo <<EOF
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-s390x/
gpgcheck=0
enabled=1
[clefos1]
name=cle1
baseurl=http://download.sinenomine.net/clefos/7.6/
gpgcheck=0
enabled=1
EOF
[root@rhel7-master ~]#
View the package versions provided by the current repo:
yum list kubelet kubeadm kubectl --showduplicates|sort -r
Install the 1.17.4 packages as follows:
[root@rhel-master ~]# yum install kubeadm-1.17.4 kubelet-1.17.4 kubectl-1.17.4 -y
[root@rhel7-master ~]# systemctl enable --now kubelet
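Optionally, confirm that the installed versions match what you expect (a quick check, not in the original steps):

kubeadm version -o short
kubectl version --client --short
kubelet --version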
4. Initialize the cluster with kubeadm
Before running kubeadm, the following preparations are needed (both the master and the node need to execute them; a short sketch of the first three items follows this list):
1. Hostname-based communication between the nodes
2. Time synchronization
3. Firewall turned off
4. Swap off and kernel parameters set:
swapoff -a && sysctl -w vm.swappiness=0 && sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
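The original text does not spell out commands for items 1-3; a minimal sketch, assuming the hostnames and IPs from the table above and the chrony/firewalld tooling shipped with RHEL 7, might look like this:

# 1. Hostname-based communication: add both nodes to /etc/hosts on each machine
cat >> /etc/hosts <<EOF
172.16.35.141 rhel7-master
172.16.35.138 rhel7-node-1
EOF
# 2. Time synchronization via chronyd
systemctl enable --now chronyd
chronyc sources
# 3. Turn the firewall off
systemctl disable --now firewalld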
Check the base docker images needed by kubeadm:
[root@rhel7-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
This lists the docker images we need. For reasons that cannot be described here, k8s.gcr.io is not directly reachable, so we have to obtain the images ourselves. I have uploaded them to my Docker Hub account, and you can pull them from there:
docker pull erickshi/kube-apiserver-s390x:v1.17.4
docker pull erickshi/kube-scheduler-s390x:v1.17.4
docker pull erickshi/kube-controller-manager-s390x:v1.17.4
docker pull erickshi/pause-s390x:3.1
docker pull erickshi/coredns:s390x-1.6.5
docker pull erickshi/etcd:3.4.3-0
docker pull erickshi/pause:3.1
After downloading, retag the images with the names listed above, because those are the names kubeadm will look for:
docker tag erickshi/kube-apiserver-s390x:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
docker tag erickshi/kube-scheduler-s390x:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
docker tag erickshi/kube-controller-manager-s390x:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
docker tag erickshi/pause-s390x:3.1 k8s.gcr.io/pause:3.1
docker tag erickshi/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag erickshi/coredns:s390x-1.6.5 k8s.gcr.io/coredns:1.6.5
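If you prefer, the three control-plane images that share the same naming pattern can be pulled and retagged in one small loop (equivalent to the individual commands above):

for img in kube-apiserver kube-controller-manager kube-scheduler; do
  docker pull erickshi/${img}-s390x:v1.17.4
  docker tag erickshi/${img}-s390x:v1.17.4 k8s.gcr.io/${img}:v1.17.4
done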
In addition, we need the flannel image. The flannel manifest applied later references quay.io/coreos/flannel (as the docker load output further down also shows), so tag it with that name:
docker pull erickshi/flannel:v0.12.0-s390x
docker tag erickshi/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x
Alternatively, download the images from my Baidu cloud disk and import them directly with docker load:
Links: https://pan.baidu.com/s/1E5YLM8LhPvdo1mlSsNPVdg Password: vfis
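Whichever way you obtain the images, it is worth confirming they are all present locally before running kubeadm init (an optional check, not in the original steps):

docker images | grep -E 'k8s.gcr.io|flannel'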
Initialize the master node:
[root@rhel-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
W0327 02:32:15.413161    6817 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on [::1]:53: read udp [::1]:32972->[::1]:53: read: connection refused
W0327 02:32:15.414720    6817 version.go:102] falling back to the local client version: v1.17.4
W0327 02:32:15.414805    6817 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0327 02:32:15.414811    6817 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.05.0-ce. Latest validated version: 19.03
	[WARNING Hostname]: hostname "rhel-master.novalocal" could not be reached
	[WARNING Hostname]: hostname "rhel-master.novalocal": lookup rhel-master.novalocal on [::1]:53: read udp [::1]:33466->[::1]:53: read: connection refused
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [rhel-master.novalocal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.35.141]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [rhel-master.novalocal localhost] and IPs [172.16.35.141 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [rhel-master.novalocal localhost] and IPs [172.16.35.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0327 02:32:22.545271    6817 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0327 02:32:22.546219    6817 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.501743 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node rhel-master.novalocal as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node rhel-master.novalocal as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ckun8n.l8adw68yhpcsdmdu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.35.141:6443 --token ckun8n.l8adw68yhpcsdmdu \
    --discovery-token-ca-cert-hash sha256:ea5a0282fa2582d6b10d4ea29a9b76318d1f023109248172e0820531ac1bef5e
[root@rhel-master ~]#
Copy the authentication file as instructed at the end of the kubeadm init output, then view the current nodes and pods.
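For reference, these are the copy commands printed by kubeadm init above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config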
[root@rhel-master ~]# kubectl get node,pod --all-namespaces
NAME                        STATUS     ROLES    AGE     VERSION
node/rhel-master.novalocal   NotReady   master   2m59s   v1.17.4

NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6955765f44-cbzss                        0/1     Pending   0          2m41s
kube-system   pod/coredns-6955765f44-dxjfb                        0/1     Pending   0          2m41s
kube-system   pod/etcd-rhel-master.novalocal                      1/1     Running   0          2m56s
kube-system   pod/kube-apiserver-rhel-master.novalocal            1/1     Running   0          2m56s
kube-system   pod/kube-controller-manager-rhel-master.novalocal   1/1     Running   0          2m56s
kube-system   pod/kube-proxy-6nmhq                                1/1     Running   0          2m41s
kube-system   pod/kube-scheduler-rhel-master.novalocal            1/1     Running   0          2m56s
[root@rhel-master ~]#
Install flannel (apply the raw manifest URL so kubectl downloads YAML rather than the GitHub HTML page):
[root@rhel-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
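Optionally, you can watch the flannel DaemonSet become ready before re-checking the nodes (the DaemonSet name matches the pod names shown in the listings below):

kubectl -n kube-system rollout status daemonset/kube-flannel-ds-s390x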
View the nodes and pods again:
[root@rhel-master ~]# kubectl get node,pod --all-namespaces
NAME                        STATUS   ROLES    AGE     VERSION
node/rhel-master.novalocal   Ready    master   9m21s   v1.17.4

NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-6955765f44-cbzss                        1/1     Running   0          9m3s
kube-system   pod/coredns-6955765f44-dxjfb                        1/1     Running   0          9m3s
kube-system   pod/etcd-rhel-master.novalocal                      1/1     Running   0          9m18s
kube-system   pod/kube-apiserver-rhel-master.novalocal            1/1     Running   0          9m18s
kube-system   pod/kube-controller-manager-rhel-master.novalocal   1/1     Running   0          9m18s
kube-system   pod/kube-flannel-ds-s390x-zv6xl                     1/1     Running   0          5m54s
kube-system   pod/kube-proxy-6nmhq                                1/1     Running   0          9m3s
kube-system   pod/kube-scheduler-rhel-master.novalocal            1/1     Running   0          9m18s
[root@rhel-master ~]#
You can see that the node has become Ready and the kube-system pods are all running. The master node installation is done, but it is only part of the process; next, install the worker node.
Install the worker node
Preconditions:
See the environment preparation above (the same steps must be run on the node).
Then join the node to the cluster with the kubeadm join command printed by kubeadm init:
kubeadm join 172.16.35.141:6443 --token ckun8n.l8adw68yhpcsdmdu \
    --discovery-token-ca-cert-hash sha256:ea5a0282fa2582d6b10d4ea29a9b76318d1f023109248172e0820531ac1bef5e
W0330 03:22:57.551161    2546 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.05.0-ce. Latest validated version: 19.03
	[WARNING Hostname]: hostname "rhel-node-1.novalocal" could not be reached
	[WARNING Hostname]: hostname "rhel-node-1.novalocal": lookup rhel-node-1.novalocal on [::1]:53: read udp [::1]:48786->[::1]:53: read: connection refused
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@rhel-node-1 ~]#
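If you join a node later and the bootstrap token has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command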
Now look at the nodes again:
[root@rhel-master ~]# kubectl get node
NAME                    STATUS     ROLES    AGE     VERSION
rhel-master.novalocal   Ready      master   8m47s   v1.17.4
rhel-node-1.novalocal   NotReady   <none>   8m7s    v1.17.4
rhel-node-1 is not Ready yet because its network is not up, so let's look at the pod status:
[root@rhel-master ~]# kubectl get pod --all-namespaces -owide
NAMESPACE     NAME                                            READY   STATUS              RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-9j5mc                        1/1     Running             0          9m14s   10.244.0.3      rhel-master.novalocal   <none>           <none>
kube-system   coredns-6955765f44-sjsjs                        1/1     Running             0          9m14s   10.244.0.2      rhel-master.novalocal   <none>           <none>
kube-system   etcd-rhel-master.novalocal                      1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-apiserver-rhel-master.novalocal            1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-controller-manager-rhel-master.novalocal   1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-ftz9h                     1/1     Running             0          8m19s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-nl5q4                     0/1     Init:ErrImagePull   0          6m37s   172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-proxy-5vtcq                                1/1     Running             0          9m14s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-proxy-6qfc6                                1/1     Running             0          8m54s   172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-scheduler-rhel-master.novalocal            1/1     Running             0          9m31s   172.16.35.141   rhel-master.novalocal   <none>           <none>
[root@rhel-master ~]#
You can see that the flannel pod on rhel-node-1 is not ready; its status is Init:ErrImagePull, an image pull problem, so I import the image manually on that node:
[root@rhel-node-1 ~]# docker load < flannelv0.12.0-s390x.tar 1f106b41b4d6: Loading layer 5.916MB/5.916MB 271ca11ef489: Loading layer 3.651MB/3.651MB fbd88a276dca: Loading layer 10.77MB/10.77MB 3b7ae8a9c323: Loading layer 2.332MB/2.332MB 4c4bfa1b47e6: Loading layer 35.23MB/35.23MB b67de7789e55: Loading layer 5.12kB/5.12kB Loaded image: quay.io/coreos/flannel:v0.12.0-s390x [root@rhel-node-1 ~]#
View the pod status again:
[root@rhel-master ~]# kubectl get pod --all-namespaces -owide
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE     IP              NODE                    NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-9j5mc                        1/1     Running   0          10m     10.244.0.3      rhel-master.novalocal   <none>           <none>
kube-system   coredns-6955765f44-sjsjs                        1/1     Running   0          10m     10.244.0.2      rhel-master.novalocal   <none>           <none>
kube-system   etcd-rhel-master.novalocal                      1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-apiserver-rhel-master.novalocal            1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-controller-manager-rhel-master.novalocal   1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-ftz9h                     1/1     Running   0          9m53s   172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-flannel-ds-s390x-nl5q4                     1/1     Running   0          8m11s   172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-proxy-5vtcq                                1/1     Running   0          10m     172.16.35.141   rhel-master.novalocal   <none>           <none>
kube-system   kube-proxy-6qfc6                                1/1     Running   0          10m     172.16.35.138   rhel-node-1.novalocal   <none>           <none>
kube-system   kube-scheduler-rhel-master.novalocal            1/1     Running   0          11m     172.16.35.141   rhel-master.novalocal   <none>           <none>
[root@rhel-master ~]#
The core components of k8s have been deployed!