1. Why k8s v1.16.0?
I first tried the latest version, v1.16.2, and could not get it installed: kubeadm init reported many errors, such as "node xxx not found". I reinstalled CentOS 7 several times and still could not solve it. It took a whole day and I almost gave up.
The installation tutorials I later found online were basically all for v1.16.0. I did not want to believe v1.16.2 itself was the problem, so at first I had no plan to downgrade.
Only after v1.16.2 kept failing did I try v1.16.0, and it worked. I am recording the process here so you can avoid the same pitfalls.
In this article, the major steps for installation are as follows:
- Install docker-ce 18.09.9 (all machines)
- Set up the k8s environment prerequisites (all machines)
- Install the k8s v1.16.0 master management node
- Install the k8s v1.16.0 worker node
- Install flannel (master)
One important point before you start: note the IPs your master and node use to communicate with each other, e.g. 192.168.99.104 for my master and 192.168.99.105 for my node. Make sure master and node can ping each other on these two IPs; they are needed to configure k8s below.
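Since everything below depends on these two addresses, it is worth a quick sanity check that they sit on the same subnet before going further. A minimal sketch, assuming a /24 host-only network (the VirtualBox default) and my two example IPs:

```shell
# My example IPs - replace with your own master/node addresses
MASTER_IP=192.168.99.104
NODE_IP=192.168.99.105

# First three octets = the /24 network prefix
net() { echo "$1" | cut -d. -f1-3; }

if [ "$(net $MASTER_IP)" = "$(net $NODE_IP)" ]; then
  echo "same /24: $(net $MASTER_IP).0"
else
  echo "different subnets - check your VM network adapter settings"
fi
```

This only checks the addressing, not reachability, so still run ping in both directions as described above.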
My environment:
- Operating system: Windows 10 (host)
- Virtual machine: VirtualBox
- Linux distribution: CentOS 7
- Linux kernel (viewed with uname -r): 3.10.0-957.el7.x86_64
- IP for master/node communication (master): 192.168.99.104
2. Install docker-ce 18.09.9 (all machines)
Every machine that will run k8s needs docker. Install it with the following commands:
# Tools required by docker
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the Aliyun docker source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Pin docker-ce to this exact version
yum install -y docker-ce-18.09.9-3.el7

# Start docker
systemctl enable docker && systemctl start docker
3. Set up the k8s environment prerequisites (all machines)
Installing k8s requires at least 2 CPUs and 2 GB of memory, which is easy enough to configure in a virtual machine. Then execute the following script to do some preparation. Every machine installing k8s needs this step.
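Before running the script, the CPU/memory requirement can be checked with a short snippet. A sketch; the 2-CPU / ~2 GB thresholds below are my reading of the kubeadm preflight minimums:

```shell
# Check the kubeadm minimums: at least 2 CPUs and about 2 GB of RAM
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "cpus=$cpus mem_kb=$mem_kb"

[ "$cpus" -ge 2 ] || echo "WARN: kubeadm wants at least 2 CPUs"
[ "$mem_kb" -ge 1900000 ] || echo "WARN: kubeadm wants about 2 GB of RAM"
```

If either warning prints, increase the virtual machine's CPU or memory allocation before continuing, or kubeadm init will fail its preflight checks later.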
# Close the firewall
systemctl disable firewalld
systemctl stop firewalld

# Close selinux
# Temporarily disable selinux
setenforce 0
# Permanently close: modify the /etc/sysconfig/selinux file settings
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap partitions
swapoff -a
# Permanently disable: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab

# Modify kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
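If you are nervous about the in-place sed edits in the script, you can dry-run the substitution on a scratch file first. A small sketch of the selinux edit:

```shell
# Dry-run the selinux sed edit on a scratch copy instead of /etc/selinux/config
tmp=$(mktemp)
echo "SELINUX=enforcing" > "$tmp"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$tmp"
result=$(cat "$tmp")
echo "$result"    # prints "SELINUX=disabled"
rm -f "$tmp"
```

The same trick works for the /etc/fstab swap line: copy the file, run the sed expression against the copy, and diff the two before touching the real file.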
4. Install k8s v1.16.0 master management node
If docker is not installed yet, refer back to Step 2 of this article, "Install docker-ce 18.09.9 (all machines)".
If the k8s environment prerequisites are not set, refer back to Step 3 of this article, "Set up the k8s environment prerequisites (all machines)".
Once both steps are confirmed, continue with the steps below.
Install kubeadm, kubelet, kubectl
Since the official k8s packages are hosted on Google's servers, which are not accessible from China, the Aliyun yum mirror is used here.
# Configure the Aliyun k8s source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubectl, kubelet
yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0

# Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
Initialize k8s
The following command downloads the docker images k8s needs. Because the foreign registry cannot be accessed, it pulls from the Aliyun mirror (registry.aliyuncs.com/google_containers).
Another very important point: the --apiserver-advertise-address here must be an IP on which master and node can ping each other. Mine is 192.168.99.104; I was stuck on this for a whole night. Change it to your own IP before executing.
The command will appear stuck at "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'" for about two minutes. Please be patient.
# Download the six docker images the management node needs (view them later with docker images).
# This takes about two minutes and appears stuck at
# "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'".
kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.16.0 \
  --apiserver-advertise-address 192.168.99.104 \
  --pod-network-cidr=10.244.0.0/16 \
  --token-ttl 0
After the installation above completes, you will be prompted to run the following commands. Copy, paste, and execute them.
# When the installation completes, k8s prompts you to run these commands
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
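A quick way to confirm the copy worked is to check the file landed where kubectl expects it. A small sketch:

```shell
# Confirm kubectl will find the admin kubeconfig at the default path
msg="kubeconfig missing - rerun the copy commands above"
[ -f "$HOME/.kube/config" ] && msg="kubeconfig present: $HOME/.kube/config"
echo "$msg"
```

If the file is missing, kubectl commands fail with "The connection to the server localhost:8080 was refused", which is usually a sign this copy step was skipped.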
Save the command for nodes to join the cluster
When kubeadm init succeeds, it prints the command that nodes use to join the cluster. It will be executed on each node, so save it. If you forget it, you can regenerate it with the following command.
kubeadm token create --print-join-command
With that, the master node installation is complete. You can run kubectl get nodes and see the master in the NotReady state; don't worry about that for now.
5. Install k8s v1.16.0 worker node
If docker is not installed yet, refer back to Step 2 of this article, "Install docker-ce 18.09.9 (all machines)".
If the k8s environment prerequisites are not set, refer back to Step 3 of this article, "Set up the k8s environment prerequisites (all machines)".
Once both steps are confirmed, continue with the steps below.
Install kubeadm, kubelet
# Configure the Aliyun k8s source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm and kubelet
yum install -y kubeadm-1.16.0-0 kubelet-1.16.0-0

# Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
Join Cluster
The command to join the cluster is different for everyone. Log in to the master node and run kubeadm token create --print-join-command to get yours, then execute it like this.
# Join the cluster. If you don't know your join command, log in to the master node
# and run kubeadm token create --print-join-command to get it.
kubeadm join 192.168.99.104:6443 --token ncfrid.7ap0xiseuf97gikl \
    --discovery-token-ca-cert-hash sha256:47783e9851a1a517647f1986225f104e81dbfd8fb256ae55ef6d68ce9334c6a2
After successful joining, you can use the kubectl get nodes command on the master node to see the joined node.
6. Install flannel (master machine)
After the steps above, the cluster machines are set up, but they are all in the NotReady state, as shown below. To fix this, flannel needs to be installed on the master machine.
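If you want to script the check rather than eyeball it, NotReady nodes can be filtered out of the kubectl get nodes output with awk. The sketch below runs against hard-coded sample output (my hypothetical two-node cluster) so the filtering itself is visible; the real command is shown in the comment:

```shell
# Sample `kubectl get nodes` output, hard-coded for illustration.
# On a real cluster: kubectl get nodes | awk 'NR>1 && $2=="NotReady" {print $1}'
sample='NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   10m   v1.16.0
node1    NotReady   <none>   2m    v1.16.0'

not_ready=$(echo "$sample" | awk 'NR>1 && $2=="NotReady" {print $1}')
echo "$not_ready"    # prints "master" and "node1", one per line
```

Once flannel is running, the same filter should print nothing.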
Download the official flannel manifest
You would normally use wget on https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, but that address is not accessible from China, so I copied its content. To keep this article from getting too long, it is pasted in the appendix (Step 8) at the end.
The YAML file references an address that is not reachable from China (quay.io); I have changed it to one that is (quay-mirror.qiniu.com). Create a new kube-flannel.yml file and paste in the content.
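If you prefer to start from the official file instead (for example, fetched through a proxy), the same registry swap can be done with one sed. A sketch run against a scratch copy, so nothing real is touched:

```shell
# Swap the unreachable registry for the mirror, shown on a one-line scratch file;
# on the real file it would be: sed -i 's#quay\.io#quay-mirror.qiniu.com#g' kube-flannel.yml
tmp=$(mktemp)
echo "image: quay.io/coreos/flannel:v0.11.0-amd64" > "$tmp"
sed -i 's#quay\.io#quay-mirror.qiniu.com#g' "$tmp"
swapped=$(cat "$tmp")
echo "$swapped"    # prints "image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64"
rm -f "$tmp"
```

Using # as the sed delimiter avoids escaping the slashes in the registry paths.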
Install flannel
kubectl apply -f kube-flannel.yml
7. Great success
At this point the k8s cluster is fully set up. As shown below, the nodes are in the Ready state. Done — time to celebrate!
8. Appendix
These are the contents of the kube-flannel.yml file, with every inaccessible address (quay.io) changed to a domestically accessible one (quay-mirror.qiniu.com). Create a new kube-flannel.yml file and paste in these contents.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg