Kubeadm rapid deployment of a Kubernetes cluster

1, Three deployment modes officially provided by Kubernetes

  • minikube
    Minikube is a tool that quickly runs a single-node Kubernetes cluster locally. It is intended only for trying out Kubernetes or for daily development.
    Deployment address: https://kubernetes.io/docs/setup/minikube/

  • kubeadm
    Kubeadm is a tool that provides kubeadm init and kubeadm join for rapid deployment of Kubernetes clusters.
    Deployment address: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • Binary package
    Download the official release binary packages and manually deploy each component to form a Kubernetes cluster.
    Download address: https://github.com/kubernetes/kubernetes/releases

2, Kubeadm rapid deployment of a K8S cluster

2.1 kubeadm deployment environment preparation

The following operations are performed on all three nodes

2.1.1 Environment and roles

Environment: CentOS 7.4+

IP              Role    Installed software
192.168.56.102  master  kube-apiserver kube-scheduler kube-controller-manager docker flannel kubelet
192.168.56.103  node1   kubelet kube-proxy docker flannel
192.168.56.104  node2   kubelet kube-proxy docker flannel

Note: master hardware requirement: CPU >= 2 cores

2.1.2 Initialize the machine environment
1, Turn off the firewall
# systemctl stop firewalld && systemctl disable firewalld

2, Turn off the swap partition
# swapoff -a                                        # temporary
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab     # permanent

3, Disable SELinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

4, Configure /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.102 master
192.168.56.103 node1
192.168.56.104 node2

5, Set the hostname (run the matching command on master, node1 and node2 respectively)
# hostnamectl set-hostname master
# hostnamectl set-hostname node1
# hostnamectl set-hostname node2
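
Optional sanity check (not part of the original steps; assumes standard CentOS 7 tools) to confirm the settings took effect on every node:
# systemctl is-active firewalld    # should not print "active"
# getenforce                       # Permissive now, Disabled after a reboot
# free -m                          # the Swap line should show 0 total
# hostname                         # should print master, node1 or node2 respectively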

2.1.3 Docker installation
1. Download the Docker repository file
# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install Docker
# yum -y install docker

3. Enable and start the Docker service
# systemctl enable docker && systemctl start docker

4. Check the version
# docker --version
Docker version 1.13.1, build 7d71120/1.13.1
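
Note: although the docker-ce repository was added in step 1, yum -y install docker actually installs the older Docker 1.13 package from the CentOS extras repository (as the version output above shows). If you would rather install docker-ce from the Aliyun repository, an alternative (not what this article uses) would be:
# yum -y install docker-ce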

2.1.4 Install kubeadm, kubelet and kubectl
1. Add the Kubernetes YUM repository
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet and kubectl
# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
# systemctl enable kubelet
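
A quick way to confirm the installed versions (a simple check, not in the original steps):
# kubeadm version -o short
# kubelet --version
# kubectl version --client --short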

2.2 Deploy the Kubernetes Master

The following only needs to be executed on the Master node.

2.2.1 Configure the Alibaba Docker image accelerator
# cat /etc/docker/daemon.json
{
 "registry-mirrors":["https://6kx4zyno.mirror.aliyuncs.com"]
}
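
After editing /etc/docker/daemon.json, restart Docker so the mirror configuration takes effect (implied but not shown above):
# systemctl daemon-reload && systemctl restart docker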
2.2.2 Pull the master images and initialize the cluster

The --apiserver-advertise-address below needs to be changed to your own master address.

kubeadm init \
--apiserver-advertise-address=192.168.56.102  \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

If the operation above goes wrong, you can reset and then initialize again:
# kubeadm reset
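
Optionally, as the preflight message in the output below also suggests, the images can be pulled ahead of time with the same repository and version used by kubeadm init:
# kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0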

Output:

[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.56.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.56.102 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.504274 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ei1jt9.yspq53ljxove1e7g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.102:6443 --token ei1jt9.yspq53ljxove1e7g \
    --discovery-token-ca-cert-hash sha256:1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e

Note: the kubeadm join ... command in the last lines above is how a node joins the cluster. It is valid for 24 hours after kubeadm init; beyond that, the token expires and a new one must be obtained (see 2.3.2).

2.2.3 Configure the master environment

Configure kubectl according to the prompt from 2.2.2:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
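
At this point a quick check (not in the original text) should show the API server responding; the master usually stays NotReady until the network plug-in from section 2.4 is installed:
# kubectl get nodes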

2.3 Deploy the Kubernetes nodes

Register the nodes with the Master.
Format: kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

2.3.1 Within 24h of master deployment

The kubeadm join command above has already been generated by kubeadm init.

Execute the following command on node1 and node2:
# kubeadm join 192.168.56.102:6443 --token ei1jt9.yspq53ljxove1e7g \
    --discovery-token-ca-cert-hash sha256:1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e

Output:

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.3.2 More than 24h after master deployment

If the initial token is more than 24h old, it becomes invalid and a new one must be generated.

1. Check the token (the one below is still valid, with about 20h left before it expires)
#  kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ei1jt9.yspq53ljxove1e7g   20h       2021-06-09T23:47:53-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token.

2. Regenerate a token
# kubeadm token create
0w3a92.ijgba9ia0e3scicg

3. Obtain the sha256 hash of the CA certificate
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e

4. Join the node to the cluster (using the token from step 2 and the hash from step 3)
# kubeadm join 192.168.56.102:6443 --token 0w3a92.ijgba9ia0e3scicg --discovery-token-ca-cert-hash sha256:1cb76c6105563a913bb290aaac1d0f94a68229011237a8dc8e416da31193509e
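
As an alternative to steps 2-3, kubeadm 1.15 can also print a ready-to-use join command in one step:
# kubeadm token create --print-join-command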

2.4 Install the network plug-in

This only needs to be executed on the Master node.

2.4.1 Download the configuration file
# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

The default quay.io image often fails to pull, so the image address has been replaced in the file below (the original image lines are kept as comments). The contents of kube-flannel.yml are as follows:

# cat kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        # image: quay.io/coreos/flannel:v0.11.0-amd64
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: quay.io/coreos/flannel:v0.11.0-amd64
        image: lizhenliang/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

2.4.2 Deploy flannel
# kubectl apply -f kube-flannel.yml

Check the node status of the cluster. After the network plug-in is installed, continue with the following operations only once all nodes show the Ready status below.

# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   3h18m   v1.15.0
node1    Ready    <none>   178m    v1.15.0
node2    Ready    <none>   178m    v1.15.0

# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-6cs28          1/1     Running   0          3h18m
coredns-bccdc95cf-jztbl          1/1     Running   0          3h18m
etcd-master                      1/1     Running   0          3h18m
kube-apiserver-master            1/1     Running   0          3h18m
kube-controller-manager-master   1/1     Running   0          3h17m
kube-flannel-ds-amd64-49w26      1/1     Running   0          152m
kube-flannel-ds-amd64-vtcfw      1/1     Running   0          152m
kube-flannel-ds-amd64-xzg4s      1/1     Running   0          152m
kube-proxy-bz2b7                 1/1     Running   0          179m
kube-proxy-hznlg                 1/1     Running   0          3h18m
kube-proxy-pp9tk                 1/1     Running   0          179m
kube-scheduler-master            1/1     Running   0          3h17m

If all of them show 1/1 Running, the following steps can be performed. If the flannel pods fail, check the network and repeat the following operations:
# kubectl delete -f kube-flannel.yml
Then download the file again with wget, modify the image address, and re-apply:
# kubectl apply -f kube-flannel.yml
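
If the image address needs to be modified before re-applying (for example to the replacement image used in this article), a one-line edit such as the following sketch could be used (adjust the image to whatever registry is reachable for you):
# sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml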

2.5 Test the Kubernetes cluster

Create a pod in the Kubernetes cluster, expose its port, and verify that it can be accessed normally:

# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-dz8g7   1/1     Running   0          152m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        3h21m
service/nginx        NodePort    10.1.54.55   <none>        80:32641/TCP   152m

# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-dz8g7   1/1     Running   0          153m   10.244.1.3   node2   <none>           <none>

You can see that the pod runs on node2.
Access address: http://NodeIP:Port , in this example: http://192.168.56.104:32641 (nginx is accessed successfully).
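
A simple way to verify the access from any machine that can reach the node (check not shown in the original):
# curl -I http://192.168.56.104:32641    # expect an HTTP 200 response from nginx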
