Deploying a production Kubernetes cluster with KubeKey

KubeKey is a Kubernetes cluster deployment tool developed by the KubeSphere team in Go. With KubeKey, you can easily, efficiently, and flexibly install Kubernetes and KubeSphere, either individually or together.

KubeKey can be used in three ways:

  • Install Kubernetes only
  • Install Kubernetes and KubeSphere with one command
  • First install Kubernetes, then deploy KubeSphere on it using ks-installer

Important: KubeKey helps you install Kubernetes. If you already have a Kubernetes cluster, refer to Install KubeSphere on Kubernetes.

Advantages

  • Ansible-based installers have many software dependencies, such as Python. KubeKey is developed in Go, which eliminates problems caused by differing environments and improves the installation success rate.
  • KubeKey uses kubeadm and installs Kubernetes on the nodes in parallel as far as possible, reducing installation complexity and improving efficiency. Compared with earlier installers, it greatly reduces installation time.
  • KubeKey supports scaling a cluster from all-in-one to a multi-node or even an HA cluster.
  • KubeKey aims to operate the cluster as an object, i.e. CaaO (Cluster as an Object).

Reference: https://github.com/kubesphere/kubekey

Deploying a Kubernetes cluster with KubeKey

Download the kk deployment tool

wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz
tar -zxvf kubekey-v1.0.0-linux-amd64.tar.gz
mv kk /usr/local/bin/
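
Optionally verify the installation. The chmod step is only needed if the extracted binary is not already executable, and the version check assumes the kk binary provides a version subcommand:

chmod +x /usr/local/bin/kk
kk version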

Deploy an all-in-one single-node cluster

Install dependencies (optional)

yum install -y socat conntrack

Deploy a single-node Kubernetes cluster only

kk create cluster

Deploy Kubernetes and KubeSphere at the same time. You can specify the versions of Kubernetes and KubeSphere:

kk create cluster --with-kubernetes v1.18.6 --with-kubesphere v3.0.0

Deploying a multi-node cluster

Prepare the environment and confirm that time is synchronized across the nodes. There is no need to configure hostnames; KubeKey will set them automatically.

yum install -y chrony
systemctl enable --now chronyd
timedatectl set-timezone Asia/Shanghai
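
Optionally confirm on each node that time synchronization is active; both commands below are standard chrony/systemd tools:

chronyc sources
timedatectl status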

Create a sample configuration file

kk create config

Create a sample configuration file for deploying Kubernetes and KubeSphere at the same time. You can specify the versions, the configuration file name, and the save path:

kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f /root/config-sample.yaml
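
Since the example below builds only a Kubernetes cluster, you can also generate the configuration file without KubeSphere; the version shown is illustrative and matches the sample configuration:

kk create config --with-kubernetes v1.17.9 -f /root/config-sample.yaml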

Modify the configuration file config-sample.yaml according to your environment. The following example deploys three master nodes and one worker node (KubeSphere is not deployed; only a Kubernetes cluster is built):

# vim config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.1.110, internalAddress: 192.168.1.110, user: root, password: 123456}
  - {name: node2, address: 192.168.1.111, internalAddress: 192.168.1.111, user: root, password: 123456}
  - {name: node3, address: 192.168.1.112, internalAddress: 192.168.1.112, user: root, password: 123456}
  - {name: node4, address: 192.168.1.113, internalAddress: 192.168.1.113, user: root, password: 123456}
  roleGroups:
    etcd:
    - node1
    - node2
    - node3
    master: 
    - node1
    - node2
    - node3
    worker:
    - node4
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

When KubeSphere is selected for installation, persistent storage must be available in the cluster. localVolume is used by default. If you need to use other persistent storage, configure it through addons.

Create a cluster using a configuration file.

kk create cluster -f config-sample.yaml

View the cluster status when finished

[root@node1 ~]# kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node1   Ready    master   3m55s   v1.17.9   192.168.1.110   <none>        CentOS Linux 7 (Core)   3.10.0-1127.18.2.el7.x86_64   docker://19.3.8
node2   Ready    master   3m20s   v1.17.9   192.168.1.111   <none>        CentOS Linux 7 (Core)   3.10.0-1127.18.2.el7.x86_64   docker://19.3.8
node3   Ready    master   3m13s   v1.17.9   192.168.1.112   <none>        CentOS Linux 7 (Core)   3.10.0-1127.18.2.el7.x86_64   docker://19.3.8
node4   Ready    worker   3m23s   v1.17.9   192.168.1.113   <none>        CentOS Linux 7 (Core)   3.10.0-1127.18.2.el7.x86_64   docker://19.3.8
[root@node1 ~]# 
[root@node1 ~]# 
[root@node1 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-59d85c5c84-mzfz9   1/1     Running   0          3m31s
kube-system   calico-node-j5gx7                          1/1     Running   0          3m26s
kube-system   calico-node-n4vwj                          1/1     Running   0          3m31s
kube-system   calico-node-tdgrv                          1/1     Running   0          3m24s
kube-system   calico-node-zc27d                          1/1     Running   0          3m17s
kube-system   coredns-74d59cc5c6-kspd4                   1/1     Running   0          3m41s
kube-system   coredns-74d59cc5c6-ll7kt                   1/1     Running   0          3m41s
kube-system   kube-apiserver-node1                       1/1     Running   0          3m53s
kube-system   kube-apiserver-node2                       1/1     Running   0          3m22s
kube-system   kube-apiserver-node3                       1/1     Running   0          2m12s
kube-system   kube-controller-manager-node1              1/1     Running   0          3m53s
kube-system   kube-controller-manager-node2              1/1     Running   0          3m21s
kube-system   kube-controller-manager-node3              1/1     Running   0          115s
kube-system   kube-proxy-hk6c5                           1/1     Running   0          3m41s
kube-system   kube-proxy-k5d8w                           1/1     Running   0          3m17s
kube-system   kube-proxy-lv6wk                           1/1     Running   0          3m27s
kube-system   kube-proxy-pqdgb                           1/1     Running   0          3m24s
kube-system   kube-scheduler-node1                       1/1     Running   0          3m53s
kube-system   kube-scheduler-node2                       1/1     Running   0          3m21s
kube-system   kube-scheduler-node3                       1/1     Running   0          116s
kube-system   nodelocaldns-6kq4c                         1/1     Running   0          3m27s
kube-system   nodelocaldns-7xbc9                         1/1     Running   0          3m24s
kube-system   nodelocaldns-9r7v4                         1/1     Running   0          3m41s
kube-system   nodelocaldns-rkv2d                         1/1     Running   0          3m17s

Enable multi-cluster management

By default, KubeKey installs a single cluster in Solo mode, that is, Kubernetes multi-cluster federation is not enabled. If you want to use KubeSphere as a central panel for centralized management of multiple clusters, you need to set the ClusterRole in config-sample.yaml. Refer to the multi-cluster documentation for how to enable multi-cluster management.
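
As a sketch: when the configuration file is generated with --with-kubesphere, it also contains a KubeSphere ClusterConfiguration section, and the ClusterRole is set there. The excerpt below assumes the KubeSphere 3.0 field layout and shows only the relevant part.

# excerpt from the KubeSphere ClusterConfiguration section of config-sample.yaml (assumed layout)
spec:
  multicluster:
    clusterRole: host   # "host" for the central cluster, "member" for a managed cluster, "none" for Solo mode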

Enable pluggable components

Since version 2.1.0, KubeSphere has decoupled the installer's functional components. By default, a quick installation performs only a minimal installation. The installer supports enabling user-selected pluggable components before or after installation, which makes the minimal installation faster and lighter with lower resource usage, and lets different users install only the components they need.

KubeSphere provides several pluggable components; see the component introduction and configuration examples for details. You can enable and install the pluggable components according to your needs. We highly recommend enabling them to experience KubeSphere's complete features and end-to-end solutions, but make sure your machines have sufficient CPU and memory resources before installation. For how to enable them, refer to the KubeSphere documentation on enabling pluggable components.
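
As an illustration, pluggable components are toggled in the same ClusterConfiguration section. This minimal sketch assumes the KubeSphere 3.0 layout and uses the DevOps component as an example.

# excerpt from the KubeSphere ClusterConfiguration section of config-sample.yaml (assumed layout)
spec:
  devops:
    enabled: true   # other components are enabled the same way, each under its own key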

Add a node

Add the new node's information to the cluster configuration file (see the excerpt below), then apply the change.
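
For example, to add a hypothetical worker node5 (its name and addresses are placeholders), register the host and add it to the worker group in config-sample.yaml:

# excerpt from config-sample.yaml; existing entries are kept, only the additions are shown alongside them
  hosts:
  - {name: node5, address: 192.168.1.114, internalAddress: 192.168.1.114, user: root, password: 123456}
  roleGroups:
    worker:
    - node4
    - node5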

kk add nodes -f config-sample.yaml

Delete a node

Delete a node with the following command, where <nodeName> is the name of the node to be deleted.

kk delete node <nodeName> -f config-sample.yaml

Delete cluster

If you started with an all-in-one installation:

kk delete cluster

If you started with an advanced installation (a cluster created from a configuration file):

kk delete cluster [-f config-sample.yaml]

Cluster upgrade

For a single-node cluster, upgrade it to the specified versions (see the example after the flag descriptions).

kk upgrade [--with-kubernetes version] [--with-kubesphere version]
  • --with-kubernetes specifies the target version of Kubernetes.
  • --with-kubesphere specifies the target version of KubeSphere.
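
For example, to upgrade an all-in-one cluster to the versions used earlier in this article (the versions are illustrative):

kk upgrade --with-kubernetes v1.18.6 --with-kubesphere v3.0.0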

For a multi-node cluster, upgrade it by specifying the configuration file (see the example after the flag descriptions).

kk upgrade [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
  • --with-kubernetes specifies the target version of Kubernetes.
  • --with-kubesphere specifies the target version of KubeSphere.
  • -f specifies the configuration file created during cluster installation.
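
For example, an illustrative multi-node upgrade that reuses the configuration file from this article:

kk upgrade --with-kubernetes v1.18.6 -f config-sample.yaml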

Note: to upgrade a multi-node cluster, you need to specify a configuration file. If the cluster was not created by KubeKey, or the configuration file generated when the cluster was created has been lost, you need to regenerate the configuration file or use the following method.

Get the cluster information and generate KubeKey's configuration file (optional); see the example after the flag descriptions.

kk create config [--from-cluster] [(-f | --file) path] [--kubeconfig path]
  • --from-cluster generates a configuration file based on the information of the existing cluster.
  • -f specifies the path where the configuration file is generated.
  • --kubeconfig specifies the cluster's kubeconfig file.
  • Since the full cluster configuration cannot be retrieved automatically, complete the generated configuration file according to the actual state of the cluster.
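
For example (the kubeconfig path is an assumption; adjust it to your environment):

kk create config --from-cluster -f config-sample.yaml --kubeconfig ~/.kube/config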

