1, Overview
This article describes how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x.
You can only upgrade from one MINOR version to the next, or between PATCH versions of the same MINOR. In other words, MINOR versions cannot be skipped during an upgrade. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.
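Before you start, it helps to confirm which version you are running and which kubeadm versions your package repository offers. The commands below are a minimal sketch; they assume the Kubernetes yum repository is already configured on the node:

# Current client and server versions
kubectl version --short
# kubeadm versions available in the configured yum repository
yum list --showduplicates kubeadm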
The upgrade workflow is as follows:
Upgrade the first master node.
Upgrade the other master nodes.
Upgrade the worker nodes.
Current Kubernetes version information:
[root@node-01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   99d   v1.13.3
node-02   Ready    master   99d   v1.13.3
node-03   Ready    master   99d   v1.13.3
node-04   Ready    <none>   99d   v1.13.3
node-05   Ready    <none>   99d   v1.13.3
node-06   Ready    <none>   99d   v1.13.3
2, Upgrade the first master node
1. On the first master node, upgrade kubeadm
yum install kubeadm-1.14.1 -y
2. Verify that the download is valid and has the expected version:
kubeadm version
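If you only need the version string (for scripting, for example), the following should also work; the -o short output flag is assumed to be available in this kubeadm release:

# Print only the client version string, e.g. v1.14.1
kubeadm version -o short
# Confirm the installed RPM version
rpm -q kubeadm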
3. Run kubeadm upgrade plan
This command checks whether your cluster can be upgraded and fetches the versions you can upgrade to. You should see output similar to this:
[root@node-01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.1
I0505 13:55:58.449783 12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0505 13:55:58.449867 12871 version.go:97] falling back to the local client version: v1.14.1
[upgrade/versions] Latest stable version: v1.14.1
I0505 13:56:08.645796 12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.13.txt": Get https://dl.k8s.io/release/stable-1.13.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0505 13:56:08.645861 12871 version.go:97] falling back to the local client version: v1.14.1
[upgrade/versions] Latest version in the v1.13 series: v1.14.1

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.13.3   v1.14.1

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   v1.14.1
Controller Manager   v1.13.3   v1.14.1
Scheduler            v1.13.3   v1.14.1
Kube Proxy           v1.13.3   v1.14.1
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.14.1

_____________________________________________________________________
4. Run the upgrade command
kubeadm upgrade apply v1.14.1
You should see output similar to this
[root@node-01 ~]# kubeadm upgrade apply v1.14.1
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.1"
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.1"...
Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
Static pod: etcd-node-01 hash: 17ddbcfb2ddf1d447ceec2b52c9faa96
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests940835611"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-01 hash: ff2267bcddb83b815efb49ff766ad897
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-01 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-01 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
5. Manually upgrade your CNI provider plugin.
Your CNI provider may have its own upgrade instructions. Check your CNI provider's documentation to see whether additional upgrade steps are required.
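If you are not sure which CNI plugin and version the cluster is running, one quick check is to list the DaemonSets in kube-system and look at the images they run (the DaemonSet names depend on your CNI provider, so this is only a hint):

# Show CNI-related DaemonSets and the images they run
kubectl -n kube-system get daemonset -o wide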
6. Upgrade kubelet and kubectl on the first master node
yum install kubectl-1.14.1 kubelet-1.14.1 -y
Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
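Before moving on to the other masters, it is worth a quick sanity check (not required by kubeadm) that the first node now reports the new kubelet version and that its control plane Pods are running:

# The VERSION column for node-01 should now show v1.14.1
kubectl get nodes node-01
# Control plane Pods on this node should be Running
kubectl -n kube-system get pods -o wide | grep node-01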
3, Upgrade other master nodes
1. Upgrade the kubeadm package
yum install kubeadm-1.14.1 -y
2. Upgrade the control plane static Pods
kubeadm upgrade node experimental-control-plane
You should see output similar to this:
[root@node-02 ~]# kubeadm upgrade node experimental-control-plane
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.1"...
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 4710a34897e7838519a1bf8fe4dccf07
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests483113569"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-02 hash: fe1005f40c3f390280358921c3073223
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-02 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-02 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!
3. Update kubelet and kubectl
yum install kubectl-1.14.1 kubelet-1.14.1 -y
4. Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
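The same kind of check can be repeated on each additional master, for example by reading the image of its kube-apiserver static Pod (jsonpath output is just one convenient way to do this):

# Should print the v1.14.1 kube-apiserver image for node-02
kubectl -n kube-system get pod kube-apiserver-node-02 -o jsonpath='{.spec.containers[0].image}'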
4, Upgrade worker nodes
1. Upgrade kubeadm on the worker node:
yum install -y kubeadm-1.14.1
2. Adjust the scheduling policy
Prepare the node for maintenance by marking it unschedulable and evicting its pods (run this step on a master node; replace $NODE with the name of the node being upgraded, e.g. node-04).
kubectl drain $NODE --ignore-daemonsets
[root@node-01 ~]# kubectl drain node-04 --ignore-daemonsets
node/node-04 already cordoned
WARNING: ignoring DaemonSet-managed Pods: cattle-system/cattle-node-agent-h555m, default/glusterfs-vhdqv, kube-system/canal-mbwvf, kube-system/kube-flannel-ds-amd64-zdfn8, kube-system/kube-proxy-5d64l
evicting pod "coredns-55696d4b79-kfcrh"
evicting pod "cattle-cluster-agent-66bd75c65f-k7p6n"
pod/cattle-cluster-agent-66bd75c65f-k7p6n evicted
pod/coredns-55696d4b79-kfcrh evicted
node/node-04 evicted
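After the drain completes, only DaemonSet-managed Pods should remain on the node; everything else has been rescheduled elsewhere. You can confirm this with a simple filter (node-04 is the node being upgraded in this example):

# List any Pods still scheduled on node-04
kubectl get pods --all-namespaces -o wide | grep node-04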
3. Update the node configuration
[root@node-04 ~]# kubeadm upgrade node config --kubelet-version v1.14.1
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
4. Update kubelet and kubectl
yum install kubectl-1.14.1 kubelet-1.14.1 -y
5. Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
6. Resume scheduling (run on a master node)
kubectl uncordon $NODE
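After uncordoning, the node should report Ready again without the SchedulingDisabled flag, for example:

# STATUS should be "Ready", not "Ready,SchedulingDisabled"
kubectl get node node-04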
The other worker nodes can be upgraded by repeating the steps above.
5, Verify the status of the cluster
After upgrading kubelet on all nodes, verify that all nodes are available by running the following command from anywhere kubectl can access the cluster:
[root@node-01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   99d   v1.14.1
node-02   Ready    master   99d   v1.14.1
node-03   Ready    master   99d   v1.14.1
node-04   Ready    <none>   99d   v1.14.1
node-05   Ready    <none>   99d   v1.14.1
node-06   Ready    <none>   99d   v1.14.1
The STATUS column should show Ready for all nodes, and the VERSION column should show the updated version number.
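Besides the node versions, you can also spot-check that the control plane and add-on Pods were recreated with the new images; this is an optional extra check, and the jsonpath template below is just one way to print the image tags:

# Name and image of every kube-system Pod; tags should match the upgraded versions
kubectl -n kube-system get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'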
6, How it works
kubeadm upgrade apply does the following:
Checks whether your cluster is in an upgradeable state:
The API server is accessible
All nodes are in the Ready state
The control plane is healthy
Enforces the version skew policies.
Makes sure the control plane images are available or can be pulled to the machine.
Upgrades the control plane components, rolling back if any of them fails to start.
Applies the new kube-dns and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
Creates new certificate and key files for the API server and backs up the old files if they are about to expire within 180 days.
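For reference, you can inspect the API server certificate's expiry date yourself with openssl, assuming the default kubeadm certificate path /etc/kubernetes/pki/apiserver.crt:

# Show when the current API server certificate expires
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate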
kubeadm upgrade node experimental-control-plane performs the following operations on the other control plane nodes:
Fetches the kubeadm ClusterConfiguration from the cluster.
Optionally backs up the kube-apiserver certificate.
Upgrades the static Pod manifests of the control plane components.
That's all for this article. If you have any questions, please leave a comment below. Likes and follows are appreciated. Thank you!