Kubernetes workload controller

Workload Controller

Workload controllers are a Kubernetes abstraction for deploying and managing Pods through higher-level objects.

Common workload controllers:

  • Deployment: Stateless application deployment
  • StatefulSet: Stateful application deployment
  • DaemonSet: Ensure every Node runs a copy of the same Pod
  • Job: One-time task
  • CronJob: Scheduled tasks

Controller's role:

  • Manage Pod objects
  • Use labels to associate with Pods (a quick example follows this list)
  • Implement Pod operations such as rolling updates, scaling, replica management, and Pod status maintenance
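
A quick sketch of the label-based association (the app=busybox label comes from the Deployment example in the next section; the commands assume that Deployment exists):

# List only the Pods that carry the controller's selector label
kubectl get pods -l app=busybox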

Deployment

Functions

  • Manage Pod and ReplicaSet
  • Handle rollout, replica management, rolling upgrades, rollbacks, etc.
  • Provide declarative updates, such as changing only the image

Typical scenarios: websites, APIs, microservices

Deploying an Image

[root@master ~]# cat test.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template: 
    metadata: 
      labels:
        app: busybox
    spec: 
      containers:
      - name: b1
        image: busybox
        command: ["/bin/sh","-c","sleep 9000"]
        
[root@master ~]# kubectl apply -f test.yaml 
deployment.apps/busybox created
[root@master ~]# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
busybox-864fd4fb54-4hz2h   1/1     Running   0          57s
busybox-864fd4fb54-j9ht4   1/1     Running   0          57s
busybox-864fd4fb54-w485p   1/1     Running   0          57s

Rolling Upgrade

Rolling upgrade: the default Pod update policy in K8s. Old Pods are gradually replaced with new ones, giving a zero-downtime release that is transparent to users.

Rolling Upgrade Implementation in K8s:

  • 1 Deployment
  • 2 ReplicaSets (the old version's and the new version's)

Rolling Update Policy:

spec: 
  replicas: 3
  revisionHistoryLimit: 10 			# Number of saved historical versions of RS
  selector:
    matchLabels:
      app: web
  strategy:
    rollingUpdate: 
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
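
A worked example of how these percentages translate into Pod counts (maxSurge rounds up, maxUnavailable rounds down):

# With replicas: 3 during a rolling update:
#   maxSurge: 25%        -> ceil(3 * 0.25)  = 1 extra Pod allowed (at most 4 Pods total)
#   maxUnavailable: 25%  -> floor(3 * 0.25) = 0 Pods may be unavailable (all 3 must stay up)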
// Locally available images
[root@master ~]# docker images
REPOSITORY                    TAG       IMAGE ID       CREATED        SIZE
harry1004/httpd              v0.2      f61bbd041ba4   11 days ago    89.2MB


// Create a Deployment named web with four httpd replicas
[root@master ~]# cat test2.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: web
  namespace: default
spec: 
  replicas: 4
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers:
        - name: httpd
          image: harry1004/httpd:v0.2
          imagePullPolicy: IfNotPresent
          
[root@master ~]# kubectl apply -f test2.yaml 
deployment.apps/web created

[root@master ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-5d688b9745-7gx4g   1/1     Running   0          100s
web-5d688b9745-9hnxz   1/1     Running   0          100s
web-5d688b9745-ft6w9   1/1     Running   0          100s
web-5d688b9745-vmcbv   1/1     Running   0          100s

// Edit test2.yaml again, setting maxSurge and maxUnavailable
[root@master ~]# cat test2.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: web
  namespace: default
spec: 
  replicas: 4
  strategy: 					// update strategy
    rollingUpdate:				// rolling-update parameters
      maxSurge: 25%				// at most 25% extra Pods during the update	# the two parameters can be used together or separately, depending on your needs
      maxUnavailable: 25%		// at most 25% of Pods may be unavailable
    type: RollingUpdate
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers:
        - name: httpd
          image: harry1004/httpd:v0.2
          imagePullPolicy: IfNotPresent

// Apply
[root@master ~]# kubectl apply -f test2.yaml 
deployment.apps/web configured

// No rollout happens yet: the Pod template itself did not change
[root@master ~]# kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
web-5d688b9745-7gx4g   1/1     Running     0          10m
web-5d688b9745-9hnxz   1/1     Running     0          10m
web-5d688b9745-ft6w9   1/1     Running     0          10m
web-5d688b9745-vmcbv   1/1     Running     0          10m

// Now edit test2.yaml and change the image; the rest remains unchanged
[root@master ~]# vim test2.yaml 
image: httpd		// changing the image triggers a rolling update

// Apply after modification
[root@master ~]# kubectl apply -f test2.yaml 
deployment.apps/web configured

// Three old Pods are terminating, two new Pods and one old Pod are running, and two new Pods are being created
[root@master ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5d688b9745-7gx4g   1/1     Terminating         0          10m
web-5d688b9745-9hnxz   1/1     Running             0          22m
web-5d688b9745-ft6w9   0/1     Terminating         0          22m
web-5d688b9745-vmcbv   0/1     Terminating         0          22m
web-f8bcfc88-bcvcd     1/1     Running             0          92s
web-f8bcfc88-kkx4f     1/1     Running             0          92s
web-f8bcfc88-w4dxx     0/1     ContainerCreating   0          65s
web-f8bcfc88-x6q5z     0/1     ContainerCreating   0          47s

// Finally, all four Pods are the new version
[root@master ~]# kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
web-f8bcfc88-bcvcd     1/1     Running     0          92s
web-f8bcfc88-kkx4f     1/1     Running     0          92s
web-f8bcfc88-w4dxx     1/1     Running     0          65s
web-f8bcfc88-x6q5z     1/1     Running     0          47s
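
Instead of polling kubectl get pods, the rollout can be watched until it finishes:

kubectl rollout status deployment/web
# blocks while Pods are replaced, then prints:
# deployment "web" successfully rolled out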

Horizontal Scaling

  • Modify the replicas value in the yaml file and apply it
  • kubectl scale deployment web --replicas=10

Note: the replicas parameter controls the number of Pod replicas
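
Beyond editing replicas or running kubectl scale, scaling can also be automated with a HorizontalPodAutoscaler; a minimal sketch (assumes a metrics-server is installed; the thresholds are illustrative):

kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80
kubectl get hpa		# shows current vs. target CPU and the replica range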


// Scale up to 10 replicas
[root@master ~]# cat test2.yaml 
---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: web
  namespace: default
spec: 
  replicas: 10						// scale the replica count to 10
  strategy: 
    rollingUpdate:
      maxSurge: 55%					// at most 55% extra Pods
      maxUnavailable: 50%			// at most 50% of Pods may be unavailable
    type: RollingUpdate
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers:
        - name: httpd
          image: harry1004/httpd:v0.2
          imagePullPolicy: IfNotPresent
          
// Apply
[root@master ~]# kubectl apply -f test2.yaml 
deployment.apps/web created

// All 10 Pods have been created and are running
[root@master ~]# kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
web-5d688b9745-6dqcl   1/1     Running     0          63s
web-5d688b9745-bkmbf   1/1     Running     0          63s
web-5d688b9745-cpxkx   1/1     Running     0          63s
web-5d688b9745-gxjf6   1/1     Running     0          63s
web-5d688b9745-k2l2b   1/1     Running     0          63s
web-5d688b9745-ll5cc   1/1     Running     0          63s
web-5d688b9745-sqckx   1/1     Running     0          63s
web-5d688b9745-t578c   1/1     Running     0          63s
web-5d688b9745-txwh9   1/1     Running     0          63s
web-5d688b9745-z25kc   1/1     Running     0          63s

// Scale in by lowering the replicas value and applying again; the rest of the file is unchanged
[root@master ~]# vim test2.yaml 
replicas: 3

[root@master ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5d688b9745-6dqcl   1/1     Running             0          63s
web-5d688b9745-bkmbf   1/1     Running             0          63s
web-5d688b9745-cpxkx   1/1     Terminating         0          63s
web-5d688b9745-gxjf6   1/1     Terminating         0          63s
web-5d688b9745-k2l2b   1/1     Terminating         0          63s
web-5d688b9745-ll5cc   1/1     Terminating         0          63s
web-5d688b9745-sqckx   1/1     Terminating         0          63s
web-5d688b9745-t578c   1/1     Terminating         0          63s
web-5d688b9745-txwh9   1/1     Terminating         0          63s
web-5d688b9745-z25kc   1/1     Terminating         0          63s
web-f8bcfc88-hb8dz     0/1     ContainerCreating   0          5s
web-f8bcfc88-ldqt5     0/1     ContainerCreating   0          5s
web-f8bcfc88-r8zn7     0/1     ContainerCreating   0          5s

// In the end only three Pods remain
[root@master ~]# kubectl get pods
NAME                   READY   STATUS      RESTARTS   AGE
web-f8bcfc88-hb8dz     1/1     Running     0          3m44s
web-f8bcfc88-ldqt5     1/1     Running     0          3m44s
web-f8bcfc88-r8zn7     1/1     Running     0          3m44s

Rollback

// View Version
[root@master ~]# kubectl rollout history deploy/web
deployment.apps/web 
REVISION  CHANGE-CAUSE
1         <none>				// two revisions exist
2         <none>

// Rolling back to the current revision is skipped
[root@master ~]# kubectl rollout undo deploy/web --to-revision 2
deployment.apps/web skipped rollback (current template already matches revision 2)

// Roll back to revision 1
[root@master ~]# kubectl rollout undo deploy/web --to-revision 1
deployment.apps/web rolled back

// Rollback succeeded
[root@master ~]# kubectl rollout history deploy/web
deployment.apps/web 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>			// revision 1 has been re-published as revision 3

// revisionHistoryLimit controls how many old revisions are retained for rollback
[root@master ~]# vim test2.yaml
revisionHistoryLimit: 6		// keep six revisions

Note: a rollback redeploys the Deployment with the complete configuration of the chosen revision.
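
The CHANGE-CAUSE column shows <none> because no change cause was recorded. It can be populated with the standard kubernetes.io/change-cause annotation (the message text here is just an example):

kubectl annotate deployment/web kubernetes.io/change-cause="switch image to httpd"
kubectl rollout history deploy/web	# the newest revision now shows the message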

ReplicaSet

ReplicaSet Controller Purpose:

  • Manage the Pod replica count, continuously reconciling the current number of Pods with the desired number
  • Each Deployment rollout creates an RS, which serves as the revision record used for rollback
# View RS records
[root@master ~]# kubectl get rs
NAME             DESIRED   CURRENT   READY   AGE
web-5d688b9745   3         3         3       7m5s
web-f8bcfc88     0         0         0       6m33s

# Each revision corresponds to an RS record
[root@master ~]# kubectl rollout history deployment web
deployment.apps/web 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

[root@master ~]# kubectl get rs
NAME             DESIRED   CURRENT   READY   AGE
web-5d688b9745   3         3         3       7m5s
web-f8bcfc88     0         0         0       6m33s

// Change the image; the rest of the file remains unchanged
[root@master ~]# vim test2.yaml 
image: httpd

# application
[root@master ~]# kubectl apply -f test2.yaml 
deployment.apps/web configured

[root@master ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
web-5d688b9745-dpmsd   1/1     Terminating         0          11m
web-5d688b9745-q6dls   1/1     Terminating         0          11m
web-f8bcfc88-4rkfx     0/1     ContainerCreating   0          2s
web-f8bcfc88-6knsw     1/1     Running             0          2s
web-f8bcfc88-bd9zz     1/1     Running             0          2s

# Each revision corresponds to an RS: the old revision's RS is scaled down to 0 and the new revision gets a new RS
[root@master ~]# kubectl get rs
NAME             DESIRED   CURRENT   READY   AGE
web-5d688b9745   0         0         0       12m
web-f8bcfc88     3         3         3       12m

ReplicaSets are controlled through the replicas parameter of the Deployment; you normally never define this controller yourself!
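
To see which ReplicaSet owns a given Pod, its ownerReferences can be inspected; a quick sketch (using one of the Pod names from the listing above):

kubectl get pod web-f8bcfc88-bd9zz -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
# ReplicaSet/web-f8bcfc88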

DaemonSet

Functions:

  • Run a Pod on each Node
  • New Nodes automatically run the Pod as well
# Delete resource, container will also be deleted
[root@master ~]# kubectl delete -f test2.yaml 
deployment.apps "web" deleted

[root@master ~]# cat daemon.yaml
---
apiVersion: apps/v1
kind: DaemonSet					// Type is DaemonSet
metadata:
  name: filebeat
  namespace: kube-system		// run in the system namespace
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:				// the log-collection container
      - name: log
        image: elastic/filebeat:7.16.2
        imagePullPolicy: IfNotPresent

// System Pods before the DaemonSet is applied
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS         AGE
coredns-6d8c4cb4d-9m5jg          1/1     Running             11 (4h36m ago)   6d4h
coredns-6d8c4cb4d-mp662          1/1     Running             11 (4h36m ago)   6d4h
etcd-master                      1/1     Running             13 (4h36m ago)   6d4h
kube-apiserver-master            1/1     Running             13 (4h36m ago)   6d4h
kube-controller-manager-master   1/1     Running             14 (4h36m ago)   6d4h
kube-flannel-ds-g9jsh            1/1     Running             11 (4h36m ago)   6d1h
kube-flannel-ds-qztxc            1/1     Running             11 (4h36m ago)   6d1h
kube-flannel-ds-t8lts            1/1     Running             13 (4h36m ago)   6d1h
kube-proxy-q2jmh                 1/1     Running             12 (4h36m ago)   6d4h
kube-proxy-r28dn                 1/1     Running             13 (4h36m ago)   6d4h
kube-proxy-x4cns                 1/1     Running             12 (4h36m ago)   6d4h
kube-scheduler-master            1/1     Running             14 (4h36m ago)   6d4h

        
[root@master ~]# kubectl apply -f daemon.yaml 
daemonset.apps/filebeat created

# These are system components; most ordinary workloads never touch them.
So whenever you need to run a container on every node (log collection, monitoring agents, and similar node-level tasks), use a DaemonSet.

[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS              RESTARTS         AGE
coredns-6d8c4cb4d-9m5jg          1/1     Running             11 (4h36m ago)   6d4h
coredns-6d8c4cb4d-mp662          1/1     Running             11 (4h36m ago)   6d4h
etcd-master                      1/1     Running             13 (4h36m ago)   6d4h
filebeat-9ck6z                   1/1     Running             0                68s
filebeat-d2psf                   1/1     Running             0                68s
kube-apiserver-master            1/1     Running             13 (4h36m ago)   6d4h
kube-controller-manager-master   1/1     Running             14 (4h36m ago)   6d4h
kube-flannel-ds-g9jsh            1/1     Running             11 (4h36m ago)   6d1h
kube-flannel-ds-qztxc            1/1     Running             11 (4h36m ago)   6d1h
kube-flannel-ds-t8lts            1/1     Running             13 (4h36m ago)   6d1h
kube-proxy-q2jmh                 1/1     Running             12 (4h36m ago)   6d4h
kube-proxy-r28dn                 1/1     Running             13 (4h36m ago)   6d4h
kube-proxy-x4cns                 1/1     Running             12 (4h36m ago)   6d4h
kube-scheduler-master            1/1     Running             14 (4h36m ago)   6d4h

# The DaemonSet Pods run only on the worker nodes; the master is not scheduled by default because of its control-plane taint
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS         AGE    IP               NODE                NOMINATED NODE   READINESS GATES
filebeat-9ck6z                   1/1     Running   0                3m9s   10.244.2.219     node2.example.com   <none>           <none>
filebeat-d2psf                   1/1     Running   0                3m9s   10.244.1.141     node1.example.com   <none>           <none>
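
If the DaemonSet should also run on the master, a toleration for the control-plane taint can be added to the Pod template; a minimal sketch (on a v1.23 cluster like this one the taint key is node-role.kubernetes.io/master; newer clusters use node-role.kubernetes.io/control-plane):

    spec:
      tolerations:
      - key: node-role.kubernetes.io/master		# tolerate the master's NoSchedule taint
        operator: Exists
        effect: NoSchedule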

Note: once a resource has been created, deleting the YAML file does not delete the resource; the resource lives in the cluster and does not track changes to the file. To change the resource, edit the file and apply it again.

Job and CronJob

Batch workloads come in two kinds: one-off tasks (Job) and scheduled tasks (CronJob)

  • One-time Execution
// Write a job.yaml file
[root@master ~]# cat job.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template: 
    spec: 
      containers:
      - name: pi
        image: perl
        command: ["perl","-Mbignum=bpi","-wle","print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

// Use job.yaml file
[root@master ~]# kubectl apply -f job.yaml
job.batch/pi created

// View Pod details; it was scheduled onto node2
[root@master ~]# kubectl get pods -o wide
NAME       READY   STATUS              RESTARTS   AGE     IP       NODE                NOMINATED NODE   READINESS GATES
pi-27xrt   0/1     ContainerCreating   0          9m20s   <none>   node2.example.com   <none>           <none>

// Check the container on node2
[root@node2 ~]# docker ps | grep pi
e55ac8842c89   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 9 minutes ago   Up 9 minutes             k8s_POD_pi-27xrt_default_698b4c91-ef54-4fe9-b62b-e0abc00031fd_0

// The Job has completed
[root@master ~]# kubectl get pods
NAME       READY   STATUS              RESTARTS   AGE
pi-27xrt   0/1     Completed           0          22m

// The perl image was pulled onto node2
[root@node2 ~]#  docker images | grep perl
perl                                                 latest    f9596eddf06f   3 days ago     890MB

// Inspect the Job: 1 Pod succeeded
[root@master ~]# kubectl describe job/pi
Name:             pi
Namespace:        default
Selector:         controller-uid=5058b8e2-fc49-4247-9b5b-6f2b5df6dc67
Labels:           controller-uid=5058b8e2-fc49-4247-9b5b-6f2b5df6dc67
                  job-name=pi
Annotations:      batch.kubernetes.io/job-tracking: 
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Fri, 24 Dec 2021 11:43:53 +0800
Completed At:     Fri, 24 Dec 2021 12:03:59 +0800
Duration:         20m
Pods Statuses:    0 Active / 1 Succeeded / 0 Failed		// Look here
......(remaining output omitted)
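
The computed digits end up in the Pod's log, which can be read through the Job object (kubectl logs accepts the job/ prefix):

kubectl logs job/pi
# 3.14159265358979...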

A CronJob performs scheduled tasks, like crontab on Linux

  • Scheduled execution
[root@master ~]# kubelet --version
Kubernetes v1.23.1

// Write a cronjob.yaml file
[root@master ~]# cat cronjob.yaml 
---
apiVersion: batch/v1		# use batch/v1 on Kubernetes v1.23.1
kind: CronJob
metadata:
  name: busybox
spec:
  schedule: "*/1****"
  jobTemplate: 
    spec: 
      template: 
        spec: 
          containers:
          - name: busybox
            image: busybox
            args:
            - /bin/sh
            - -c
            - date;echo Hello aliang
          restartPolicy: OnFailure
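
A few optional CronJob spec fields are worth knowing; a sketch with illustrative values (the history limits shown are the Kubernetes defaults, which is why only the most recent Completed Pods linger in the listings below):

spec:
  schedule: "*/1 * * * *"          # minute hour day-of-month month day-of-week
  concurrencyPolicy: Forbid        # skip a run if the previous one is still running
  successfulJobsHistoryLimit: 3    # keep the 3 most recent successful Jobs (default)
  failedJobsHistoryLimit: 1        # keep only the most recent failed Job (default)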

[root@master ~]# date
Fri Dec 24 16:43:37 CST 2021

// Run the scheduled task
[root@master ~]# kubectl apply -f cronjob.yaml 
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
// (this warning only appears if the old batch/v1beta1 apiVersion is used; with batch/v1 as above it does not)

[root@master ~]# kubectl get pods
NAME                     READY   STATUS      RESTARTS   AGE
busybox-27338645-xddbs   0/1     Completed   0          2m54s
busybox-27338646-54jvb   0/1     Completed   0          114s
busybox-27338647-ntnhq   0/1     Completed   0          54s

// A new run starts: its Pod is being created
[root@master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
busybox-27338645-xddbs   0/1     Completed           0          3m16s
busybox-27338646-54jvb   0/1     Completed           0          2m16s
busybox-27338647-ntnhq   0/1     Completed           0          76s
busybox-27338648-hlh49   0/1     ContainerCreating   0          16s

// The run completed; only the Pods of the most recent Jobs are kept
[root@master ~]# kubectl get pods
NAME                     READY   STATUS      RESTARTS   AGE
busybox-27338646-54jvb   0/1     Completed   0          2m54s
busybox-27338647-ntnhq   0/1     Completed   0          114s
busybox-27338648-hlh49   0/1     Completed   0          54s


// The CronJob's events confirm that Jobs are created on schedule
[root@master ~]# kubectl describe cronjob/busybox
Events:
  Type    Reason            Age                     From                Message
  ----    ------            ----                    ----                -------
  Normal  SuccessfulCreate  3m27s (x133 over 135m)  cronjob-controller  (combined from similar events): Created job hello-27338944
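
To pause the schedule without deleting anything, the CronJob can be suspended; deleting the resource also removes the Jobs it owns:

kubectl patch cronjob busybox -p '{"spec":{"suspend":true}}'	# pause future runs
kubectl delete -f cronjob.yaml					# remove the CronJob and its Jobs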
