What is the workload controller
Workload Controllers is an abstract concept of K8s, which is used for higher-level objects, deployment and management of Pod
Common workload controllers:
- Deployment: stateless application deployment
- Stateful set: stateful application deployment
- Daemon set: ensure that all nodes run the same Pod
- Job: one time task
- Cronjob: scheduled task
Function of controller:
- Manage Pod objects
- Associate with Pod using tag
- The controller realizes the operation and maintenance of Pod, such as rolling update, scaling, replica management, maintaining Pod status, etc
Example:
Deployment introduction
In order to better solve the problem of service orchestration, k8s in V1 Since version 2, the deployment controller has been introduced. It is worth mentioning that this controller does not directly manage the pod,
Instead, it manages the pod indirectly by managing the replicaset, that is, deployment manages the replicaset and replicaset manages the pod. Therefore, deployment is more powerful than replicaset.
The main functions of deployment are as follows:
- All functions of replicaset are supported
- Support the stop and continue of publishing
- Support version rolling update and version fallback
Resource manifest file for deployment
apiVersion: apps/v1 #Version number kind: Deployment #type metadata: #metadata name: #rs name namespace: #Namespace labels: #label controller: deploy spec: #Detailed description replicas: #Number of copies revisionHistoryLimit: #Keep the historical version. The default is 10 paused: #Pause deployment. The default value is false progressDeadlineSeconds: #Deployment timeout (s). The default is 600 strategy: #strategy type: RollingUpdates #Rolling update strategy rollingUpdate: #Rolling update maxSurge: #The maximum number of copies that can exist, either as a percentage or as an integer maxUnavaliable: #The maximum value of the pod in the maximum unavailable state, which can be either a percentage or an integer selector: #Selector, which specifies which pod s the controller manages matchLabels: #Labels matching rule app: nginx-pod matchExpressions: #Expression matching rule - {key: app, operator: In, values: [nginx-pod]} template: #Template. When the number of copies is insufficient, a pod copy will be created according to the following template metadata: labels: app: nginx-pod spec: containers: - name: nginx image: nginx:1.17.1 ports: - containerPort: 80
[root@master ~]# cat test.yaml --- apiVersion: apps/v1 //api version information kind: Deployment //type metadata: name: test //Container name namespace: default //Use default namespace spec: replicas: 3 //Three container copies selector: matchLabels: app: busybox //Container label template: metadata: labels: app: busybox spec: containers: - name: b1 image: busybox //Mirror used command: ["/bin/sh","-c","sleep 9000"] [root@master ~]# kubectl apply -f test.yaml deployment.apps/amu created [root@master ~]# kubectl get pod NAME READY STATUS RESTARTS AGE test-864fd4fb54-4hz2h 1/1 Running 0 1ms test-864fd4fb54-j9ht4 1/1 Running 0 1m test-864fd4fb54-w485p 1/1 Running 0 1m
Deployment rolling upgrade
kubectl apply -f xxx.yaml
kubectl set image deployment/web nginx=nignx:1.16
kubectl edit deployment/web
Rolling upgrade: K8s's default strategy for Pod upgrade is to gradually update the old version of Pod by using the new version of Pod, so as to realize shutdown and release without user perception
// Rolling update strategy spec: replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: app: web strategy: rollingUpdate: maxSurge: 25% maxUnavaliable: 25% type: RollingUpdate
- maxSurge: the maximum number of Pod copies in the rolling update process to ensure that the number of pods started during the update is 25% more than the maximum number of expected (Replica) pods
- maxUnavailable: the maximum number of unavailable Pod copies during the rolling update process. Ensure that the maximum number of 25% pods is unavailable during the update, that is, ensure that the number of 75% pods is available
Example:
// Create four httpd containers [root@master ~]# cat test1.yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: web namespace: default spec: replicas: 4 selector: matchLabels: app: httpd template: metadata: labels: app: web spec: containers: - name: httpd image: gaofan1225/httpd:v0.2 imagePullPolicy: IfNotPresent [root@master ~]# kubectl apply -f test1.yaml deployment.apps/web created [root@master ~]# kubectl get pods web-5c688b9779-7gi6t 1/1 Running 0 50s web-5c688b9779-9unfy 1/1 Running 0 50s web-5c688b9779-ft69k 1/1 Running 0 50s web-5c688b9779-vmlkg 1/1 Running 0 50s [root@master ~]# cat test1.yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: web namespace: default spec: replicas: 4 strategy: rollingUpdate: // Rolling update maxSurge: 25% // Maximum 25% maxUnavailable: 25% // Up to 25% can not be used type: RollingUpdate selector: matchLabels: app: httpd template: metadata: labels: app: httpd spec: containers: - name: httpd image: dockerimages123/httpd:v0.2 imagePullPolicy: IfNotPresent [root@master ~]# kubectl apply -f test1.yaml deployment.apps/web configured // No change yet [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-5c697b9779-7gi6t 1/1 Running 0 8m web-5c697b9779-9unfy 1/1 Running 0 8m web-5c697b9779-ft69k 1/1 Running 0 8m web-5c697b9779-vmlkg 1/1 Running 0 8m // Change the image to httpd, image=httpd // Reapply [root@master ~]# kubectl apply -f test1.yaml deployment.apps/web configured // We found that three were deleted and two new and one old web were started [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-5d688b9779-7gi6t 1/1 Terminating 0 18m web-5d688b9779-9unfy 0/1 Running 0 20m web-5d688b9779-ft69k 0/1 Terminating 0 20m web-5d688b9779-vmlkg 0/1 Terminating 0 20m web-f8bcfc88-vddfk 0/1 Running 0 80s web-f8bcfc88-yur8y 0/1 Running 0 80s web-f8bcfc88-t9ryx 1/1 ContainerCreating 0 55s web-f8bcfc88-k07k 1/1 ContainerCreating 0 56s // Finally, it becomes 4 new [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-f8bcfc88vddfk 1/1 Running 0 80s web-f8bcfc88-yur8y 1/1 Running 0 80s web-f8bcfc88-t9ryx 1/1 Running 0 55s web-f8bcfc88-k07k 1/1 Running 0 56s
Deployment horizontal expansion
Modify the replicas value in yanl and then apply
kubectl scale deployment web --replicas=10
[root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-f8bcfc88vddfk 1/1 Running 0 80s web-f8bcfc88-yur8y 1/1 Running 0 80s web-f8bcfc88-t9ryx 1/1 Running 0 55s web-f8bcfc88-k07k 1/1 Running 0 56s // Create 10 containers [root@master ~]# cat test1.yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: web namespace: default spec: replicas: 10 strategy: rollingUpdate: maxSurge: 55% maxUnavailable: 50% type: RollingUpdate selector: matchLabels: app: httpd template: metadata: labels: app: httpd spec: containers: - name: httpd image: dockerimages123/httpd:v0.2 imagePullPolicy: IfNotPresent [root@master ~]# kubectl apply -f test1.yaml deployment.apps/web created [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-5c688b9779-pb4x8 1/1 Running 0 50s web-5c688b9779-kf8vq 1/1 Running 0 50s web-5c688b9779-ki8s3 1/1 Running 0 50s web-5c688b9779-o9gx6 1/1 Running 0 50s web-5c688b9779-i8g4w 1/1 Running 0 50s web-5c688b9779-olgxt 1/1 Running 0 50s web-5c688b9779-khctw 1/1 Running 0 50s web-5c688b9779-ki8d6 1/1 Running 0 50s web-5c688b9779-i9g5s 1/1 Running 0 50s web-5c688b9779-jsj8k 1/1 Running 0 50s // Modify the replicas value to achieve horizontal expansion and contraction [root@master ~]# vim test1.yaml replicas: 3 [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-5c688b9779-pb4x8 1/1 Running 0 50s web-5c688b9779-kf8vq 1/1 Running 0 50s web-5c688b9779-ki8s3 1/1 Running 0 50s web-5c688b9779-o9gx6 1/1 Running 0 50s web-5c688b9779-i8g4w 1/1 Running 0 50s web-5c688b9779-olgxt 1/1 Running 0 50s web-5c688b9779-khctw 1/1 Running 0 50s web-5c688b9779-ki8d6 1/1 Running 0 50s web-5c688b9779-i9g5s 1/1 Running 0 50s web-5c688b9779-jsj8k 1/1 Running 0 50s web-i9olh676-jdkrg 0/1 ContainerCreating 0 8s web-i9olh676-opy5b 0/1 ContainerCreating 0 8s web-i9olh676-k8st4 0/1 ContainerCreating 0 8s [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-i9olh676-jdkrg 1/1 Running 0 2m19s web-i9olh676-opy5b 1/1 Running 0 2m19s web-i9olh676-k8st4 1/1 Running 0 2m19s
Deployment rollback
kubectl rollout history deployment/web / / view the release history version
Kubectl rollback undo deployment / Web / / rollback the previous version
Kubectl rollback undo deployment / Web -- to revision = 2 / / rollback history specifies the version
[root@master ~]# kubectl rollout history deploy/web deployment.apps/web REVISION CHANGE-CAUSE 1 <none> 2 <none> // If you roll back the current version, it will be skipped [root@master ~]# kubectl rollout undo deploy/web --to-revision 2 deployment.apps/web skipped rollback (current template already matches revision 2) // Roll back the first version [root@master ~]# kubectl rollout undo deploy/web --to-revision 1 deployment.apps/web rolled back // Rollback succeeded [root@master ~]# kubectl rollout history deploy/web deployment.apps/web REVISION CHANGE-CAUSE 2 <none> 3 <none>
Deployment delete
kubectl delete deploy/web
kubectl delete svc/web
kubectl delete pods/web
// establish [root@master ~]# kubectl create deployment nginx --image=nginx deployment.apps/nginx created [root@master ~]# kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE nginx 1/1 1 1 35s web 3/3 3 3 20m // Delete a pod [root@master ~]# kubectl delete deploy/nginx deployment.apps "nginx" deleted [root@master ~]# kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE web 3/3 3 3 20m // Delete all [root@master ~]# kubectl delete deployment --all deployment.apps "web" deleted [root@master ~]# kubectl get deployment No resources found in default namespace.
Deployment ReplicaSet
Manage the number of Pod replicas, and constantly compare the current number of pods with the expected number of pods
Every time the Deployment publishes, an RS will be created as a record for rollback
[root@master ~]# kubectl get rs NAME DESIRED CURRENT READY AGE web-5d7hy50s8a 3 3 3 6m4s web-f8bki8h5 0 0 0 6m25s kubectl rollout history deployment web // Version corresponds to RS record [root@master ~]# kubectl rollout history deployment web deployment.apps/web REVISION CHANGE-CAUSE 2 <none> 3 <none>
[root@master ~]# kubectl get rs NAME DESIRED CURRENT READY AGE web-5d7hy50s8a 3 3 3 6m4s web-f8bki8h5 0 0 0 6m25s // Change the image to httpd, and the others to httpd [root@master ~]# kubectl apply -f test2.yaml / / apply again deployment.apps/web configured [root@master ~]# kubectl get pods web-5d688b9745-dpmsd 1/1 Terminating 0 9m web-5d688b9745-q6dls 1/1 Terminating 0 9m web-i80gjk6t-ku6f4 0/1 ContainerCreating 0 5s web-i80gjk6t-9j5tu 1/1 Running 0 5s web-i80gjk6t-9ir4h 1/1 Running 0 5s [root@master ~]# kubectl get rs NAME DESIRED CURRENT READY AGE web-5d7hy50s8a 0 0 0 12m web-f8bki8h5 3 3 3 12m
ReplicaSet: controlled by the relicas parameter in Deployment
[root@node2 ~]# docker ps | grep web 4c938ouhc8j0 dabutse0c4fy "httpd-foreground" 13 seconds ago Up 12 seconds k8s_httpd_web-f8bcfc88-4rkfx_default_562616cd-1552-4610-bf98-e470225e4c31_1 452713eeccad registry.aliyuncs.com/google_containers/pause:3.6 "/pause" 5 minutes ago Up 5 minutes k8s_POD_web-f8bcfc88-4rkfx_default_562616cd-1552-4610-bf98-e470225e4c31_0 // Delete a [root@node2 ~]# docker kill 4c938ad0c01d 8htw4ad0cu8s [root@master ~]# kubectl get rs NAME DESIRED CURRENT READY AGE web-5d7hy50s8a 0 0 0 15m web-f8bki8h5 3 3 3 15m [root@master ~]# kubectl get rs NAME DESIRED CURRENT READY AGE web-5d7hy50s8a 0 0 0 16m web-f8bki8h5 3 3 3 18m [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE web-o96gb3sm-9ht4c 1/1 Running 2 (80s ago) 6m32s web-o96gb3sm-ki85s 1/1 Running 0 6m32s web-o96gb3sm-ku5sg 1/1 Running 0 6m32s
DameonSet
Run a Pod on each Node
The newly added Node will also automatically run a Pod
// If you delete a resource, the container will also be deleted [root@master ~]# kubectl delete -f test1.yml deployment.apps "web" deleted [root@master ~]# cat daemon.yml --- apiVersion: apps/v1 kind: DaemonSet // The type is DaemonSet metadata: name: filebeat namespace: kube-system spec: selector: matchLabels: name: filebeat template: metadata: labels: name: filebeat spec: containers: // Logged image - name: log image: elastic/filebeat:7.16.2 imagePullPolicy: IfNotPresent [root@master ~]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-6d8c4cb4d-9m5jg 1/1 Running 14 (4h36m ago) 6d4h coredns-6d8c4cb4d-mp662 1/1 Running 14 (4h36m ago) 6d4h etcd-master 1/1 Running 13 (3h30m ago) 6d4h kube-apiserver-master 1/1 Running 13 (3h30m ago) 6d4h kube-controller-manager-master 1/1 Running 14 (3h30m ago) 6d4h kube-flannel-ds-g9jsh 1/1 Running 9 (3h30m ago) 6d1h kube-flannel-ds-qztxc 1/1 Running 9 (3h30m ago) 6d1h kube-flannel-ds-t8lts 1/1 Running 13 (3h30m ago) 6d1h kube-proxy-q2jmh 1/1 Running 12 (3h30mago) 6d4h kube-proxy-r28dn 1/1 Running 13 (3h30m ago) 6d4h kube-proxy-x4cns 1/1 Running 13 (3h30m ago) 6d4h kube-scheduler-master 1/1 Running 15 (3h30m ago) 6d4h [root@master ~]# kubectl apply -f daemon.yaml deployment.apps/filebeat created [root@master ~]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-6d8c4cb4d-9m5jg 1/1 Running 14 (4h36m ago) 6d4h coredns-6d8c4cb4d-mp662 1/1 Running 14 (4h36m ago) 6d4h etcd-master 1/1 Running 13 (3h30m ago) 6d4h kube-apiserver-master 1/1 Running 13 (3h30m ago) 6d4h kube-controller-manager-master 1/1 Running 14 (3h30m ago) 6d4h kube-flannel-ds-g9jsh 1/1 Running 9 (3h30m ago) 6d1h kube-flannel-ds-qztxc 1/1 Running 9 (3h30m ago) 6d1h kube-flannel-ds-t8lts 1/1 Running 13 (3h30m ago) 6d1h kube-proxy-q2jmh 1/1 Running 12 (3h30mago) 6d4h kube-proxy-r28dn 1/1 Running 13 (3h30m ago) 6d4h kube-proxy-x4cns 1/1 Running 13 (3h30m ago) 6d4h kube-scheduler-master 1/1 Running 15 (3h30m ago) 6d4h filebeat-oiugt 1/1 Running 0 50s filebeat-9jhgt 1/1 Running 0 50s [root@master ~]# kubectl get pods -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES filebeat-9ck6z 1/1 Running 0 3m9s 10.242.2.215 node2.example.com <none> <none> filebeat-d2psf 1/1 Running 0 3m9s 10.242.1.161 node1.example.com <none> <none>
Job and CronJob
Jobs are divided into ordinary tasks (Job) and timed tasks (CronJob)
- One time job application scenario: offline data processing, video decoding and other services
- Cronjob application scenario: notification, backup
[root@master ~]# cat job.yaml apiVersion: batch/v1 kind: Job metadata: name: test spec: template: spec: containers: - name: test image: perl command: ["perl","-Mbignum=bpi","-wle","print bpi(2000)"] restartPolicy: Never backoffLimit: 4 [root@master ~]# kubectl apply -f job.yaml job.batch/test created [root@master ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-27xrt 0/1 ContainerCreating 0 10m28s <none> node2.example.com <none> <none> [root@node2 ~]# docker ps | grep test e55ac8842c89 registry.aliyuncs.com/google_containers/pause:3.6 "/pause" 6 minutes ago Up 6 minutes k8s_POD_test-27xrt_default_698b4c91-ef54-4fe9-b62b-e0abcujhts5o90_9 [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE test-27xrt 0/1 Completed 0 15m [root@node2 ~]# docker images | grep perl perl latest f9596eddf06f 5 days ago 568MB [root@master ~]# kubectl describe job/test Pods Statuses: 0 Active / 1 Succeeded / 0 Failed
[root@master ~]# kubelet --version Kubernetes v1.23.1 [root@master ~]# cat cronjob.yaml --- apiVersion: batch/v1 kind: CronJob metadata: name: hello spec: schedule: "*/1****" jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date;echo Hello world restartPolicy: OnFailure [root@master ~]# kubectl apply -f cronjob.yaml Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE hello-kihtwoab-kox6w 0/1 Completed 0 5m42s hello-kihtwoab-o96vw 0/1 Completed 0 90s hello-kihtwoab-kus6n 0/1 Completed 0 76s [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE hello-kihtwoab-kox6w 0/1 Completed 0 6m26s hello-kihtwoab-o96vw 0/1 Completed 0 2m11s hello-kihtwoab-kus6n 0/1 Completed 0 2m10s hello-kuhdoehs-ki8gr 0/1 ContainerCreating 0 36s [root@master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE hello-kihtwoab-o96vw 0/1 Completed 0 2m11s hello-kihtwoab-kus6n 0/1 Completed 0 2m10s hello-kuhdoehs-ki8gr 0/1 Completed 0 45s [root@master ~]# kubectl describe cronjob/hello Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 5m36s (x133 over 135m) cronjob-controller (combined from similar events): Created job hello-kisgejxbw