Kubernetes health check and recovery mechanism - PodPreset
1: Check recovery mechanism
Container health inspection and recovery mechanism
In k8s, you can define a health check "Probe" for a container in a Pod. The kubelet then determines the status of the container from the return value of the Probe, rather than simply from whether the container is running. This mechanism is an important means of keeping applications healthy in a production environment.
Command mode probe
An example from the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: test-liveness-exec
spec:
  containers:
  - name: liveness
    image: daocloud.io/library/nginx
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
The first thing the container does after starting is create a /tmp/healthy file as a sign that it is running normally. After 30 seconds, it deletes the file.
At the same time, a livenessProbe of type exec is defined: after the container starts, the kubelet periodically runs the specified command inside the container, here "cat /tmp/healthy". If the file exists and the command returns 0, the Pod considers the container not only started but also healthy. The health check starts 5 seconds after the container starts (initialDelaySeconds: 5) and runs every 5 seconds (periodSeconds: 5).
To create a Pod:
[root@master diandian]# kubectl create -f test-liveness-exec.yaml
To view the status of the Pod:
[root@master diandian]# kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
test-liveness-exec   1/1     Running   0          10s
After passing the health check, the Pod enters the Running status.
After 30 seconds, check the Pod's Events:
[root@master diandian]# kubectl describe pod test-liveness-exec
It is found that this Pod reports an exception in Events:
FirstSeen  LastSeen  Count  From               SubobjectPath              Type     Reason     Message
---------  --------  -----  ----               -------------              ----     ------     -------
2s         2s        1      {kubelet worker0}  spec.containers{liveness}  Warning  Unhealthy  Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Obviously, the health check has detected that /tmp/healthy no longer exists, so it reports the container as unhealthy. So what happens next?
Check the status of this Pod again:
# kubectl get pod test-liveness-exec
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   1/1     Running   1          1m
At this time, it is found that the Pod does not enter the Failed state, but maintains the Running state. Why?
The change of the RESTARTS field from 0 to 1 reveals the reason: the abnormal container has been restarted by Kubernetes. Throughout this process, the Pod keeps its Running status.
Note: Kubernetes has no "stop" semantics for Docker containers. So although this is called a restart, the container is in fact recreated.
This capability is the Pod recovery mechanism in Kubernetes, also known as restartPolicy. It is a standard field in the Pod spec (pod.spec.restartPolicy). The default value is Always: whenever this container is abnormal, it is recreated.
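For reference, a minimal sketch of where this field lives in a Pod manifest; the Pod name is hypothetical, and Always is spelled out only to show the default:
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo        # hypothetical name, for illustration only
spec:
  restartPolicy: Always     # the default: recreate the container whenever it is abnormal
  containers:
  - name: demo
    image: daocloud.io/library/nginx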
Tip:
The recovery process of a Pod always happens on the current Node, never on another node. In fact, once a Pod is bound to a Node, it never leaves that Node unless the binding changes (the pod.spec.nodeName field is modified). This means that if the host goes down, the Pod will not migrate to another node by itself.
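To illustrate that binding, a hedged sketch: scheduling writes the chosen node into spec.nodeName, and a Pod that sets the field itself (the node name below is a placeholder) bypasses the scheduler and is bound to that node directly:
apiVersion: v1
kind: Pod
metadata:
  name: node-bound-pod      # hypothetical name
spec:
  nodeName: worker0         # placeholder node; once set, the Pod never leaves this node
  containers:
  - name: nginx
    image: daocloud.io/library/nginx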
If you want the Pod to come back up on another available node, you must manage it with a "controller" such as Deployment, even if you only need a single Pod replica, as the sketch below shows. This is the main difference between a single-replica Deployment and a bare Pod.
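A minimal sketch of such a single-replica Deployment (all names are placeholders); unlike a bare Pod, its Pod can be recreated on another healthy node if the current one goes down:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: single-nginx        # hypothetical name
spec:
  replicas: 1               # a single copy, but still managed by a controller
  selector:
    matchLabels:
      app: single-nginx
  template:
    metadata:
      labels:
        app: single-nginx
    spec:
      containers:
      - name: nginx
        image: daocloud.io/library/nginx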
HTTP GET mode probe
[root@master diandian]# vim liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: daocloud.io/library/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
Create the Pod:
[root@master diandian]# kubectl create -f liveness-httpget.yaml
pod/liveness-httpget-pod created
View the status of the current pod
[root@master diandian]# kubectl describe pod liveness-httpget-pod
...
Liveness: http-get http://:http/index.html delay=1s timeout=1s period=3s #success=1 #failure=3
...
Test: delete the index.html inside the container.
Log in to the container:
[root@master diandian]# kubectl exec liveness-httpget-pod -c liveness-exec-container -it -- /bin/sh
/ # ls
bin dev etc home lib media mnt proc root run sbin srv sys tmp usr var
/ # mv /usr/share/nginx/html/index.html index.html
/ # command terminated with exit code 137
As you can see, once index.html is removed, the liveness probe fails and the container is killed (the exec session terminates with exit code 137).
At this point, view the information of the pod
[root@master diandian]# kubectl describe pod liveness-httpget-pod
...
Normal  Killing  1m  kubelet, node02  Killing container with id docker://liveness-exec-container:Container failed liveness probe.. Container will be killed and recreated.
...
As the output shows, when the container fails the health check, the container is killed and recreated.
[root@master diandian]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
liveness-httpget-pod   1/1     Running   1          33m
RESTARTS is now 1.
Log in to the container again: index.html is present again, proving that the container was recreated from the image.
[root@master diandian]# kubectl exec liveness-httpget-pod -c liveness-exec-container -it -- /bin/sh
/ # cat /usr/share/nginx/html/index.html
Pod recovery policy
You can change the recovery policy of a Pod by setting restartPolicy. It has three possible values:
- Always: whenever the container is not in the Running state, restart it automatically;
- OnFailure: restart the container only when it exits abnormally (with a non-zero exit code);
- Never: never restart the container.
In actual use, these three recovery policies should be chosen according to how the application behaves.
For example, consider a Pod that only computes 1 + 1 = 2, prints the result, and exits successfully. Forcing its container to restart with restartPolicy=Always would be meaningless.
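A sketch of such a one-shot Pod, assuming a busybox image that provides /bin/sh; with restartPolicy: OnFailure the container is restarted only if it exits with an error, and the Pod ends up Succeeded after a normal exit:
apiVersion: v1
kind: Pod
metadata:
  name: one-shot            # hypothetical name
spec:
  restartPolicy: OnFailure  # restart only on abnormal (non-zero) exit
  containers:
  - name: calc
    image: busybox          # assumed image with /bin/sh
    command: ["/bin/sh", "-c", "echo $((1+1))"]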
If you care about the context left after the container exits, such as its logs, files and directories, set restartPolicy to Never, because once the container is automatically recreated, these contents may be lost (garbage collected).
The official documentation enumerates many complex situations by mapping restartPolicy against the states of the containers in the Pod and the resulting Pod state. In fact, you don't need to memorize these correspondences at all. Just remember the following two basic design principles:
- As long as the Pod's restartPolicy allows restarting abnormal containers (such as Always), the Pod stays in the Running state and the container is restarted. Otherwise, the Pod enters the Failed state.
- For a Pod containing multiple containers, the Pod enters the Failed state only after all of its containers are in an abnormal state. Before that, the Pod stays Running, and its READY field shows the number of healthy containers, for example:
[root@master diandian]# kubectl get pod test-liveness-exec
NAME            READY   STATUS    RESTARTS   AGE
liveness-exec   0/1     Running   1          1m
2: Kubernetes PodPreset (available from v1.11 to v1.20; the feature has since been removed)
PodPreset details
There are so many fields in a Pod that it is impossible to remember them all. Can Kubernetes fill in some of these fields automatically?
For example, developers only need to submit a basic, very simple Pod YAML, and Kubernetes automatically adds other necessary information to the Pod object, such as labels, annotations and volumes. This information can be defined in advance by operations staff, which greatly lowers the barrier for developers writing Pod YAML.
A feature called PodPreset (Pod preset) appeared in Kubernetes v1.11 for exactly this purpose.
Understand Pod Preset
Pod Preset is an API resource. When a pod is created, it can be used to inject additional runtime information into the pod. A label selector specifies which pods a Pod Preset applies to. It is a tool object designed for batch, automatic modification of pods. With Pod Preset, pod template writers do not have to set the same information explicitly for every pod; a template writer using a particular service does not need to know all the details of that service.
How does PodPreset work
Kubernetes provides an admission controller (PodPreset). When it is enabled, it applies Pod Presets to incoming pod creation requests. When a pod creation request arrives, the system performs the following steps:
- Retrieve all available PodPresets.
- Check whether the label selector of each PodPreset matches the labels of the pod being created.
- Try to merge the resources defined in the PodPreset into the pod being created.
- If an error occurs during the merge, raise an event recording the merge error and create the pod without the injected PodPreset information.
- Annotate the changed pod spec to indicate that it has been modified by a PodPreset. The annotation has the form podpreset.admission.kubernetes.io/podpreset-<name>: "<resource version>".
- A Pod may match zero, one, or multiple Pod Presets, and a PodPreset may be applied to zero, one, or multiple pods. When a PodPreset is applied to one or more pods, Kubernetes modifies the pod spec: for changes to Env, EnvFrom and VolumeMounts, Kubernetes modifies the spec of every container in the pod; for changes to Volumes, it modifies the Pod spec itself.
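Assuming the settings.k8s.io/v1alpha1 API and the PodPreset admission controller are enabled (see the next section), a quick way to verify this behavior is to list the presets and inspect a pod's annotations; the pod name below is a placeholder:
# List the PodPreset objects in the current namespace
kubectl get podpreset
# Print a pod's annotations; a podpreset.admission.kubernetes.io/... key means a preset was applied
kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations}'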
Enable Pod Preset
In order to use Pod Preset in a cluster, you must ensure the following:
- The API type settings.k8s.io/v1alpha1/podpreset is enabled (for clusters installed with kubeadm).
This can be done by including settings.k8s.io/v1alpha1=true in the --runtime-config option of the API server.
Add the following configuration to the API server manifest, then restart the kubelet:
[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
command:
- --runtime-config=settings.k8s.io/v1alpha1=true
[root@master manifests]# systemctl restart kubelet
[root@k8s-master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
ceph.rook.io/v1
certificates.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
rook.io/v1alpha2
scheduling.k8s.io/v1beta1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
- The admission controller PodPreset is enabled (for binary installations).
One way to enable it is to include PodPreset in the --enable-admission-plugins option of the API server.
[root@master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
--enable-admission-plugins=PodPreset,NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction
[root@master ~]# systemctl restart kubelet
- Pod preset has been defined by creating a PodPreset object in the corresponding namespace.
Case
Example: a developer has written the following pod.yaml file:
# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend
spec:
  containers:
  - name: website
    image: daocloud.io/library/nginx
    ports:
    - containerPort: 80
If operations staff see this Pod, they will shake their heads: this Pod cannot be used in a production environment!
So the operations staff can define a PodPreset object, in which they predefine any fields they want added to the Pods written by developers.
# vim preset.yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:
      role: frontend
  env:
  - name: DB_PORT
    value: "6379"
  volumeMounts:
  - mountPath: /cache
    name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
selector:
The selector means that these additional definitions only act on Pod objects that carry the "role: frontend" label, which prevents "accidental injury".
Then, a set of standard fields in the Spec of Pod and corresponding values are defined.
For example:
- env defines a DB_PORT environment variable;
- volumeMounts defines the container's Volume mount directory;
- volumes defines an emptyDir Volume.
Next, suppose that the operation and maintenance personnel create the PodPreset first, and then the developers create the Pod:
# kubectl create -f preset.yaml
# kubectl create -f pod.yaml
After the pod is running, view the Pod's API object:
# kubectl get pod website -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: website
  labels:
    app: website
    role: frontend
  annotations:
    podpreset.admission.kubernetes.io/podpreset-allow-database: "resource version"
spec:
  containers:
  - name: website
    image: nginx
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
    ports:
    - containerPort: 80
    env:
    - name: DB_PORT
      value: "6379"
  volumes:
  - name: cache-volume
    emptyDir: {}
It is clear that the env, volumeMounts and volumes definitions from the preset have been added to this Pod, with exactly the same configuration as in the PodPreset. In addition, an annotation was automatically added to the Pod, indicating that the Pod object has been modified by a PodPreset.
Note:
The content defined in PodPreset will only be appended to the Pod API object itself before it is created, and will not affect the definition of any Pod controller.
For example, if an nginx Deployment is submitted now, the Deployment object itself will never be changed by the PodPreset; only the Pods created by the Deployment are modified.
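As a hedged sketch of that behavior: if a Deployment's Pod template carries the role: frontend label, every Pod it creates matches the allow-database PodPreset above and receives the injected fields, while the Deployment object itself is left untouched (the Deployment name is hypothetical):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      role: frontend
  template:
    metadata:
      labels:
        role: frontend      # matches the allow-database PodPreset selector
    spec:
      containers:
      - name: website
        image: daocloud.io/library/nginx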
Here's a question: what happens if multiple PodPresets apply to the same Pod object?
The Kubernetes project merges the changes the PodPresets want to make. If the changes conflict, the conflicting fields are left unmodified.
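To make this concrete, a sketch with two hypothetical PodPresets that select the same label but disagree on DB_PORT; per the merge rules above, the conflicting env entry is not applied to the pod, while non-conflicting fields would still be merged:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: preset-a            # hypothetical
spec:
  selector:
    matchLabels:
      role: frontend
  env:
  - name: DB_PORT
    value: "6379"
---
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: preset-b            # hypothetical
spec:
  selector:
    matchLabels:
      role: frontend
  env:
  - name: DB_PORT
    value: "3306"           # conflicts with preset-a, so this field is not injected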
3: Create a Deployment using YAML
k8s deployment resource creation process
- The user creates a Deployment through kubectl.
- Deployment creates a ReplicaSet.
- ReplicaSet creates a Pod.
Brief introduction
Deployment is a new-generation object for defining and managing multi-replica applications (i.e. multi-replica Pods). Compared with ReplicationController, it provides more complete functionality and is easier to use.
If a Pod fails, the corresponding service goes down with it. Kubernetes therefore provides the concept of Deployment: let Kubernetes manage a group of Pod replicas (a ReplicaSet), ensuring that a certain number of replicas is always available, so that the whole service does not go down because one Pod fails.
Deployment is also responsible for rolling updates of the replicas when the Pod definition changes.
This method of using one API object (Deployment) to manage another API object (Pod) is called the "controller pattern" in k8s. Deployment acts as the Pod's controller.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-vol
      volumes:
      - name: nginx-vol
        emptyDir: {}
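A usage sketch (the file name is assumed): after creating this Deployment you can observe the chain described above, with the Deployment owning a ReplicaSet and the ReplicaSet owning the Pods:
# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
# kubectl get deployment,replicaset,pod -l app=nginx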
END