K8s Resource: Pod (Part 1)

Resource configuration format:
  • apiVersion: The API group and version used to create the object, e.g. "v1" or "apps/v1".
  • kind: The resource type, e.g. Pod or Deployment.
  • metadata: Metadata about the resource, such as its name, its labels, and the namespace it belongs to.
  • spec: The desired state of the resource, defined by the user.
  • status: The actual state of the resource; it is maintained by k8s and cannot be set by the user.
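Taken together, the fields above form a manifest like the following minimal sketch (the pod name is hypothetical; status is omitted because k8s maintains it):

```yaml
apiVersion: v1            # API group/version
kind: Pod                 # resource type
metadata:
  name: example-pod       # hypothetical name
  namespace: default
  labels:
    app: example
spec:                     # desired state, defined by the user
  containers:
  - name: app
    image: nginx
```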
Get all the resource types supported by the cluster:
[root@k8smaster data]# kubectl api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
controllerrevisions                            apps                           true         ControllerRevision
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
replicasets                       rs           apps                           true         ReplicaSet
statefulsets                      sts          apps                           true         StatefulSet
tokenreviews                                   authentication.k8s.io          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch                          true         CronJob
jobs                                           batch                          true         Job
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
leases                                         coordination.k8s.io            true         Lease
events                            ev           events.k8s.io                  true         Event
daemonsets                        ds           extensions                     true         DaemonSet
deployments                       deploy       extensions                     true         Deployment
ingresses                         ing          extensions                     true         Ingress
networkpolicies                   netpol       extensions                     true         NetworkPolicy
podsecuritypolicies               psp          extensions                     false        PodSecurityPolicy
replicasets                       rs           extensions                     true         ReplicaSet
ingresses                         ing          networking.k8s.io              true         Ingress
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy
runtimeclasses                                 node.k8s.io                    false        RuntimeClass
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
roles                                          rbac.authorization.k8s.io      true         Role
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
csidrivers                                     storage.k8s.io                 false        CSIDriver
csinodes                                       storage.k8s.io                 false        CSINode
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment
Kubectl operation types:
  • Imperative operation: configure resources directly with commands (kubectl create, kubectl delete, ...).
  • Declarative operation: write the configuration manifest in a file and let k8s apply the configuration in the file (kubectl apply -f xxx.yaml).
Official API reference documentation: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/
Create resources:
apply and create:
[root@k8smaster ~]# mkdir /data
[root@k8smaster ~]# cd /data/
[root@k8smaster data]# vim develop-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: develop
[root@k8smaster data]# kubectl create -f develop-ns.yaml 
namespace/develop created
[root@k8smaster data]# kubectl get ns
NAME              STATUS   AGE
default           Active   36d
develop           Active   31s
kube-node-lease   Active   36d
kube-public       Active   36d
kube-system       Active   36d
[root@k8smaster data]# cp develop-ns.yaml sample-ns.yaml
[root@k8smaster data]# vim sample-ns.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: sample
[root@k8smaster data]# kubectl apply -f sample-ns.yaml 
namespace/sample created
[root@k8smaster data]# kubectl get ns
NAME              STATUS   AGE
default           Active   36d
develop           Active   2m7s
kube-node-lease   Active   36d
kube-public       Active   36d
kube-system       Active   36d
sample            Active   4s
[root@k8smaster data]# kubectl create -f develop-ns.yaml 
Error from server (AlreadyExists): error when creating "develop-ns.yaml": namespaces "develop" already exists
[root@k8smaster data]# kubectl apply -f sample-ns.yaml 
namespace/sample unchanged
  • create: Fails if the resource already exists; an existing resource cannot be created again.
  • apply: Applies a configuration declaratively: it creates the resource if it does not exist, and updates it if the live object differs from the manifest. It can be run repeatedly, and it can apply every configuration file in a directory (kubectl apply -f <dir>).
Output:
Output the configuration of the pod as a yaml template:
[root@k8smaster data]# kubectl get pods/nginx-deployment-6f77f65499-8g24d -o yaml --export
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  generateName: nginx-deployment-6f77f65499-
  labels:
    app: nginx-deployment
    pod-template-hash: 6f77f65499
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-deployment-6f77f65499
    uid: c22cc3e8-8fbe-420f-b517-5a472ba1ddef
  selfLink: /api/v1/namespaces/default/pods/nginx-deployment-6f77f65499-8g24d
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kk2fq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8snode1
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-kk2fq
    secret:
      defaultMode: 420
      secretName: default-token-kk2fq
status:
  phase: Pending
  qosClass: BestEffort
Run multiple containers:
[root@k8smaster data]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
spec:
  containers:
  - name: bbox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 86400"]
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster data]# kubectl apply -f pod-demo.yaml 
pod/pod-demo created
[root@k8smaster data]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   1          33d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   1          33d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   1          33d
nginx-deployment-6f77f65499-8g24d   1/1     Running   1          33d
pod-demo                            2/2     Running   0          83s
Enter the container:
kubectl exec
  • pod-demo: The name of the pod.
  • -c: If a pod has more than one container, specify the container name with -c to enter a specific one.
  • -n: Specify the namespace.
  • -it: Attach an interactive terminal.
  • -- /bin/sh: Run a shell inside the container (everything after -- is the command to execute).

[root@k8smaster data]# kubectl exec pod-demo -c bbox -n default -it  -- /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr DE:5A:59:84:21:8D  
        inet addr:10.244.1.105  Bcast:0.0.0.0  Mask:255.255.255.0    
        UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
        RX packets:13 errors:0 dropped:0 overruns:0 frame:0
        TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0 
        RX bytes:978 (978.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback  
        inet addr:127.0.0.1  Mask:255.0.0.0
        UP LOOPBACK RUNNING  MTU:65536  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000 
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

_The IP seen inside the bbox container is the Pod's IP: all containers in a Pod share the same IP address.

/ # netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      
/ # ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 sleep 86400
    6 root      0:00 /bin/sh
    13 root      0:00 ps aux

_The netstat -tnl output shows something listening on port 80, which is clearly not a process in bbox but port 80 of the myapp container. This again demonstrates that all containers in a Pod share one IP (and one network namespace).

/ # wget -O -  -q 127.0.0.1   #And when you access the local port, a page for myapp appears.
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
/ # exit
View the log of the specified container:
kubectl logs:
  • pod-demo: The name of the pod.
  • -c: If a pod has more than one container, specify the container name with -c to view that container's log.
  • -n: Specify the namespace.

[root@k8smaster data]# kubectl logs pod-demo -n default -c myapp 
127.0.0.1 - - [04/Dec/2019:06:36:04 +0000] "GET / HTTP/1.1" 200 65 "-" "Wget" "-"
127.0.0.1 - - [04/Dec/2019:06:36:11 +0000] "GET / HTTP/1.1" 200 65 "-" "Wget" "-"
127.0.0.1 - - [04/Dec/2019:06:36:17 +0000] "GET / HTTP/1.1" 200 65 "-" "Wget" "-"
Pod containers sharing the node host's network:

_Not recommended in production: if there are many containers on one host, port numbers may conflict.

[root@k8smaster data]# vim host-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  containers:
  - name : myapp
    image: ikubernetes/myapp:v1
  hostNetwork: true
[root@k8smaster data]# kubectl apply -f host-pod.yaml 
pod/mypod created
[root@k8smaster data]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
myapp-deployment-558f94fb55-plk4v   1/1     Running   1          33d   10.244.2.94      k8snode2   <none>           <none>
myapp-deployment-558f94fb55-rd8f5   1/1     Running   1          33d   10.244.2.95      k8snode2   <none>           <none>
myapp-deployment-558f94fb55-zzmpg   1/1     Running   1          33d   10.244.1.104     k8snode1   <none>           <none>
mypod                               1/1     Running   0          9s    192.168.43.176   k8snode2   <none>           <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   1          33d   10.244.1.103     k8snode1   <none>           <none>
pod-demo                            2/2     Running   0          26m   10.244.1.105     k8snode1   <none>           <none>
[root@k8smaster data]# curl 192.168.43.176:80
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
hostPort:
[root@k8smaster data]# kubectl delete -f host-pod.yaml 
pod "mypod" deleted
[root@k8smaster data]# vim host-pod.yaml    #Map port 80 of the container to port 8080 of the host on which the container is running.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  containers:
  - name : myapp
    image: ikubernetes/myapp:v1
    ports:
    - protocol: TCP
      containerPort: 80
      name: http
      hostPort: 8080
[root@k8smaster data]# kubectl apply -f host-pod.yaml 
pod/mypod created
[root@k8smaster data]# kubectl get pods -o wide   #myapp is running on node2, so we access port 8080 on node2 directly.
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
myapp-deployment-558f94fb55-plk4v   1/1     Running   1          33d   10.244.2.94    k8snode2   <none>           <none>
myapp-deployment-558f94fb55-rd8f5   1/1     Running   1          33d   10.244.2.95    k8snode2   <none>           <none>
myapp-deployment-558f94fb55-zzmpg   1/1     Running   1          33d   10.244.1.104   k8snode1   <none>           <none>
mypod                               1/1     Running   0          23s   10.244.2.96    k8snode2   <none>           <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   1          33d   10.244.1.103   k8snode1   <none>           <none>
pod-demo                            2/2     Running   0          37m   10.244.1.105   k8snode1   <none>           <none>
[root@k8smaster data]# curl 192.168.43.176:8080   
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
How a Pod can be accessed externally:
  • hostPort: Port mapping that maps a container port to a port on the node host running the container.
  • hostNetwork: The Pod shares the network namespace of the node it runs on.
  • NodePort: A specific port is opened on every node, and any traffic sent to that port is forwarded to the corresponding Service; the port range is 30000-32767.
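The NodePort option is configured on a Service rather than on the Pod itself. A minimal sketch (the Service name and selector label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: myapp                # assumed Pod label
  ports:
  - protocol: TCP
    port: 80                  # Service port inside the cluster
    targetPort: 80            # container port
    nodePort: 30080           # must be in 30000-32767; omit to auto-assign
```

Traffic sent to <any-node-IP>:30080 is then forwarded to the matching Pods.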
Labels:

_Labels are key/value data. They can be specified directly when a resource is created, or added to (and removed from) live objects whenever needed; a label selector can then be used to filter and select resources.

  • An object can carry multiple labels, and the same label can be attached to multiple resources.
  • In practice, labels along several different dimensions can be attached to one resource to enable flexible grouping, e.g. version labels, environment labels, and architecture-tier labels, which together identify the version, environment, and tier of the same resource.
  • A label key usually consists of an optional key prefix and a key name, in the form "KEY_PREFIX/KEY_NAME".
    • The key name may be up to 63 characters: letters, digits, hyphens, underscores, and dots, and it must begin with a letter or digit.
    • The key prefix must be a DNS subdomain no longer than 253 characters. When the prefix is omitted, the key is treated as private to the user. However, keys added automatically to user resources by k8s system components or third-party components must use a prefix; the "kubernetes.io/" prefix is reserved for kubernetes core components.
    • A label value may be up to 63 characters. It must either be empty or begin and end with a letter or digit, and may contain letters, digits, hyphens, underscores, and dots.

  • rel: a label naming the release channel, with common values such as:
  • stable: stable release.
  • beta: beta release.
  • canary: canary release.
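The naming rules above can be illustrated with a short sketch (the prefix and values are illustrative, not required names):

```yaml
metadata:
  labels:
    rel: stable                   # unprefixed key: treated as user-private
    environment: production       # a second grouping dimension
    example.com/tier: frontend    # prefixed key: DNS-subdomain prefix + key name
```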
Label selector:

_Label selectors express query or selection criteria over labels. The Kubernetes API currently supports two kinds of selectors.

  • Equality-based
    • The operators are =, == and !=; the first two are synonyms meaning equality, and the last means inequality.
  • Set-based
    • KEY in (VALUE1,VALUE2...)
    • KEY notin (VALUE1,VALUE2...)
    • KEY: all resources that have a label with this key.
    • !KEY: all resources that do not have a label with this key.

_Logic followed when using label selectors:

  • Multiple selectors specified at the same time are combined with a logical AND.
  • An empty label selector matches every resource object.
  • A null label selector matches no resources at all.

How to define a label selector:

Many kubernetes resource objects must be associated with Pod objects through label selectors, e.g. Service, Deployment, and ReplicaSet resources. They use a "selector" field nested in the spec field, specifying the selector through "matchLabels", and they even support "matchExpressions" for constructing complex label selectors.

  • matchLabels: Specify the selector by giving key-value pairs directly.
  • matchExpressions: A list of expression-based selector requirements, each of the form '{key: KEY_NAME, operator: OPERATOR, values: [VALUE1,VALUE2...]}', with a logical AND between the entries in the list.
    • With the In or NotIn operator, the values list must be non-empty; with Exists or DoesNotExist, the values list must be empty.
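A sketch of both forms inside a Deployment's spec (the label keys and values are illustrative):

```yaml
spec:
  selector:
    matchLabels:
      app: myapp                                # equality form
    matchExpressions:                           # ANDed with matchLabels
    - {key: tier, operator: In, values: [frontend, backend]}
    - {key: environment, operator: Exists}      # values must be empty for Exists
```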
Manage labels (create, modify, delete):
[root@k8smaster ~]# kubectl get pods --show-labels    #View labels for pod resources.
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          45h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          46h   <none>
[root@k8smaster data]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:           
    app: pod-demo        #Define an app label with the value pod-demo.
    rel: stable          #Define a rel label with the value stable.
spec:
  containers:
  - name: bbox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 86400"]
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster data]# kubectl apply -f pod-demo.yaml  #Create labels.
pod/pod-demo configured
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          46h   app=pod-demo,rel=stable
[root@k8smaster data]# kubectl label pods pod-demo -n default tier=frontend  #Add a label imperatively with kubectl label.
pod/pod-demo labeled
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          47h   app=pod-demo,rel=stable,tier=frontend
[root@k8smaster data]# kubectl label pods pod-demo -n default --overwrite app=myapp   #Overwrite (modify) existing labels.
pod/pod-demo labeled
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          47h   app=myapp,rel=stable,tier=frontend
[root@k8smaster data]# kubectl label pods pod-demo -n default rel-  #Delete the specified label by placing a minus sign after the label name.
pod/pod-demo labeled
[root@k8smaster data]# kubectl get pods --show-labels
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499
pod-demo                            2/2     Running   2          47h   app=myapp,tier=frontend
Filter resources using labels:
#View resources with label app equal to myapp.
[root@k8smaster data]# kubectl get pods -n default -l app=myapp  
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   2          47h

#View resources whose label app is not equal to myapp.
[root@k8smaster data]# kubectl get pods -n default -l app!=myapp --show-labels 
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
mypod                               1/1     Running   1          46h   <none>
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499

#View resources whose app label is nginx-deployment or myapp-deployment.
[root@k8smaster data]# kubectl get pods -n default -l "app in (nginx-deployment,myapp-deployment)" --show-labels  
NAME                                READY   STATUS    RESTARTS   AGE   LABELS
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   app=myapp-deployment,pod-template-hash=558f94fb55
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   app=nginx-deployment,pod-template-hash=6f77f65499

#-L: Output the app tag as a field
[root@k8smaster data]# kubectl get pods -n default -l "app in (nginx-deployment,myapp-deployment)"  -L app
NAME                                READY   STATUS    RESTARTS   AGE   APP
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   myapp-deployment
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   nginx-deployment

#View resources whose app label is neither nginx-deployment nor myapp-deployment.
[root@k8smaster data]# kubectl get pods -n default -l "app notin (nginx-deployment,myapp-deployment)"  -L app
NAME       READY   STATUS    RESTARTS   AGE   APP
mypod      1/1     Running   1          47h   
pod-demo   2/2     Running   2          47h   myapp

#View resources with app tags.
[root@k8smaster data]# kubectl get pods -n default -l "app"  -L app
NAME                                READY   STATUS    RESTARTS   AGE   APP
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d   myapp-deployment
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d   myapp-deployment
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d   nginx-deployment
pod-demo                            2/2     Running   2          47h   myapp

#View resources that do not have app tags.
[root@k8smaster data]# kubectl get pods -n default -l '!app'  -L app
NAME    READY   STATUS    RESTARTS   AGE   APP
mypod   1/1     Running   1          47h   
Resource annotations:

_Annotations are also key/value data, but they cannot be used to tag and select kubernetes objects; they only attach extra "metadata" to resources.

_The metadata in annotations is not limited in length, may be small or large, structured or unstructured, and may use characters that are forbidden in labels.

_When a new version of kubernetes introduces new fields for a resource (in the Alpha or Beta phase), they are often provided as annotations first, to avoid confusing users if the fields are later added to or removed. Once their use is settled, the new fields are promoted into the resource itself and the corresponding annotations are retired.

[root@k8smaster data]# vim pod-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:           
    app: pod-demo        
    rel: stable
  annotations:
    ik8s.io/project: hello
spec:
  containers:
  - name: bbox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","sleep 86400"]
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8smaster data]# kubectl apply -f pod-demo.yaml 
pod/pod-demo configured
[root@k8smaster data]# kubectl describe pods pod-demo -n default
Name:         pod-demo
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.43.136
Start Time:   Wed, 04 Dec 2019 14:20:01 +0800
Labels:       app=pod-demo
                rel=stable
                tier=frontend
Annotations:  ik8s.io/project: hello
                kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"ik8s.io/project":"hello"},"labels":{"app":"pod-demo","rel":"stable"},"name":"p...
Status:       Running
IP:           10.244.1.106
IPs:          <none>
Pod life cycle:
Status:
  • Pending: The Pod has been accepted by the Kubernetes system, but one or more container images have not yet been created; time is needed to schedule the Pod and download images before it can run.
  • Running: The Pod has been bound to a node and all of its containers have been created; at least one container is running, or is starting or restarting.
  • Succeeded: All containers in the Pod terminated successfully and will not be restarted.
  • Failed: All containers in the Pod have terminated, and at least one container terminated in failure (non-zero exit status or killed by the system).
  • Unknown: The state of the Pod cannot be obtained for some reason, usually because communication with the Pod's host failed.
Pod start process:

  • The user creates a Pod via a command or a yaml file, submitting the request to the API Server.
  • The API Server stores the Pod's configuration in etcd.
  • The Scheduler learns of the new Pod through its watch on the API Server.
  • The Scheduler selects a target host using its scheduling algorithm.
  • The API Server stores the scheduling result in etcd.
  • The kubelet on the scheduled host, notified by the API Server, reads the corresponding Pod's configuration and attributes and passes them to the docker engine.
  • The docker engine starts the containers; on success, the container information is reported back to the API Server.
  • The API Server then saves the Pod's actual state to etcd.
  • From then on, whenever the actual state diverges from the desired state, a series of coordinated actions brings them back in line.
  • During startup, the container initialization steps complete first.
  • After the main container starts, its postStart hook is executed.
  • While the main container runs, it is probed (livenessProbe and readinessProbe).
Container probes:
  • livenessProbe: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container and the container is subject to its restart policy; failed restarts are retried with an increasing back-off interval. If the container does not provide a liveness probe, the default result is Success.
  • readinessProbe: Indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoint controller removes the Pod's IP address from the endpoints of all Services that match the Pod. Before the initial delay, the readiness state defaults to Failure; if the container does not provide a readiness probe, the default result is Success.
Three handler types for probes:
  • ExecAction: Executes the specified command inside the container; the diagnosis is considered successful if the command exits with return code 0.
  • HTTPGetAction: Performs an HTTP GET request against the container's IP address on the specified port and path; the diagnosis is considered successful if the response status code is greater than or equal to 200 and less than 400.
  • TCPSocketAction: Performs a TCP check against the container's IP address on the specified port; the diagnosis is considered successful if the port is open.
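The exec and httpGet handlers are demonstrated later in this article; tcpSocket is not, so here is a minimal sketch of a tcpSocket liveness probe. The pod and container names are illustrative (not from the book's repo); nginx:1.12-alpine is the image used elsewhere in this article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp           # illustrative name
spec:
  containers:
  - name: liveness-tcp-demo
    image: nginx:1.12-alpine
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: http             # succeeds if a TCP connection to port 80 can be opened
      initialDelaySeconds: 3
      periodSeconds: 5
```

If nginx stops listening on port 80, the probe fails and the kubelet restarts the container according to the restart policy.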
Probe results:
  • Success: The container passed the diagnosis.
  • Failure: The container failed the diagnosis.
  • Unknown: The diagnosis itself failed; no action is taken.
Liveness detection (LivenessProbe):
ExecAction instance:

_We create a check file with touch and use the test command to verify that the file exists, treating its presence as proof of liveness. After creating the file, the container sleeps for 30 seconds and then deletes it. At that point the test command returns 1, the container is considered not alive, and the restart policy is applied to it. After the restart, the container recreates the file and the test command returns 0 again, confirming that the restart succeeded.

[root@k8smaster data]# git clone https://github.com/iKubernetes/Kubernetes_Advanced_Practical.git
Cloning into 'Kubernetes_Advanced_Practical'...
remote: Enumerating objects: 489, done.
remote: Total 489 (delta 0), reused 0 (delta 0), pack-reused 489
Receiving objects: 100% (489/489), 148.75 KiB | 110.00 KiB/s, done.
Resolving deltas: 100% (122/122), done.
[root@k8smaster chapter4]# cat liveness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-exec
  name: liveness-exec
spec:
  containers:
  - name: liveness-demo
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - test
        - -e
        - /tmp/healthy
[root@k8smaster chapter4]# kubectl describe pods liveness-exec
Name:         liveness-exec
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 15:27:55 +0800
Labels:       test=liveness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness-exec"},"name":"liveness-exec","namespace":"default...
Status:       Running
IP:           10.244.2.100
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:  docker://f91cec7c45f5a025e049b2d2e0b1dc15593e9c35f183a9a9aa8e09d40f22df4f
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:24fd20af232ca4ab5efbf1aeae7510252e2b60b15e9a78947467340607cd2ea2
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    State:          Running
      Started:      Fri, 06 Dec 2019 15:28:02 +0800
    Ready:          True
    Restart Count:  0   #At this point the restart count is 0, wait 30 seconds, and we'll check again
    Liveness:       exec [test -e /tmp/healthy] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  43s   default-scheduler  Successfully assigned default/liveness-exec to k8snode2
  Normal   Pulling    42s   kubelet, k8snode2  Pulling image "busybox"
  Normal   Pulled     36s   kubelet, k8snode2  Successfully pulled image "busybox"
  Normal   Created    36s   kubelet, k8snode2  Created container liveness-demo
  Normal   Started    36s   kubelet, k8snode2  Started container liveness-demo
  Warning  Unhealthy  4s    kubelet, k8snode2  Liveness probe failed:
[root@k8smaster chapter4]# kubectl describe pods liveness-exec
Name:         liveness-exec
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 15:27:55 +0800
Labels:       test=liveness-exec
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness-exec"},"name":"liveness-exec","namespace":"default...
Status:       Running
IP:           10.244.2.100
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:  docker://96f7dfd4ef1df503152542fdd1336fd0153773fb7dde3ed32f4388566888d6f0
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:24fd20af232ca4ab5efbf1aeae7510252e2b60b15e9a78947467340607cd2ea2
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    State:          Running
      Started:      Fri, 06 Dec 2019 15:29:25 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 06 Dec 2019 15:28:02 +0800
      Finished:     Fri, 06 Dec 2019 15:29:24 +0800
    Ready:          True
    Restart Count:  1    #A restart count of 1 was found.
    Liveness:       exec [test -e /tmp/healthy] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m25s                default-scheduler  Successfully assigned default/liveness-exec to k8snode2
  Normal   Killing    86s                  kubelet, k8snode2  Container liveness-demo failed liveness probe, will be restarted
  Normal   Pulling    56s (x2 over 2m24s)  kubelet, k8snode2  Pulling image "busybox"
  Normal   Pulled     56s (x2 over 2m18s)  kubelet, k8snode2  Successfully pulled image "busybox"
  Normal   Created    56s (x2 over 2m18s)  kubelet, k8snode2  Created container liveness-demo
  Normal   Started    55s (x2 over 2m18s)  kubelet, k8snode2  Started container liveness-demo
  Warning  Unhealthy  6s (x5 over 106s)    kubelet, k8snode2  Liveness probe failed:
[root@k8smaster chapter4]# kubectl delete -f liveness-exec.yaml 
pod "liveness-exec" deleted
HTTPGetAction instance:

_After the nginx container starts, a postStart hook creates a health-check file in the root directory of the web files. The liveness probe then requests that file via HTTP GET: if the request succeeds, the service is considered healthy; otherwise it is unhealthy.

[root@k8smaster chapter4]# cat liveness-http.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness-demo
    image: nginx:1.12-alpine
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - 'echo Healthy > /usr/share/nginx/html/healthz'
    livenessProbe:
      httpGet:
        path: /healthz
        port: http
        scheme: HTTP
      periodSeconds: 2          #Probe interval: 2 seconds.
      failureThreshold: 2       #Restart the container after 2 consecutive probe failures.
      initialDelaySeconds: 3     #Wait 3 seconds after the container starts before the first probe.
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
liveness-http                       1/1     Running   0          23s
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
[root@k8smaster chapter4]# kubectl describe pods liveness-http
Name:         liveness-http
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 16:28:26 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness-http","namespace":"default"},"s...
Status:       Running
IP:           10.244.2.101
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:   docker://cc2d4ad2e37ec04b0d629c15d3033ecf9d4ab7453349ab40def9eb8cfca28936
    Image:          nginx:1.12-alpine
    Image ID:       docker-pullable://nginx@sha256:3a7edf11b0448f171df8f4acac8850a55eff30d1d78c46cd65e7bc8260b0be5d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 06 Dec 2019 16:28:27 +0800
    Ready:          True
    Restart Count:  0     #At this point the restart count is 0; we enter the container and delete the check file by hand to test.
    Liveness:       http-get http://:http/healthz delay=3s timeout=1s period=2s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  70s   default-scheduler  Successfully assigned default/liveness-http to k8snode2
  Normal  Pulling    70s   kubelet, k8snode2  Pulling image "nginx:1.12-alpine"
  Normal  Pulled     69s   kubelet, k8snode2  Successfully pulled image "nginx:1.12-alpine"
  Normal  Created    69s   kubelet, k8snode2  Created container liveness-demo
  Normal  Started    69s   kubelet, k8snode2  Started container liveness-demo
[root@k8smaster chapter4]# kubectl exec -it liveness-http -- /bin/sh
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    healthz     index.html
/usr/share/nginx/html # rm -rf healthz 
/usr/share/nginx/html # exit
[root@k8smaster chapter4]# kubectl describe pods liveness-http
Name:         liveness-http
Namespace:    default
Priority:     0
Node:         k8snode2/192.168.43.176
Start Time:   Fri, 06 Dec 2019 16:28:26 +0800
Labels:       test=liveness
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"test":"liveness"},"name":"liveness-http","namespace":"default"},"s...
Status:       Running
IP:           10.244.2.101
IPs:          <none>
Containers:
  liveness-demo:
    Container ID:   docker://90f3016f707bcfc0e22c42dac54f4e4691e74db7fcc5d5d395f22c482c9ea704
    Image:          nginx:1.12-alpine
    Image ID:       docker-pullable://nginx@sha256:3a7edf11b0448f171df8f4acac8850a55eff30d1d78c46cd65e7bc8260b0be5d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 06 Dec 2019 16:34:34 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 06 Dec 2019 16:28:27 +0800
      Finished:     Fri, 06 Dec 2019 16:34:33 +0800
    Ready:          True
    Restart Count:  1  #A restart count of 1 was found.
    Liveness:       http-get http://:http/healthz delay=3s timeout=1s period=2s #success=1 #failure=2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kk2fq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-kk2fq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kk2fq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m24s                default-scheduler  Successfully assigned default/liveness-http to k8snode2
  Normal   Pulling    6m24s                kubelet, k8snode2  Pulling image "nginx:1.12-alpine"
  Normal   Pulled     6m23s                kubelet, k8snode2  Successfully pulled image "nginx:1.12-alpine"
  Warning  Unhealthy  17s (x2 over 19s)    kubelet, k8snode2  Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    17s                  kubelet, k8snode2  Container liveness-demo failed liveness probe, will be restarted
  Normal   Pulled     17s                  kubelet, k8snode2  Container image "nginx:1.12-alpine" already present on machine
  Normal   Created    16s (x2 over 6m23s)  kubelet, k8snode2  Created container liveness-demo
  Normal   Started    16s (x2 over 6m23s)  kubelet, k8snode2  Started container liveness-demo
[root@k8smaster chapter4]# kubectl exec -it liveness-http -- /bin/sh  #Checking again after the restart: the check file has been recreated.
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    healthz     index.html
/usr/share/nginx/html # exit
[root@k8smaster chapter4]# kubectl delete -f liveness-http.yaml 
pod "liveness-http" deleted
Readiness detection (ReadinessProbe):
ExecAction instance:
[root@k8smaster chapter4]# cat readiness-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness-exec
  name: readiness-exec
spec:
  containers:
  - name: readiness-demo
    image: busybox
    args: ["/bin/sh", "-c", "while true; do rm -f /tmp/ready; sleep 30; touch /tmp/ready; sleep 300; done"] 
    readinessProbe:
      exec:
        command: ["test", "-e", "/tmp/ready"]
      initialDelaySeconds: 5
      periodSeconds: 5
[root@k8smaster chapter4]# kubectl apply -f readiness-exec.yaml 
pod/readiness-exec created
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
readiness-exec                      1/1     Running   0          57s
[root@k8smaster chapter4]# kubectl exec readiness-exec -- rm -f /tmp/ready
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
readiness-exec                      0/1     Running   0          2m46s
[root@k8smaster chapter4]# kubectl exec readiness-exec -- touch /tmp/ready
[root@k8smaster chapter4]# kubectl get pods 
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-558f94fb55-plk4v   1/1     Running   2          35d
myapp-deployment-558f94fb55-rd8f5   1/1     Running   2          35d
myapp-deployment-558f94fb55-zzmpg   1/1     Running   2          35d
mypod                               1/1     Running   1          2d1h
nginx-deployment-6f77f65499-8g24d   1/1     Running   2          35d
pod-demo                            2/2     Running   2          2d2h
readiness-exec                      1/1     Running   0          3m54s
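Readiness probes support the same handler types as liveness probes. As a sketch, an httpGet readiness probe against the nginx image used earlier might look like this (the pod and container names are illustrative; /index.html is served by the default nginx configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-http         # illustrative name
spec:
  containers:
  - name: readiness-http-demo
    image: nginx:1.12-alpine
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        path: /index.html      # Pod is Ready only while this returns a 2xx/3xx status
        port: http
      initialDelaySeconds: 5
      periodSeconds: 5
```

Unlike a failed liveness probe, a failed readiness probe does not restart the container; it only removes the Pod from matching Service endpoints.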
Container restart policy:

_Pod objects may be terminated because a container program crashes or because resource usage exceeds its limits. Whether the container is restarted depends on the Pod's restartPolicy attribute.

  • Always: Restart the container whenever it terminates. This is the default.
  • OnFailure: Restart the container only when it terminates with an error.
  • Never: Never restart.
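restartPolicy is set at the Pod level, not per container. A minimal sketch (the pod name and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo           # illustrative name
spec:
  restartPolicy: OnFailure     # restart only on non-zero exit; the default is Always
  containers:
  - name: job-like
    image: busybox
    command: ["/bin/sh", "-c", "exit 1"]   # exits with an error, so it is restarted with back-off
```

With restartPolicy: Never, the same container would simply end in the Failed phase instead of being restarted.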
Termination of a Pod:

  • The user requests deletion of a Pod, submitting the request to the API Server.
  • The API Server records the deletion in etcd, but because of the grace period the Pod is not removed immediately.
  • The API Server marks the Pod as Terminating and notifies the kubelet on the corresponding node.
  • The kubelet instructs the docker engine to terminate the containers; the preStop hook is executed before the main container stops. Once it completes, the docker engine stops the container and reports back to the API Server.
  • The API Server notifies the endpoint controller (EndPoint Controller), which removes the Pod from the endpoint lists of matching Services. If the grace period expires, a KILL signal is sent to the container processes, and the result is fed back to the API Server.
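The preStop hook and the grace period mentioned above are both configured in the Pod spec. A hedged sketch (the pod name and drain command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo          # illustrative name
spec:
  terminationGracePeriodSeconds: 30   # after this many seconds, SIGKILL is sent
  containers:
  - name: web
    image: nginx:1.12-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]  # drain connections before shutdown
```

The preStop hook runs first; whatever is left of the grace period is then given to the container to exit on its own.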
Restricting Pod permissions (security context):

_The purpose of the security context is to limit the behavior of untrusted containers and to protect the system, and the other containers on it, from being affected by them.

K8s provides three ways to configure the security context:

  • Container-level Security Context: Applies only to the specified container.
  • Pod-level Security Context: Applies to all containers and volumes within a Pod.
  • Pod Security Policies: Apply to all Pods and volumes within a cluster.
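The first two levels can be combined in one manifest. A sketch showing a Pod-level context with a container-level override (all names and values here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secctx-demo            # illustrative name
spec:
  securityContext:             # Pod level: applies to every container in the Pod
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    securityContext:           # container level: overrides/extends the Pod-level settings
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```

Container-level fields take precedence over the Pod-level ones when both are set.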
Pod resource limits:
Resource quotas for containers:
  • CPU is a compressible resource, i.e. its quota can shrink on demand, while memory is (currently) an incompressible resource.
  • Measurement of CPU resources:
    • One core corresponds to 1000 millicores, i.e. 1 = 1000m, 0.5 = 500m.
  • Measurement of memory resources:
    • The default unit is bytes; the suffixes E, P, T, G, M, and K, or Ei, Pi, Ti, Gi, Mi, and Ki, may also be used.
Restriction methods:
  • priorityClassName: Used to set the Pod's priority in competing for system resources.
  • resources: Used to limit the Pod's use of resources such as CPU and memory (upper and lower bounds).
    • limits: upper limit.
    • requests: lower limit.

[root@k8smaster chapter4]# cat stress-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: stress-pod
spec:
  containers:
  - name: stress
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-c 1", "-m 1", "--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "400m"
[root@k8smaster chapter4]# cat memleak-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memleak-pod 
spec:
  containers:
  - name: simmemleak
    image: saadali/simmemleak
    resources:
      requests:
        memory: "64Mi"
        cpu: "1"
      limits:
        memory: "64Mi"
        cpu: "1"
[root@k8smaster chapter4]# kubectl apply -f memleak-pod.yaml 
pod/memleak-pod created
[root@k8smaster chapter4]# kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
memleak-pod                         0/1     ContainerCreating   0          6s
myapp-deployment-558f94fb55-plk4v   1/1     Running             3          39d
myapp-deployment-558f94fb55-rd8f5   1/1     Running             3          39d
myapp-deployment-558f94fb55-zzmpg   1/1     Running             3          39d
mypod                               1/1     Running             2          6d
nginx-deployment-6f77f65499-8g24d   1/1     Running             3          39d
pod-demo                            2/2     Running             4          6d1h
readiness-exec                      1/1     Running             1          3d22h
[root@k8smaster chapter4]# kubectl describe pods memleak-pod 
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  76s                default-scheduler  Successfully assigned default/memleak-pod to k8snode1
  Normal   Pulling    21s (x4 over 75s)  kubelet, k8snode1  Pulling image "saadali/simmemleak"
  Normal   Pulled     20s (x4 over 64s)  kubelet, k8snode1  Successfully pulled image "saadali/simmemleak"
  Normal   Created    20s (x4 over 64s)  kubelet, k8snode1  Created container simmemleak
  Normal   Started    20s (x4 over 64s)  kubelet, k8snode1  Started container simmemleak
  Warning  BackOff    9s (x7 over 62s)   kubelet, k8snode1  Back-off restarting failed container     #Startup keeps failing because we gave the container too little memory.
[root@k8smaster chapter4]# kubectl describe pod memleak-pod  | grep Reason
  Reason:       CrashLoopBackOff 
  Reason:       OOMKilled     #The startup failed because memory was exhausted.
Type     Reason     Age                    From               Message
Pod quality of service categories:

_Based on the requests and limits attributes of the Pod object, Kubernetes classifies Pods into three quality of service (QoS) categories: BestEffort, Burstable and Guaranteed:

  • Guaranteed: Pods in which every container sets requests and limits with equal values for CPU resources, and every container sets requests and limits with equal values for memory resources, belong to this category; it has the highest priority.
  • Burstable: Pods in which at least one container sets the requests attribute for CPU or memory, but which do not meet the Guaranteed requirements, automatically belong to this category; it has medium priority.
  • BestEffort: Pods in which no container sets the requests or limits attributes automatically belong to this category; its priority is the lowest.
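The memleak-pod above sets equal requests and limits for both CPU and memory in its only container, which is why it lands in Guaranteed (as the grep below confirms). A pod that sets only requests would be Burstable; a sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-burstable-demo     # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      requests:                # requests without matching limits => QoS class Burstable
        memory: "64Mi"
        cpu: "100m"
```

Under memory pressure, BestEffort Pods are evicted first, then Burstable, with Guaranteed Pods evicted last.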

[root@k8smaster chapter4]# kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
memleak-pod                         0/1     CrashLoopBackOff   7          13m
myapp-deployment-558f94fb55-plk4v   1/1     Running            3          39d
myapp-deployment-558f94fb55-rd8f5   1/1     Running            3          39d
myapp-deployment-558f94fb55-zzmpg   1/1     Running            3          39d
mypod                               1/1     Running            2          6d
nginx-deployment-6f77f65499-8g24d   1/1     Running            3          39d
pod-demo                            2/2     Running            4          6d1h
readiness-exec                      1/1     Running            1          3d23h
[root@k8smaster chapter4]# kubectl describe pod memleak-pod  | grep QoS
QoS Class:       Guaranteed
[root@k8smaster chapter4]# kubectl describe pod mypod  | grep QoS
QoS Class:       BestEffort
Summary:
apiVersion,kind,metadata,spec,status(Read-only)        #Required fields.

spec:                                                   #Embedded field in spec.
    containers
    nodeSelector
    nodeName
    restartPolicy                                       #Restart policy.
        Always,Never,OnFailure
    containers:
        name
        image
        imagePullPolicy: Always,Never,IfNotPresent      #Image pull policy.
        ports:
            name
            containerPort
        livenessProbe
        readinessProbe
        lifecycle
    ExecAction: exec
    TCPSocketAction: tcpSocket
    HTTPGetAction: httpGet


Added by cocell on Fri, 13 Dec 2019 05:51:57 +0200