Basic Introduction to kubernetes and Common Commands of kubectl

pod classification of k8s

There are two categories of pods:

  • Autonomous pod
  • Controller managed pod

Autonomous pods are managed through the k8s control plane, while static pods are created and managed directly by the kubelet
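
For reference, on a kubeadm-built control-plane node the control-plane components themselves run as static pods; the manifest path below is the kubeadm default and may differ in other setups:

# Static pod manifests are read directly by the kubelet from its staticPodPath
[root@master ~]# ls /etc/kubernetes/manifests
# The kubelet starts a pod for every manifest placed here and removes it when the file is deleted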

Autonomous pod

Autonomous pods always run in the foreground and are managed and scheduled by k8s. When a pod in the cluster stops for some reason, k8s regenerates a corresponding pod according to the replica count only when the pod is managed by a controller

A self-managed pod is still submitted to the apiserver after it is defined; the scheduler then assigns it to a node, and the kubelet on that node starts it

If this pod fails, the kubelet restarts its containers
If the node fails, the pod disappears; it cannot be rescheduled globally, so this kind of pod is not recommended

Controller managed pod

Common pod controllers (see the example after this list):

  • ReplicationController: ensures that the specified number of pod replicas is always running. The controller manages the replicas and related objects of pods of the same type: if there are too few replicas it creates more, and if there are too many it removes the extras, so that the actual state exactly matches what we define. It also supports rolling updates.

  • ReplicaSet: usually not used directly; it is managed by a declarative update controller called Deployment

  • Deployment: can only manage stateless applications

  • StatefulSet: a stateful replica set, used to manage stateful applications

  • DaemonSet: use a DaemonSet if you need to run one replica on every node
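
As a minimal illustration of this hierarchy (the names web and nginx are placeholders), a Deployment creates a ReplicaSet, which in turn keeps the pod replicas running and replaces any that are deleted:

[root@master ~]# kubectl create deployment web --image=nginx --replicas=2
# The deployment, its replica set, and its pods all carry the label app=web
[root@master ~]# kubectl get deployment,replicaset,pod -l app=web
# Deleting the pods makes the ReplicaSet immediately create replacements
[root@master ~]# kubectl delete pod -l app=web
[root@master ~]# kubectl get pod -l app=web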

Core Concepts

HPA

Deployment also supports a second-level controller, HPA (HorizontalPodAutoscaler, Horizontal Pod Autoscaler). Normally we might ensure that two pods run on a node. If user traffic increases and two pods are no longer enough to carry that amount of access, we need to add pod resources. But how many should we add?

The HPA controller monitors the pods automatically and scales them out, or back in, as needed.
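
A minimal sketch of how this is typically set up with kubectl (the deployment name web and the thresholds here are only illustrative; the autoscale command itself is covered in more detail below):

# Keep between 2 and 10 replicas, scaling on average CPU utilization
[root@master ~]# kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
[root@master ~]# kubectl get hpa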

service

Suppose there are two pods. A pod has its own life cycle: if the node it runs on goes down, the pod must be rebuilt on another node. The rebuilt pod is not the same pod as the original, even though both run the same service, and each container has its own IP address, so the IP of the container in the rebuilt pod differs from that of the container in the previous pod. This raises a question: how can the client access the containers in these pods when they may switch to another node at any time?

This is the service-discovery problem. A pod has a life cycle: it may leave at any time, and other pods may be added at any time. If they all provide the same service, clients cannot reach them by any fixed means, because the pods themselves are not fixed; whether by host name or IP address, they may be replaced at any moment.

To minimize the complexity of client-to-pod coordination, k8s adds a fixed intermediate layer, called a service, between each group of pods that provide the same service and their clients.

As long as a service is not deleted, its address and name stay fixed. A client that needs to reach the service no longer has to discover it dynamically; it just writes the service name in its configuration file. The service acts as a scheduler: it provides a stable access entry point and also works as a reverse proxy. When the service receives a client request, it proxies it to a back-end pod. If that pod goes down, a new pod is created immediately and is associated with the service right away as one of the available pods on its back end.

Client programs access a service by IP+port or hostname+port. The service associates its back-end pods not by their IPs or host names but by the pods' label selector. As long as the pods are created with consistent labels, the service recognizes them no matter how their IP addresses and hosts change. Any pod whose labels match the selector falls within the scope of the service; once a pod is associated, the service dynamically detects its IP address and port and uses it as an available back-end host when scheduling requests. The client's request is therefore sent to the service, and the service proxies it to a container in a real back-end pod, which responds.

A service is not a program or a component; it is essentially a set of DNAT rules in iptables (maintained by kube-proxy). As a k8s object, a service has its own name, which corresponds to the service's DNS name and can be resolved.
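
A minimal sketch with kubectl (the deployment name web and the ports are placeholders): exposing a deployment creates the service in front of the pods selected by its labels, and the endpoints object shows the pod addresses it currently proxies to.

# Create a service for the pods matched by the deployment's label selector
[root@master ~]# kubectl expose deployment web --port=80 --target-port=80
# The service's ClusterIP stays fixed; the endpoints change as pods come and go
[root@master ~]# kubectl get service web
[root@master ~]# kubectl get endpoints web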

AddOns (cluster add-ons)

dns pod: The first thing to do after installing k8s is to deploy a DNS pod in the cluster so that the name of every service can be resolved, with the records kept up to date dynamically: creating, deleting, or renaming a service triggers the DNS pod automatically, and the name in the DNS resolution record changes accordingly. If we manually change a service's IP address, the change also triggers automatically and the stale record is removed from the DNS service. This lets clients reach pod resources simply by accessing the service's name, which is then resolved by the cluster's dedicated DNS service.

This pod is one that k8s needs for its own services, so we call it an infrastructure-level pod object; such pods are also called cluster add-ons
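
A quick way to see the DNS add-on at work (a throwaway busybox pod is used here as an illustration; the DNS components themselves live in the kube-system namespace):

# The cluster DNS pods (CoreDNS/kube-dns) run as an add-on in kube-system
[root@master ~]# kubectl get pods -n kube-system -l k8s-app=kube-dns
# Resolve a service name from inside the cluster
[root@master ~]# kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default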

network model

There are three networks:

  • Node Network
  • service cluster network
  • pod network

Pod communication on the same node

Before a container starts, a virtual Ethernet interface pair (veth pair) is created for it, like the two ends of a pipe: one end stays in the host namespace and the other is moved into the container namespace and renamed eth0. The host-side interface is attached to a bridge, and the container's eth0 is assigned an IP address from the bridge's address segment.
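
On the node itself this wiring is visible with standard iproute2 tools (the bridge name cni0 below assumes a flannel-style CNI plugin; other plugins name the bridge differently):

# Host-side ends of the veth pairs created for the containers
[root@node1 ~]# ip link show type veth
# The bridge they are attached to, whose address segment supplies the pod IPs
[root@node1 ~]# ip addr show cni0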

Pod communication on different nodes

We already know that containers on one node are connected to the same bridge, so to allow containers running on different nodes to communicate, the bridges of these nodes need to be connected in some way.
The IP addresses of pods across the entire cluster must be unique, so all cross-node bridges must use non-overlapping network address segments, preventing pods on different nodes from getting the same IP address, that is, ensuring there are no IP address conflicts.

When a container on node A sends a packet to a container on node B, the packet first goes through the veth pair to node A's bridge, from the bridge to node A's physical adapter, across the network to node B's physical adapter, and finally through node B's bridge and the destination veth pair into the target container.

Note: The above scenario only works if the nodes are connected to the same gateway with no routing devices between them. Otherwise, a router will drop the packets because pod IP addresses are private, unless routing rules are set up. As the number of nodes grows, that routing configuration becomes very hard to maintain, so SDN (Software Defined Network) technology is used to simplify the problem: SDN can ignore the underlying network topology and make the nodes behave as if they were connected to the same gateway.
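
The per-node address segments are easy to observe from the pod list (output omitted; with a typical CNI setup each node's pods draw from their own non-overlapping segment):

# Pod IPs and the nodes they run on
[root@master ~]# kubectl get pods -o wide
# Node addresses on the node network
[root@master ~]# kubectl get nodes -o wide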

Pod and Service

From pod communication across nodes we know that pods communicate by IP address, but in a Kubernetes cluster pods may be destroyed and created frequently, i.e. a pod's IP is not fixed.
To solve this problem, a Service provides an abstraction layer for accessing pods: a single, constant access point for a set of functionally identical pods.
Regardless of how the back-end pods change, the Service serves externally as a stable front end.
The Service also provides high availability and load balancing, forwarding requests to the correct pod.
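
To observe this stability (the service name web and label app=web are placeholders): delete a back-end pod and compare, the ClusterIP stays the same while the endpoint list is refreshed with the replacement pod's address.

[root@master ~]# kubectl get service web        # the ClusterIP is fixed for the service's lifetime
[root@master ~]# kubectl get endpoints web      # current pod addresses behind the service
[root@master ~]# kubectl delete pod -l app=web  # the controller recreates the deleted pods...
[root@master ~]# kubectl get endpoints web      # ...and the endpoints now list the new pod IPs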

Common commands for kubectl

Syntax

kubectl [command] [TYPE] [NAME] [flags]

command: Subcommand
TYPE: Resource Type
NAME: Resource Name
flags: Command parameters
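
For example, in the command below get is the subcommand, pod is the resource TYPE, nginx is the resource NAME, and -o wide is a flag:

[root@master ~]# kubectl get pod nginx -o wide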

Command Help
kubectl's help is detailed: kubectl -h lists all subcommands, and appending -h to any subcommand prints detailed help and use cases, so you can always consult the help if you run into problems.

Resource Object
Most kubectl subcommands allow you to specify the resource object to operate on. Use the kubectl api-resources command as a reference for the available types.

Global parameters
kubectl options lists the parameters that can be used globally with any command:
--cluster='': Specify the cluster the command acts on
--context='':  Specify the context the command acts on
-n, --namespace='': Specify the namespace the command acts on
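
For example, the -n flag is what you use to look at resources outside the default namespace:

# List the pods in the kube-system namespace instead of default
[root@master ~]# kubectl get pods -n kube-system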

create

Create a resource from a file or from stdin

# Create deployments named wb1 and nginx1, using the nginx image
[root@master ~]# kubectl create deployment wb1 --image=nginx
deployment.apps/wb1 created
[root@master ~]# kubectl create deployment nginx1 --image=nginx
deployment.apps/nginx1 created
[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx1-5c9f6bbd8c-2ng6h   1/1     Running   0          40s

# Create a deployment named nginx2 using the nginx image; --replicas specifies the number of replicas to create
[root@master ~]# kubectl create deployment nginx2 --image=nginx --replicas=2
deployment.apps/nginx2 created
[root@master ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
nginx1-5c9f6bbd8c-2ng6h   1/1     Running             0          3m2s
nginx2-85bf7b8976-68q5d   0/1     ContainerCreating   0          42s
nginx2-85bf7b8976-74l6z   1/1     Running             0          42s

run

Run a pod from a specified image in the cluster (an autonomous pod)

# A pod started with run is a plain pod by default
[root@master ~]# kubectl run nginx --image nginx
pod/nginx created
[root@master ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
nginx                  0/1     ContainerCreating   0          11s

# Run a pod called nginx1 using the nginx image, specifying the label app=web
[root@master ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
nginx                  0/1     ContainerCreating   0          11s
wb1-5dbfb96758-hhfhb   1/1     Running             0          16m
[root@master ~]# kubectl run nginx1 --image=nginx --labels="app=web"
pod/nginx1 created
[root@master ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx                  1/1     Running   0          2m9s
nginx1                 1/1     Running   0          18s

# Create a few more pods with the same label app=web
[root@master ~]# kubectl run nginx2 --image=nginx --labels="app=web"
pod/nginx2 created
[root@master ~]# kubectl run nginx3 --image=nginx --labels="app=web"
pod/nginx3 created
#Check it out
[root@master ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx                  1/1     Running   0          5m49s
nginx1                 1/1     Running   0          3m58s
nginx2                 1/1     Running   0          73s
nginx3                 1/1     Running   0          43s
# Deleting by label removes the pods carrying that label
[root@master ~]# kubectl delete pod -l app=web
pod "nginx1" deleted
pod "nginx2" deleted
pod "nginx3" deleted

#Dry run: does not actually create anything; can be performed on the client or server side
[root@master ~]# kubectl run web123 --image=nginx --dry-run=client
pod/web123 created (dry run)

# Start a pod and put it in the foreground. If it exits, do not restart it
[root@master ~]# kubectl run -i -t web123 --image=busybox --restart=Never
If you don't see a command prompt, try pressing enter.
/ # ls -l
total 16
drwxr-xr-x    2 root     root         12288 Dec  7 00:20 bin
drwxr-xr-x    5 root     root           380 Dec 19 10:22 dev
drwxr-xr-x    1 root     root            66 Dec 19 10:22 etc
drwxr-xr-x    2 nobody   nobody           6 Dec  7 00:20 home
dr-xr-xr-x  219 root     root             0 Dec 19 10:22 proc
drwx------    1 root     root            26 Dec 19 10:22 root
dr-xr-xr-x   13 root     root             0 Dec 19 10:21 sys
drwxrwxrwt    2 root     root             6 Dec  7 00:20 tmp
drwxr-xr-x    3 root     root            18 Dec  7 00:20 usr
drwxr-xr-x    1 root     root            17 Dec 19 10:22 var

delete

Delete resources by file name, stdin, resource and name, or resource and label selector

#View existing services and pods
[root@master ~]# kubectl get pods,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-dgkbp    1/1     Running   0          97m
pod/nginx1-5c9f6bbd8c-2ng6h   1/1     Running   0          11m
pod/nginx2-85bf7b8976-68q5d   1/1     Running   0          9m8s
pod/nginx2-85bf7b8976-74l6z   1/1     Running   0          9m8s
pod/nginx3-59475d8756-l8mcq   1/1     Running   0          7m17s
pod/wb1-5dbfb96758-hhfhb      1/1     Running   0          11m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        144m
service/nginx        NodePort    10.105.224.204   <none>        80:31753/TCP   97m
#Delete the deployment and service named nginx
[root@master ~]# kubectl delete deployment,svc nginx
deployment.apps "nginx" deleted
service "nginx" deleted

#View after deletion
[root@master ~]# kubectl get pods,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx1-5c9f6bbd8c-2ng6h   1/1     Running   0          13m
pod/nginx2-85bf7b8976-68q5d   1/1     Running   0          10m
pod/nginx2-85bf7b8976-74l6z   1/1     Running   0          10m
pod/nginx3-59475d8756-l8mcq   1/1     Running   0          8m50s
pod/wb1-5dbfb96758-hhfhb      1/1     Running   0          13m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   146m
 

get

Show one or more resources

# View the pods that have been created
[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx-85b98978db-dgkbp    1/1     Running   0          90m
nginx1-5c9f6bbd8c-2ng6h   1/1     Running   0          5m2s
nginx2-85bf7b8976-68q5d   1/1     Running   0          2m42s
nginx2-85bf7b8976-74l6z   1/1     Running   0          2m42s
nginx3-59475d8756-l8mcq   1/1     Running   0          51s
wb1-5dbfb96758-hhfhb      1/1     Running   0          5m14s

# View the services that have been created
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        138m
nginx        NodePort    10.105.224.204   <none>        80:31753/TCP   91m

# View multiple resource types, separated by ','
[root@master ~]# kubectl get service,pod
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        138m
service/nginx        NodePort    10.105.224.204   <none>        80:31753/TCP   91m

NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-dgkbp    1/1     Running   0          91m
pod/nginx1-5c9f6bbd8c-2ng6h   1/1     Running   0          5m52s
pod/nginx2-85bf7b8976-68q5d   1/1     Running   0          3m32s
pod/nginx2-85bf7b8976-74l6z   1/1     Running   0          3m32s
pod/nginx3-59475d8756-l8mcq   1/1     Running   0          101s
pod/wb1-5dbfb96758-hhfhb      1/1     Running   0          6m4s

# View Namespace
[root@master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   139m
kube-node-lease   Active   139m
kube-public       Active   139m
kube-system       Active   139m

# View a specified resource type (deployments), or a single deployment by name
[root@master ~]# kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
nginx    1/1     1            1           93m
nginx1   1/1     1            1           7m49s
nginx2   2/2     2            2           5m29s
nginx3   1/1     1            1           3m38s
wb1      1/1     1            1           8m1s
[root@master ~]# kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           94m

expose

Expose a port for a resource; --target-port specifies the target port on the container

Create a service that listens on port 80 and forwards to port 8000 in the container, so that external access on port 80 reaches port 8000 inside the container

#Map 80 to 8000. Because its type is ClusterIP, this service can only be accessed inside the cluster; a NodePort service is also reachable on the host machine
[root@master ~]# kubectl expose deployment myapp --port 80 --target-port 8000
service/myapp exposed
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
myapp        ClusterIP   10.110.171.169   <none>        80/TCP         3s
nginx        NodePort    10.111.4.86      <none>        80:30859/TCP   41h

edit

Edit resources defined on the server using the default editor

[root@master ~]# kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         node1.example.com/192.168.235.172
Start Time:   Mon, 20 Dec 2021 22:14:38 +0800
Labels:       app=nginx
   ································          
[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx                     1/1     Running   0          87s


#Edit the pod's definition in the default editor and change the label
[root@master ~]# kubectl edit pod nginx
...
  labels:
    app: test		//Change the original nginx to test
  name: nginx
[root@master ~]# kubectl describe pod nginx
...
Labels:       app=test

scale

Expand or shrink the number of pods in a Deployment, ReplicaSet, ReplicationController, or Job

Set the number of pod replicas for the nginx deployment to 3

[root@master ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           8m30s
[root@master ~]# kubectl scale --replicas 3 deployment/nginx
deployment.apps/nginx scaled
[root@master ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/3     3            1           8m56s
[root@master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-5tsjt   1/1     Running   0          16s
nginx-6799fc88d8-dwrsh   1/1     Running   0          9m5s
nginx-6799fc88d8-sn82p   1/1     Running   0          15s

// The current number of replicas is 3; expand it to 5
[root@master ~]# kubectl scale --current-replicas 3 --replicas 5 deployment/nginx
deployment.apps/nginx scaled
[root@master ~]# kubectl get pod
NAME                     READY   STATUS              RESTARTS   AGE
nginx-6799fc88d8-5tsjt   1/1     Running             0          62s
nginx-6799fc88d8-dwrsh   1/1     Running             0          9m51s
nginx-6799fc88d8-jkmln   0/1     ContainerCreating   0          2s
nginx-6799fc88d8-qm5ld   0/1     ContainerCreating   0          2s
nginx-6799fc88d8-sn82p   1/1     Running             0          61s
[root@master ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   4/5     5            4           9m58s

autoscale

Automatically scales the number of replicas within a given range, increasing or decreasing them as the load on the service changes

Set the number of replicas of the nginx deployment to at least 1 and at most 5

[root@master ~]# kubectl autoscale --min 1 --max 5 deployment/nginx
horizontalpodautoscaler.autoscaling/nginx autoscaled
[root@master ~]# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   <unknown>/80%   1         5         0          8s

cluster-info

Displays the addresses of the control plane and of services labeled kubernetes.io/cluster-service=true. To further debug and diagnose cluster problems, use "kubectl cluster-info dump"

[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.235.179:6443
KubeDNS is running at https://192.168.235.179:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

describe

View details of a specific resource or resource group

//View details of the pod named nginx (describe also matches by name prefix, which is why the pod nginx-6799fc88d8-5tsjt is shown)
[root@master ~]# kubectl describe pod nginx
Name:         nginx-6799fc88d8-5tsjt
Namespace:    default
Priority:     0
Node:         node1.example.com/192.168.235.172
Start Time:   Mon, 20 Dec 2021 22:23:28 +0800
Labels:       app=nginx
              pod-template-hash=6799fc88d8
Annotations:  <none>
Status:       Running
IP:           10.244.1.5
IPs:
  IP:           10.244.1.5
Controlled By:  ReplicaSet/nginx-6799fc88d8
Containers:
  nginx:
    Container ID:   docker://5a331ad8c751b41bfa7fd98f4f73e1c97cbc9f8aa76aada48f0be3fe22c10097
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 20 Dec 2021 22:23:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n67dr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-n67dr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n67dr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  8m9s  default-scheduler  Successfully assigned default/nginx-6799fc88d8-5tsjt to node1.example.com
  Normal  Pulling    8m8s  kubelet            Pulling image "nginx"
  Normal  Pulled     8m    kubelet            Successfully pulled image "nginx" in 7.583042375s
  Normal  Created    8m    kubelet            Created container nginx
  Normal  Started    8m    kubelet            Started container nginx

logs

Print the logs of a pod or of the containers in a specified resource. If the pod has only one container, the container name is optional

// View nginx's log
[root@master ~]# kubectl logs deployment/nginx
Found 5 pods, using pod/nginx-6799fc88d8-dwrsh
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/12/20 14:14:43 [notice] 1#1: using the "epoll" event method
2021/12/20 14:14:43 [notice] 1#1: nginx/1.21.4
2021/12/20 14:14:43 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2021/12/20 14:14:43 [notice] 1#1: OS: Linux 4.18.0-257.el8.x86_64
2021/12/20 14:14:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/12/20 14:14:43 [notice] 1#1: start worker processes
2021/12/20 14:14:43 [notice] 1#1: start worker process 32
2021/12/20 14:14:43 [notice] 1#1: start worker process 33

attach

Connect to a running container

//Attach to the running pod nginx and get its output; by default this connects to the first container in the pod

[root@master ~]# kubectl attach nginx
Defaulting container name to nginx.
Use 'kubectl describe pod/nginx -n default' to see all of the containers in this pod.
If you don't see a command prompt, try pressing enter.

exec

Execute commands in containers

//Run date in the first container of deployment/nginx (the first container is used by default) and print the output
[root@master ~]# kubectl exec deployment/nginx date
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Mon Dec 20 14:38:25 UTC 2021

port-forward

Forward one or more local ports to pod

#Map port 80 in the pod to a random local port (no local port is specified before the colon)

[root@master ~]# kubectl port-forward nginx-6799fc88d8-5tsjt :80
Forwarding from 127.0.0.1:46459 -> 80
Forwarding from [::1]:46459 -> 80

[root@master ~]# curl 127.0.0.1:46459
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master ~]#

cp

Copy files and directories to or from containers

//Copy the local anaconda-ks.cfg file into the /tmp directory of the pod
[root@master ~]# kubectl cp anaconda-ks.cfg nginx-6799fc88d8-5tsjt:/tmp
[root@master ~]# kubectl exec pod/nginx-6799fc88d8-5tsjt -- ls -l /tmp
total 4
-rw------- 1 root root 1252 Dec 20 14:48 anaconda-ks.cfg

label

Update (add, modify, or delete) labels on resources.

  • Labels must begin with a letter or number and can use letters, numbers, hyphens, dots, and underscores, up to 63 characters.
  • If --overwrite is true, existing labels can be overwritten; otherwise attempting to overwrite a label results in an error.
  • If --resource-version is specified, the update uses this resource version; otherwise the existing resource version is used.
//View the current labels
[root@master ~]# kubectl describe deployment/nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Mon, 20 Dec 2021 22:14:38 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-6799fc88d8 (5/5 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  37m   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  29m   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
  Normal  ScalingReplicaSet  28m   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 5

//Append Label
[root@master ~]# kubectl label deployment/nginx user=yaya
deployment.apps/nginx labeled
[root@master ~]# kubectl describe deployment/nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Mon, 20 Dec 2021 22:14:38 +0800
Labels:                 app=nginx
                        user=yaya

api-resources

Print supported API resources on the server

//View all resources
[root@master ~]# kubectl api-resources
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap

api-versions

Print the supported API versions on the server, in the form group/version

[root@master ~]#  kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1

Rolling updates and rollbacks of versions in k8s

// Build two images with a Dockerfile
//Build image 1
[root@master ~]# mkdir httpd
[root@master ~]# cd httpd
[root@master httpd]# vim Dockerfile
[root@master httpd]# cat Dockerfile 
FROM busybox

RUN mkdir  /data && \
    echo "test page on v1" > /data/index.html
ENTRYPOINT ["/bin/httpd","-f","-h","/data"]
[root@master httpd]# docker build -t weixiaoya/httpd:v0.1 .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM busybox
latest: Pulling from library/busybox
3cb635b06aa2: Pull complete 
Digest: sha256:b5cfd4befc119a590ca1a81d6bb0fa1fb19f1fbebd0397f25fae164abe1e8a6a
Status: Downloaded newer image for busybox:latest
 ---> ffe9d497c324
Step 2/3 : RUN mkdir  /data &&     echo "test page on v1" > /data/index.html
 ---> Running in bf174265c61d
Removing intermediate container bf174265c61d
 ---> a074d85c6622
Step 3/3 : ENTRYPOINT ["/bin/httpd","-f","-h","/data"]
 ---> Running in e362ffafa0e2
Removing intermediate container e362ffafa0e2
 ---> 104d28f2d58c
Successfully built 104d28f2d58c
Successfully tagged weixiaoya/httpd:v0.1


//Build image 2
[root@master httpd]# vim Dockerfile 
[root@master httpd]# cat Dockerfile 
FROM busybox

RUN mkdir  /data && \
    echo "test page on v2" > /data/index.html
ENTRYPOINT ["/bin/httpd","-f","-h","/data"]

[root@master httpd]# docker build -t weixiaoya/httpd:v2 .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM busybox
 ---> ffe9d497c324
Step 2/3 : RUN mkdir  /data &&     echo "test page on v2" > /data/index.html
 ---> Running in aa475f8038dd
Removing intermediate container aa475f8038dd
 ---> 867882b9f918
Step 3/3 : ENTRYPOINT ["/bin/httpd","-f","-h","/data"]
 ---> Running in 4cbc3af592c9
Removing intermediate container 4cbc3af592c9
 ---> e423298d601e
Successfully built e423298d601e
Successfully tagged weixiaoya/httpd:v2


[root@master httpd]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED              SIZE
weixiaoya/httpd                                                   v2         e423298d601e   About a minute ago   1.24MB
weixiaoya/httpd                                                   v0.1       104d28f2d58c   3 minutes ago        1.24MB
busybox                                                           latest     ffe9d497c324   13 days ago          1.24MB


[root@master ~]# docker push weixiaoya/httpd:v0.1
The push refers to repository [docker.io/weixiaoya/httpd]
0d4853dfdf52: Pushed 
64cac9eaf0da: Mounted from library/busybox 
v0.1: digest: sha256:fb79b8b64543613f2677aeb489451b329ed7b4ccbade1820d9d5205495107f4f size: 734

Use k8s to run 3 pods based on the httpd:v0.1 image

[root@master ~]# kubectl create deploy httpd --image weixiaoya/httpd:v0.1 --replicas 3
deployment.apps/httpd created

[root@master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
httpd-7649d9b878-5lvf7    1/1     Running   0          8m4s
httpd-7649d9b878-ck6cq    1/1     Running   0          8m4s
httpd-7649d9b878-pkqkk    1/1     Running   0          8m4s

//Expose Port
[root@master ~]# kubectl expose deploy httpd --port 80 --type NodePort
service/httpd exposed

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
httpd        NodePort    10.111.22.218   <none>        80:31547/TCP   33s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        14h

[root@master ~]# curl 10.111.22.218
test page on v1

[root@master ~]# curl 192.168.235.179:31547
test page on v1

Performing the version update

[root@master ~]# kubectl set image deploy/httpd httpd=weixiaoya/httpd:v2
deployment.apps/httpd image updated

//New pods are created and old pods are deleted until the update is complete
[root@master ~]# kubectl get pod
NAME                      READY   STATUS              RESTARTS   AGE
httpd-7649d9b878-5lvf7    1/1     Terminating         0          11m
httpd-7649d9b878-ck6cq    1/1     Running             0          11m
httpd-7649d9b878-pkqkk    1/1     Terminating         0          11m
httpd-cb9c79f99-gfk9z     0/1     ContainerCreating   0          10s
httpd-cb9c79f99-w722f     1/1     Running             0          11s
httpd-cb9c79f99-zcsw5     1/1     Running             0          35s


[root@master ~]# kubectl get pod
NAME                      READY   STATUS        RESTARTS   AGE
httpd-cb9c79f99-gfk9z     1/1     Running       0          101s
httpd-cb9c79f99-w722f     1/1     Running       0          102s
httpd-cb9c79f99-zcsw5     1/1     Running       0          2m6s

[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
httpd        NodePort    10.111.22.218   <none>        80:31547/TCP   4m54s

//Visit
[root@master ~]# curl 10.111.22.218
test page on v2
[root@master ~]# curl 192.168.235.179:31547
test page on v2
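
While an update is in progress it can also be watched with kubectl rollout status (shown here as a sketch; output omitted):

# Watch the rolling update until the new ReplicaSet is fully rolled out
[root@master ~]# kubectl rollout status deploy/httpd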

Rollback

[root@master ~]# kubectl rollout undo deploy/httpd
deployment.apps/httpd rolled back

[root@master ~]# kubectl get pod
NAME                      READY   STATUS        RESTARTS   AGE
httpd-7649d9b878-96cnm    1/1     Running       0          8s
httpd-7649d9b878-mq6mh    1/1     Running       0          6s
httpd-7649d9b878-rtmjt    1/1     Running       0          10s
httpd-cb9c79f99-gfk9z     1/1     Terminating   0          3m21s
httpd-cb9c79f99-w722f     1/1     Terminating   0          3m22s
httpd-cb9c79f99-zcsw5     1/1     Terminating   0          3m46s


[root@master ~]# curl 10.111.22.218
test page on v1

[root@master ~]# curl 192.168.235.179:31547
test page on v1
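
kubectl also records a rollout history, which shows the revisions available to roll back to; --to-revision selects a specific one (a sketch; output omitted):

# List the deployment's recorded revisions
[root@master ~]# kubectl rollout history deploy/httpd
# Roll back to a specific revision instead of just the previous one
[root@master ~]# kubectl rollout undo deploy/httpd --to-revision=1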
