Detailed explanation of k8s controller

catalogue

  • Replication Controller and ReplicaSet

  • Stateless service Deployment concept

  • Creation of Deployment

  • Update of Deployment

  • Rollback of Deployment

  • Expansion and contraction of Deployment

  • Deployment update pause and resume

  • Deployment update considerations

  • Stateful application management StatefulSet concept

  • Create a StatefulSet application

Replication Controller and ReplicaSet

Replication Controller (RC) and ReplicaSet (RS) are two simple ways to deploy Pods. In production environments, Pods are mainly managed and deployed through the more advanced Deployment.

  • Replication Controller

  • ReplicaSet

Replication Controller

Replication Controller (RC) ensures that the number of Pod replicas reaches the expected value, that is, the number defined in the RC. In other words, a Replication Controller ensures that a Pod or a group of similar Pods is always available.

If the number of existing Pods is greater than the set value, the Replication Controller terminates the extra Pods; if it is smaller, the Replication Controller starts more Pods to meet the expected value. Unlike manually created Pods, Pods maintained by a Replication Controller are automatically replaced when they fail, are deleted, or are terminated. Therefore, even if an application requires only one Pod, it should be managed by a Replication Controller or similar means. A Replication Controller is similar to a process manager, but instead of monitoring individual processes on a single node, it monitors multiple Pods across multiple nodes.

An example of defining a Replication Controller is shown below.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

ReplicaSet

ReplicaSet is the next-generation Replication Controller and supports set-based label selectors. It is mainly used by Deployment to coordinate the creation, deletion and update of Pods. The only difference between a ReplicaSet and a Replication Controller is that a ReplicaSet supports set-based label selectors. Although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically, unless the Pods never need to be updated or there are other special arrangements.

An example of defining a ReplicaSet is as follows:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80

Take a look at how a Deployment automatically manages ReplicaSets:

[root@k8s-master01 ~]# kubectl get deploy -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 2d
[root@k8s-master01 ~]# kubectl get deploy -n kube-system metrics-server -oyaml
message: ReplicaSet "metrics-server-64c6c494dc" has successfully progressed.

View ReplicaSet

[root@k8s-master01 ~]# kubectl get rs -n kube-system
NAME DESIRED CURRENT READY AGE
metrics-server-64c6c494dc 1 1 1 2d

If we change a parameter and trigger a rolling upgrade, a new RS is generated, which makes rollback possible; an RC does not support rollback. In general, higher-level controllers such as Deployment and DaemonSet are used to manage the RC or RS, and the RS in turn manages the Pods.

Creating and deleting a Replication Controller or ReplicaSet is not very different from creating and deleting a Pod. Replication Controller is almost never used in production any more, and ReplicaSet is rarely used alone; instead, higher-level resources such as Deployment, DaemonSet and StatefulSet are used to manage Pods.

Stateless service Deployment concept

Deployment is used to deploy stateless services and is the most commonly used controller. It is generally used to manage and maintain stateless microservices within an enterprise, such as configserver, zuul and springboot applications. It can manage Pods with multiple replicas and provides seamless migration, automatic scaling up and down, automatic disaster recovery, one-click rollback and other functions.

Creation of Deployment

Create manually

[root@k8s-master01 ~]# kubectl create deployment nginx --image=nginx:1.15.2
deployment.apps/nginx created

Export the nginx Deployment to a YAML file

[root@k8s-master01 ~]# kubectl get deployment nginx -o yaml > nginx-deploy.yaml

View nginx deploy yaml

[root@k8s-master01 ~]# vim nginx-deploy.yaml

Delete everything from status onward and modify the number of replicas

replicas: 2 # number of replicas

Update configuration

[root@k8s-master01 ~]# kubectl replace -f nginx-deploy.yaml
deployment.apps/nginx replaced

View the Pods

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-66bbc9fdc5-vtk4n 1/1 Running 0 16m
nginx-66bbc9fdc5-x87z5 1/1 Running 0 34s

The above manages the Deployment through a file; alternatively, edit it directly:

[root@k8s-master01 ~]# kubectl edit deploy nginx
# Change the number of replicas back to 1
replicas: 1

View the Pods

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-66bbc9fdc5-vtk4n 1/1 Running 0 19m

View the file

[root@k8s-master01 ~]# cat nginx-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-07-22T08:50:24Z"
  generation: 1
  labels: # labels of the Deployment itself
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "1439468"
  uid: f6659adb-7b49-48a5-8db6-fbafa6baa1d7
spec:
  progressDeadlineSeconds: 600
  replicas: 2 # number of replicas
  revisionHistoryLimit: 10 # number of history revisions retained
  selector:
    matchLabels:
      app: nginx # must match the Pod labels below (and the matching RS), otherwise the Pods cannot be managed; immutable after creation, since modifying it generates a new RS that cannot correspond to the old labels
  strategy:
    rollingUpdate: # rolling upgrade strategy
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template: # Pod template
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

View the labels of deploy

[root@k8s-master01 ~]# kubectl get deploy --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
nginx 1/1 1 1 22m app=nginx

State analysis

[root@k8s-master01 ~]# kubectl get deploy -owide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 1/1 1 1 40m nginx nginx:1.15.2 app=nginx
  • NAME: the name of the Deployment

  • READY: ready Pods versus desired Pods

  • UP-TO-DATE: the number of replicas updated to the desired state

  • AVAILABLE: the number of available replicas

  • AGE: how long the application has been running

  • CONTAINERS: container names

  • IMAGES: container images

  • SELECTOR: label selector for the managed Pods

Update of Deployment

An update is triggered only by modifying the template in the spec (spec.template).
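As an illustrative sketch (not taken from the cluster above): any change under spec.template, such as the image below, rolls out a new ReplicaSet revision, while a change to spec.replicas alone merely scales the existing ReplicaSet without creating a revision.

```yaml
# Sketch only: template changes trigger a rolling update; replicas changes do not.
spec:
  replicas: 2            # scaling only, no new revision
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.3   # template change, triggers a rolling update
```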

View the image version

[root@k8s-master01 ~]# kubectl get deploy -oyaml | grep image
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent

Change the image of the deployment and record it

[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:1.15.3 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated

View the rolling update process

[root@k8s-master01 ~]# kubectl rollout status deploy nginx
deployment "nginx" successfully rolled out

Or use describe to view

[root@k8s-master01 ~]# kubectl describe deploy nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 8m37s (x2 over 21h) deployment-controller Scaled up replica set nginx-66bbc9fdc5 to 1
Normal ScalingReplicaSet 8m35s deployment-controller Scaled down replica set nginx-5dfc8689c6 to 0
Normal ScalingReplicaSet 7m41s (x2 over 165m) deployment-controller Scaled up replica set nginx-5dfc8689c6 to 1
Normal ScalingReplicaSet 7m39s (x2 over 165m) deployment-controller Scaled down replica set nginx-66bbc9fdc5 to 0

View rs

[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-5dfc8689c6 1 1 1 165m
nginx-66bbc9fdc5 0 0 0 21h

The rolling update strategy starts a new RS and sets its replica count to 1, deletes one of the old Pods, then starts the next new one, repeating until all replicas are replaced.

View rolling update policy configuration

[root@k8s-master01 ~]# vim nginx-deploy.yaml
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
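As a worked example (the replica count of 4 is assumed for illustration, not taken from the cluster above): maxSurge rounds up and maxUnavailable rounds down.

```yaml
# Illustrative arithmetic for replicas: 4 with the default percentages:
#   maxSurge: 25%       -> ceil(4 * 0.25)  = 1 extra Pod, so at most 5 Pods exist
#   maxUnavailable: 25% -> floor(4 * 0.25) = 1, so at least 3 Pods stay available
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%
```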

Rollback of Deployment

Update deploy image

[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:787977da --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx image updated

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5dfc8689c6-ww9v4 1/1 Running 0 17m
nginx-7d79b96f68-m94sh 0/1 ErrImagePull 0 12s

View historical versions

[root@k8s-master01 ~]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
3 kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
4 kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
5 kubectl set image deploy nginx nginx=nginx:787977da --record=true

Roll back to the previous version

[root@k8s-master01 ~]# kubectl rollout undo deploy nginx
deployment.apps/nginx rolled back

Check the Pods; you can see that only one is left

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5dfc8689c6-ww9v4 1/1 Running 0 20m

Make multiple updates

[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:787977da --record
deployment.apps/nginx image updated
[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:787977dadaa --record
deployment.apps/nginx image updated
[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:787977xxxxxdadaa --record
deployment.apps/nginx image updated

View history

[root@k8s-master01 ~]# kubectl rollout history deploy nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
3 kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
6 kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
7 kubectl set image deploy nginx nginx=nginx:787977da --record=true
8 kubectl set image deploy nginx nginx=nginx:787977dadaa --record=true
9 kubectl set image deploy nginx nginx=nginx:787977xxxxxdadaa --record=true

View details of the specified version

[root@k8s-master01 ~]# kubectl rollout history deploy nginx --revision=6
deployment.apps/nginx with revision #6
Pod Template:
Labels: app=nginx
pod-template-hash=5dfc8689c6
Annotations: kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.15.3 --record=true
Containers:
nginx:
Image: nginx:1.15.3
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>

Roll back to a specified version

[root@k8s-master01 ~]# kubectl rollout undo deploy nginx --to-revision=6
deployment.apps/nginx rolled back

View deploy status

[root@k8s-master01 ~]# kubectl get deploy -oyaml

Expansion and contraction of Deployment

Scaling up

[root@k8s-master01 ~]# kubectl scale --replicas=3 deploy nginx
deployment.apps/nginx scaled

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5dfc8689c6-nhplc 1/1 Running 0 41s
nginx-5dfc8689c6-ww9v4 1/1 Running 0 72m
nginx-5dfc8689c6-xh9l6 1/1 Running 0 41s

Scaling down

[root@k8s-master01 ~]# kubectl scale --replicas=2 deploy nginx
deployment.apps/nginx scaled

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5dfc8689c6-nhplc 1/1 Running 0 2m1s
nginx-5dfc8689c6-ww9v4 1/1 Running 0 73m
nginx-5dfc8689c6-xh9l6 0/1 Terminating 0 2m1s

NAME READY STATUS RESTARTS AGE
nginx-5dfc8689c6-nhplc 1/1 Running 0 2m17s
nginx-5dfc8689c6-ww9v4 1/1 Running 0 73m

Deployment update pause and resume

With the edit command, you can modify several settings and apply them in a single update. With the set command, however, every change triggers a separate update. What if you need several set changes to roll out together? Use the Deployment update pause function:

[root@k8s-master01 ~]# kubectl rollout pause deployment nginx
deployment.apps/nginx paused

Use the set command to modify the configuration

[root@k8s-master01 ~]# kubectl set image deploy nginx nginx=nginx:1.15.3 --record
Flag --record has been deprecated, --record will be removed in the future

# Make a second change: add CPU and memory configuration
[root@k8s-master01 ~]# kubectl set resources deploy nginx -c nginx --limits=cpu=200m,memory=128Mi --requests=cpu=10m,memory=16Mi
deployment.apps/nginx resource requirements updated

View deploy

[root@k8s-master01 ~]# kubectl get deploy nginx -oyaml
        resources:
          limits: # maximum CPU and memory for the container
            cpu: 200m
            memory: 128Mi
          requests: # minimum CPU and memory required to start the container
            cpu: 10m
            memory: 16Mi

Check whether the pod is updated

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5dfc8689c6-nhplc 1/1 Running 0 22m
nginx-5dfc8689c6-ww9v4 1/1 Running 0 93m

You can see that the Pods have not been updated

Resume the update

[root@k8s-master01 ~]# kubectl rollout resume deploy nginx
deployment.apps/nginx resumed

View rs

[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-5475c49ffb 0 0 0 71m
nginx-5dfc8689c6 0 0 0 4h12m
nginx-66bbc9fdc5 0 0 0 23h
nginx-68db656dd8 2 2 2 32s
nginx-799b8478d4 0 0 0 71m
nginx-7d79b96f68 0 0 0 77m

You can see that a new RS for nginx was created 32s ago: after the update is resumed, the new containers are created.

Deployment update considerations

View deploy

[root@k8s-master01 ~]# kubectl get deploy nginx -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "11"
    kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.15.3
      --record=true
  creationTimestamp: "2021-07-22T08:50:24Z"
  generation: 20
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "1588198"
  uid: f6659adb-7b49-48a5-8db6-fbafa6baa1d7
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10 # number of old RS revisions retained; if set to 0, no history is kept
  selector:
    matchLabels:
      app: nginx
  strategy: # rolling update strategy
    rollingUpdate:
      maxSurge: 25% # maximum number of Pods allowed above the desired count; optional, default 25%, may be a number or a percentage; if 0, maxUnavailable cannot be 0
      maxUnavailable: 25% # maximum number of Pods that may be unavailable during a rollback or update; optional, default 25%, may be a number or a percentage; if 0, maxSurge cannot be 0
    type: RollingUpdate # update method, RollingUpdate by default; maxSurge and maxUnavailable may be specified
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.3
        imagePullPolicy: IfNotPresent
        name: nginx
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 16Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2021-07-23T07:48:01Z"
    lastUpdateTime: "2021-07-23T07:48:01Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-07-23T08:10:50Z"
    lastUpdateTime: "2021-07-23T08:10:53Z"
    message: ReplicaSet "nginx-68db656dd8" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 20
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

.spec.minReadySeconds: an optional parameter that specifies the minimum number of seconds a newly created Pod must be Ready, without any of its containers crashing, before it is considered available. The default is 0, meaning a Pod is considered available as soon as it is Ready.

.spec.strategy.type: Recreate: delete all old Pods before creating new ones.
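A minimal sketch of the Recreate strategy (all old Pods are deleted before any new ones are created, so brief downtime is expected; the minReadySeconds value is an illustrative assumption):

```yaml
# Sketch: Recreate takes no maxSurge/maxUnavailable parameters.
spec:
  minReadySeconds: 5   # optional: a new Pod must stay Ready 5s to count as available
  strategy:
    type: Recreate
```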

Stateful application management StatefulSet concept

  • Basic concepts of StatefulSet

  • StatefulSet considerations

StatefulSet (stateful set, abbreviated sts) is often used to deploy stateful applications that need to start in order. For example, when containerizing a Spring Cloud project, Eureka is well suited to StatefulSet deployment: each Eureka instance gets a unique, fixed identifier, the instances do not need redundant service configuration, and other Spring Boot applications can register directly through Eureka's Headless Service.

  • The StatefulSet resource name for Eureka is eureka, and its Pods are eureka-0, eureka-1, eureka-2

  • Service: a Headless Service without a ClusterIP, e.g. eureka-svc

  • The addresses are eureka-0.eureka-svc.NAMESPACE_NAME, eureka-1.eureka-svc ...

Basic concepts of StatefulSet

StatefulSet is mainly used to manage workload API objects of stateful applications. For example, in the production environment, ElasticSearch cluster, MongoDB cluster or RabbitMQ cluster, Redis cluster, Kafka cluster and ZooKeeper cluster that need persistence can be deployed.

Similar to a Deployment, a StatefulSet manages Pods based on the same container specification. The difference is that a StatefulSet maintains a sticky identifier for each Pod. These Pods are created from the same specification, but they are not interchangeable: each Pod has a persistent identifier that is retained across rescheduling. The general format is statefulSetName-number. For example, if you define a StatefulSet named redis-sentinel and specify three Pods, the created Pods are named redis-sentinel-0, redis-sentinel-1 and redis-sentinel-2. Pods created by a StatefulSet generally communicate through a Headless Service. The difference between a Headless Service and an ordinary Service is that a Headless Service has no ClusterIP; the Pods communicate with each other through Endpoints. The general DNS format for a Headless Service is:

statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local

  • serviceName is the name of the Headless Service; it must be specified when creating the StatefulSet;

  • 0..N-1 is the ordinal of the Pod, starting from 0 up to N-1;

  • statefulSetName is the name of StatefulSet;

  • Namespace is the namespace where the service is located;

  • .cluster.local is Cluster Domain.
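Substituting the names used later in this article (StatefulSet web, Headless Service nginx, namespace default), the resulting DNS names are:

```
web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local
```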

If a company project needs to deploy Redis in master-slave mode on Kubernetes, a StatefulSet is very appropriate: when a StatefulSet starts, the next container is scheduled only after the current one has fully started, and each container's identifier is fixed, so the role of a Pod can be determined from its identifier.

For example, a StatefulSet named redis-ms deploys Redis in a Master-Slave architecture. When the first container starts, its identifier is redis-ms-0 and the hostname inside the Pod is redis-ms-0, so the role can be determined from the hostname: the container named redis-ms-0 acts as the Redis Master node, and the others act as Slave nodes. Since the Master's Headless Service name never changes, the Slaves can use it in their connection configuration. The Redis Slave node configuration file then looks like this:

port 6379
slaveof redis-ms-0.redis-ms.public-service.svc.cluster.local 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
......

redis-ms-0.redis-ms.public-service.svc.cluster.local is the Headless Service address of the Redis Master. Within the same namespace, writing just redis-ms-0.redis-ms is enough; the trailing public-service.svc.cluster.local can be omitted.

StatefulSet considerations

Generally, StatefulSet is used for applications with one or more of the following requirements:

  • A stable unique network identifier is required.

  • Persistent data is required.

  • Orderly and elegant deployment and expansion are required.

  • Orderly automatic rolling updates are required.

If the application does not need any stable identifier or ordered deployment, deletion, or scaling, it should be deployed with a stateless controller such as Deployment or ReplicaSet.

StatefulSet was a beta resource before Kubernetes 1.9 and was not available at all in Kubernetes versions before 1.5.

The storage used by the Pods must either be provisioned on demand by a PersistentVolume provisioner, or pre-provisioned by an administrator. Of course, storage can also be left unconfigured.
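A minimal sketch of dynamic provisioning through volumeClaimTemplates; the storageClassName "standard" is an assumption and must exist in your cluster. One PVC is created per Pod (e.g. www-web-0, www-web-1, ...):

```yaml
# Sketch (assumed StorageClass "standard"): goes under the StatefulSet spec.
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard
    resources:
      requests:
        storage: 1Gi
```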

To ensure data safety, deleting or scaling down a StatefulSet does not delete the volumes associated with it; you can manually and selectively delete the PVCs and PVs.

StatefulSet currently uses a Headless Service to provide the network identity and communication of its Pods. This Service needs to be created in advance.

When a StatefulSet is deleted, ordered termination of its Pods is not guaranteed. To achieve ordered, graceful termination, scale the StatefulSet down to 0 replicas before deleting it.

Create a StatefulSet application

  • Define a StatefulSet resource file

  • Create a StatefulSet

Define a StatefulSet resource file

[root@k8s-master01 ~]# vim nginx-sts.yaml
# Add the following
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx" # a StatefulSet must set serviceName, pointing to an existing Service, as defined above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.2
        ports:
        - containerPort: 80
  • kind: Service defines a Headless Service named nginx. The resulting DNS names follow the format web-0.nginx.default.svc.cluster.local, and similarly for the other Pods. Because no Namespace is specified, it is deployed in default.

  • kind: StatefulSet defines a StatefulSet named web; replicas indicates the number of Pod replicas to deploy, 2 in this example.

A Pod Selector (.spec.selector) must be set in the StatefulSet to match its labels (.spec.template.metadata.labels). Before version 1.8, .spec.selector was given a default value if omitted; since version 1.8, omitting a matching Pod Selector causes a StatefulSet creation error.

When the StatefulSet controller creates a Pod, it adds a label statefulset.kubernetes.io/pod-name whose value is the Pod's name; this label can be used to match a Service to an individual Pod.
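For illustration, this automatically added label makes it possible to target a single Pod with a Service; the Service name web-0 below is a hypothetical example:

```yaml
# Sketch: a Service selecting only the Pod web-0 via the controller-added label.
apiVersion: v1
kind: Service
metadata:
  name: web-0          # hypothetical name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - port: 80
    name: web
```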

Create a StatefulSet

[root@k8s-master01 ~]# kubectl create -f nginx-sts.yaml
service/nginx created
statefulset.apps/web created

View pod

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 5s
web-1 1/1 Running 0 3s

View service

[root@k8s-master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
nginx ClusterIP None <none> 80/TCP 52s

Scale up the sts

[root@k8s-master01 ~]# kubectl scale --replicas=3 sts web
statefulset.apps/web scaled

View pod

[root@k8s-master01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 2m21s
web-1 1/1 Running 0 2m19s
web-2 1/1 Running 0 14s

Start a busybox Pod to resolve the Headless Service

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Verify StatefulSet

[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
/ # ls
bin dev etc home proc root sys tmp usr var
/ # nslookup web-0.nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: web-0.nginx
Address 1: 172.25.244.242 web-0.nginx.default.svc.cluster.local
/ # exit

Get the IP address of the pod

[root@k8s-master01 ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 3m34s 172.25.244.245 k8s-master01 <none> <none>
nginx-68db656dd8-2dv8l 1/1 Running 0 106m 172.25.244.241 k8s-master01 <none> <none>
nginx-68db656dd8-8lcrk 1/1 Running 0 106m 172.25.244.240 k8s-master01 <none> <none>
web-0 1/1 Running 0 9m3s 172.25.244.242 k8s-master01 <none> <none>
web-1 1/1 Running 0 9m1s 172.25.244.243 k8s-master01 <none> <none>
web-2 1/1 Running 0 6m56s 172.25.244.244 k8s-master01 <none> <none>

You can see that the Service address resolves directly to the Pods' IPs. Access goes straight to the Pod IP rather than through a Service proxy, which removes one layer of proxying and improves performance. That is why no ClusterIP is configured:

  clusterIP: None


Added by mdowling on Fri, 14 Jan 2022 08:28:46 +0200