Service port exposure and proxying
service concept
A Service (SVC) matches a group of Pods through a label selector so that they can be accessed as a single unit from outside; each SVC can be thought of as a microservice.
A Service can provide load balancing, but it has the following limitation:
It only offers layer 4 load balancing (with a round-robin algorithm only) and has no layer 7 features. If requests need to be forwarded according to richer matching rules, layer 4 load balancing cannot do that.
service type
ClusterIP: the default type. A virtual IP that can only be reached from inside the cluster is assigned automatically; it is generally used for load balancing inside the cluster.
NodePort (exposes the Service): on top of ClusterIP, a mapped port is bound for the Service on every node, so clients outside the cluster can access it via NodeIP:NodePort.
LoadBalancer (exposes the Service): on top of NodePort, an external load balancer is created with the help of the cloud provider, and requests are forwarded to NodeIP:NodePort (see the sketch after this list).
ExternalName: brings a service that lives outside the cluster into the cluster so it can be used directly from inside. No proxy of any kind is created; only kube-dns version 1.7 and later supports it.
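As a quick illustration of the LoadBalancer type, the manifest below is only a minimal sketch: it assumes the cluster runs at a cloud provider whose controller can actually provision an external load balancer, and the Service name is hypothetical (it reuses the app=apache label from the examples later in this article).

apiVersion: v1
kind: Service
metadata:
  name: apache-lb            # hypothetical name, for illustration only
  namespace: default
spec:
  type: LoadBalancer         # the cloud provider allocates the external IP
  selector:
    app: apache              # forwards to Pods labeled app=apache
  ports:
  - port: 80                 # port exposed by the load balancer
    protocol: TCP
    targetPort: 80           # container port the traffic ends up on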
How SVC traffic flows through the components
First, kube-proxy watches the apiserver to learn about Services and their ports.
Through that watch, kube-proxy keeps track of the Pods behind each Service (labels, IPs, ports and so on) and is responsible for writing the corresponding iptables rules.
When a client accesses the SVC, it is actually matched by those iptables rules, and iptables then steers the traffic to a backend Pod.
With ipvs, ClusterIP and NodePort instead use an IPVS scheduling algorithm to route traffic to the backend Pods.
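To see the rules kube-proxy programs, you can inspect the nat table on a node. The commands below are only a rough sketch: they assume kube-proxy is running in its default iptables mode and that the apache Service from the examples further down exists; chain names can vary slightly between versions.

# KUBE-SERVICES is the entry chain kube-proxy creates in iptables mode
iptables -t nat -L KUBE-SERVICES -n | grep apache
# or dump the whole nat table and filter by the Service name
iptables-save -t nat | grep apache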
ipvs proxy mode
ipvs (IP Virtual Server) implements transport layer load balancing, that is, what is usually called layer 4 switching, and is part of the Linux kernel. ipvs runs on the host and acts as a load balancer in front of a cluster of real servers. It can forward TCP- and UDP-based service requests to the real servers and make the real servers' services appear as virtual services on a single IP address.
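If you want to try IPVS mode, the kernel modules it relies on have to be loadable on every node first. The commands below are only a sketch for a typical Linux distribution; module names can differ slightly between kernel versions.

# load the core IPVS kernel modules
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
# confirm they are present
lsmod | grep ip_vs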
Comparing ipvs with iptables: kube-proxy supports both an iptables mode and an ipvs mode. IPVS mode was introduced in Kubernetes v1.8, went beta in v1.9, and became generally available in v1.11. iptables mode was added in v1.1 and has been kube-proxy's default mode since v1.2. Both ipvs and iptables are based on netfilter. So what are the differences between ipvs and iptables?
- ipvs provides better scalability and performance for large clusters
- ipvs supports more sophisticated load balancing algorithms than iptables (least load, least connections, weighted, etc.)
- ipvs supports server health check and connection retry
ipvs provides the following load balancing algorithms (the scheduler to use can be chosen in the kube-proxy configuration, as sketched after this list):
- rr: round robin
- lc: least connections
- dh: destination hashing
- sh: source hashing
- sed: shortest expected delay
- nq: never queue
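On a kubeadm-built cluster, the proxy mode and the IPVS scheduler are usually set through the kube-proxy ConfigMap in kube-system. The snippet below is a minimal sketch of the relevant fields inside its config.conf and assumes that layout; it is not a complete configuration.

# kubectl -n kube-system edit configmap kube-proxy, then inside config.conf:
mode: "ipvs"          # switch kube-proxy from iptables to ipvs
ipvs:
  scheduler: "rr"     # one of the algorithms listed above (rr, lc, sh, ...)
# afterwards restart the kube-proxy Pods so the change takes effect:
# kubectl -n kube-system delete pod -l k8s-app=kube-proxy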
client Pod accessing Server Pod
- First, for the Service IP, kube-proxy programs the IPVS module instead of iptables, and IPVS performs the load balancing and traffic steering.
- When a client accesses the Service's virtual IP, IPVS distributes the traffic across the different backend Pods (the IPVS table can be inspected as sketched below).
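Assuming the ipvsadm utility is installed on a node of a cluster running in IPVS mode, the virtual servers and the real (Pod) servers behind them can be listed as follows; the ClusterIP in the second command is the one from the ClusterIP example below.

# list all IPVS virtual servers and their backends
ipvsadm -Ln
# restrict the output to one Service address, e.g. a ClusterIP
ipvsadm -Ln -t 10.98.249.13:80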
How Kubernetes exposes ports
Access from inside the cluster: ClusterIP
ClusterIP is a private IP inside the cluster and the default Service type in a Kubernetes cluster. It makes accessing services from inside the cluster very convenient, either directly via the Service's ClusterIP or via its ServiceName, but it cannot be reached from outside the cluster.
By default, a virtual IP that can only be accessed inside the cluster is assigned automatically.
[root@master ~]# cat cluster.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - image: bravealove1/apache:v1.0
        imagePullPolicy: IfNotPresent
        name: apache
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: apache
  type: ClusterIP          # specify the Service type
[root@master ~]# kubectl apply -f cluster.yml
deployment.apps/apache created
service/apache created
[root@master ~]# kubectl get pods,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/apache-599bc546b8-xp5rg   1/1     Running   0          12s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/apache       ClusterIP   10.98.249.13   <none>        80/TCP    12s
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   4d12h
[root@master ~]# curl 10.98.249.13
hello,this is a test page 1
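The Service can also be reached by name from inside the cluster, because cluster DNS resolves the ServiceName. A quick check might look like the sketch below; it assumes the busybox image can be pulled, and the Pod name is only for illustration.

# run a throwaway Pod and fetch the page through the Service name
kubectl run dns-test --image=busybox --rm -it --restart=Never -- wget -qO- http://apache
# the fully qualified name http://apache.default.svc.cluster.local works as well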
Access from outside the cluster: NodePort
NodePort is an early and still widely used way of exposing a service in Kubernetes. By default, all Services use the ClusterIP type; such a Service only gets a ClusterIP and can only be accessed from inside the cluster. To let external clients reach the service directly, change the Service type to NodePort, which maps the service's listening port onto the nodes.
The principle of NodePort is to open a port on the node, hand the traffic arriving at that port to kube-proxy, and let kube-proxy forward it on to the corresponding Pod.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - image: aimmi/apache:v1.0
        imagePullPolicy: IfNotPresent
        name: apache
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30000        # specify the external port
  selector:
    app: apache
  type: NodePort           # specify the Service type
[root@master ~]# kubectl apply -f nodeport.yml
deployment.apps/apache created
service/apache created
[root@master ~]# kubectl get pods,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/apache-599bc546b8-w6xnd   1/1     Running   0          10s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/apache       NodePort    10.96.90.92   <none>        80:30000/TCP   10s
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        4d12h
[root@master ~]# curl 192.168.145.188:30000
hello,this is a test page 1
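Because the port is opened on every node, the same page can be fetched through any node's address, not only the one used above. The second command below is only a placeholder; substitute the IP of any other node in this cluster.

# any node in the cluster answers on the NodePort
curl 192.168.145.188:30000
curl <another-node-ip>:30000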
DNS resolution
Service
- Creating a normal Service assigns it a DNS A record of the form my-svc.my-namespace.svc.cluster.local, which resolves to the ClusterIP of the Service.
- Creating a "headless" Service (one without a ClusterIP) also assigns it a DNS A record of the form my-svc.my-namespace.svc.cluster.local, but instead of resolving to a ClusterIP it resolves to the set of IPs of the selected Pods; if there are no backends, nothing is resolved. (A minimal headless Service sketch follows after this list.)
Pod
- Creating a Pod assigns it a DNS A record of the form pod-ip-address.my-namespace.pod.cluster.local, where the dots in the Pod IP are replaced by dashes (an example lookup is sketched below).
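For instance, if a Pod in the default namespace had the IP 10.244.1.23 (an address chosen purely for illustration), its record could be resolved from inside the cluster roughly like this:

# dashes replace the dots of the Pod IP in the record name
nslookup 10-244-1-23.default.pod.cluster.local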
Cases
[root@master ~]# cat deploy.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

The --record parameter records which command was executed for the current revision of the Deployment.

[root@master ~]# kubectl create -f deploy.yml --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment created

Run the get command right after creation to view this Deployment:

[root@master ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           2m14s
[root@master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74d589986c-kxcvx   1/1     Running   0          2m11s
nginx-deployment-74d589986c-s277p   1/1     Running   0          2m11s
nginx-deployment-74d589986c-zlf8v   1/1     Running   0          2m11s

Replace the original nginx image with nginx:1.9.1:

[root@master ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment.apps/nginx-deployment image updated

View the update progress:

[root@master ~]# kubectl rollout status deployment/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out

View the image information:

[root@master ~]# kubectl describe deployment/nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 24 Dec 2021 22:24:10 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 4
                        kubernetes.io/change-cause: kubectl create --filename=deploy.yml --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 4 total | 3 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.9.1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

Use the rollout history command to view the Deployment's revisions:

[root@master ~]# kubectl rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         kubectl create --filename=deploy.yml --record=true
2         kubectl create --filename=deploy.yml --record=true

Use the rollout undo command to roll back to the previous revision:

[root@master ~]# kubectl rollout undo deployment/nginx-deployment
deployment.apps/nginx-deployment rolled back

View the image information again:

[root@master ~]# kubectl describe deployment/nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Fri, 24 Dec 2021 22:24:10 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
                        kubernetes.io/change-cause: kubectl create --filename=deploy.yml --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

Scale the number of replicas of an application to 4:

[root@master ~]# kubectl apply -f cluster.yml
deployment.apps/apache created
[root@master ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
pod/apache-599bc546b8-6tctl   1/1     Running   0          28s
[root@master ~]# kubectl scale deployment apache --replicas 4
deployment.apps/apache scaled
[root@master ~]# kubectl get pods
NAME                          READY   STATUS              RESTARTS   AGE
pod/apache-599bc546b8-6gx4p   0/1     ContainerCreating   0          2s
pod/apache-599bc546b8-6tctl   1/1     Running             0          2m3s
pod/apache-599bc546b8-8qzr4   0/1     ContainerCreating   0          2s
pod/apache-599bc546b8-j48v5   0/1     ContainerCreating   0          2s

Create a Pod that runs the three containers nginx, redis and memcached:

[root@master ~]# vim pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: hellok8s
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: mynginx
    image: nginx
    ports:
    - containerPort: 80
  - name: myredis
    image: redis
    ports:
    - containerPort: 6379
  - name: memcached
    image: memcached
[root@master ~]# kubectl apply -f pod.yml
pod/hellok8s created
[root@master ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
hellok8s   3/3     Running   0          2m49s

Create a Service for a Pod so that it can be accessed via ClusterIP/NodePort:

[root@master ~]# vi nodeport.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - image: bravealove1/apache:v1.0
        imagePullPolicy: IfNotPresent
        name: apache
---
apiVersion: v1
kind: Service
metadata:
  name: apache
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30000        # specify the external port
  selector:
    app: apache
  type: NodePort           # specify the Service type
[root@master ~]# kubectl apply -f nodeport.yml
deployment.apps/apache created
service/apache created
[root@master ~]# kubectl get pods,svc
NAME                          READY   STATUS    RESTARTS   AGE
pod/apache-599bc546b8-w6xnd   1/1     Running   0          10s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/apache       NodePort    10.96.90.92   <none>        80:30000/TCP   10s
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        4d12h
[root@master ~]# curl 192.168.145.188:30000
hello,this is a test page 1

Create a Deployment and a Service, then use nslookup in a busybox container to resolve the Service:

[root@master ~]# cat host.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: httpd
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-externalname
  namespace: default
spec:
  type: ExternalName
  externalName: web.test.example.com

Deploy:

[root@master ~]# kubectl apply -f host.yml
deployment.apps/myapp-deploy created
service/apache created
[root@master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
myapp-deploy-55bd85c8b-9jgf2   1/1     Running   0          107s

View the state of the Pods and Services:

[root@master ~]# kubectl get pods,svc
NAME                               READY   STATUS    RESTARTS   AGE
pod/myapp-deploy-55bd85c8b-9jgf2   1/1     Running   0          18m

NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP            PORT(S)   AGE
service/kubernetes           ClusterIP      10.96.0.1    <none>                 443/TCP   4d14h
service/myapp-externalname   ExternalName   <none>       web.test.example.com   <none>    3m35s

[root@master ~]# vi busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: busybox-pod
  name: test-busybox
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: busybox
    imagePullPolicy: Always
    name: test-busybox

Deploy:

[root@master ~]# kubectl apply -f busybox.yaml
pod/test-busybox created

View the Pod state:

[root@master ~]# kubectl get pods,svc
NAME                               READY   STATUS    RESTARTS   AGE
pod/myapp-deploy-55bd85c8b-9jgf2   1/1     Running   0          19m
pod/test-busybox                   1/1     Running   0          8m15s

NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP            PORT(S)   AGE
service/kubernetes           ClusterIP      10.96.0.1    <none>                 443/TCP   4d14h
service/myapp-externalname   ExternalName   <none>       web.test.example.com   <none>    4m49s

Use exec -it to interact with busybox:

[root@master ~]# kubectl exec -it test-busybox /bin/sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # nslookup myapp-externalname.default.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53

myapp-externalname.default.svc.cluster.local   canonical name = web.test.example.com
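From the same busybox shell, a normal (ClusterIP) Service resolves to an A record pointing at its ClusterIP rather than to a canonical name. Assuming the apache Service from the ClusterIP example above is still present, the lookup would look roughly like this:

/ # nslookup apache.default.svc.cluster.local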