k8s data persistence: StatefulSet data persistence and automatic creation of PV and PVC

One: StatefulSet

StatefulSet is designed to solve the problem of stateful services; Deployment and ReplicaSet, by contrast, are designed for stateless services. Its usage scenarios include:
1. Stable persistent storage, i.e., a Pod can access the same persistent data after rescheduling, based on PVCs
2. Stable network identity, i.e., PodName and HostName remain unchanged after a Pod is rescheduled, based on a Headless Service (that is, a Service without a Cluster IP)
3. Ordered deployment and ordered scale-up, i.e., Pods are ordered and are deployed or scaled in the defined order (from 0 to N-1; all previous Pods must be Running and Ready before the next Pod starts), based on init containers
4. Ordered scale-down and ordered deletion (from N-1 to 0)

Because StatefulSet requires ordered Pod names, a Pod cannot be replaced at will: even after a Pod is rebuilt, its name stays the same. Each Pod's name carries an ordinal suffix.
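For example, with the headless Service and StatefulSet created later in this article, the first replica is always named statefulset-test-0, and inside the cluster it stays reachable at a stable DNS name of the form <pod-name>.<headless-service-name>.<namespace>.svc.cluster.local, i.e.:

statefulset-test-0.headless-svc.lbs-test.svc.cluster.local

no matter how often the Pod is rebuilt.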

As you can see from the above application scenarios, a StatefulSet consists of the following components:
1. A Headless Service (headless-svc), used to define the network identity. (Because it has no Cluster IP, it provides no load balancing.)
2. volumeClaimTemplates, used to automatically create PVCs (and, through a StorageClass, the backing PVs)
3. The StatefulSet definition for the specific application

StatefulSet: a Pod controller.
RC, RS, Deployment, DS: controllers for stateless services.
template: Pods created from the template are identical in state (apart from name, IP, and domain name), so any of them can be deleted and replaced with a newly generated Pod.

Stateful services: state from one or more previous interactions must be recorded and used in the next interaction; this is the defining criterion. For example: database services such as MySQL. (A Pod's name cannot be changed at will, and the data persistence directories also differ: each Pod has its own unique directory for persistent storage.)

Each Pod gets its own PVC, and each PVC binds to its own PV.
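The PVC names follow a fixed pattern, <volumeClaimTemplate-name>-<StatefulSet-name>-<ordinal>, which is what ties a rebuilt Pod back to its old volume. With the manifests below this yields, for example:

test-statefulset-test-0
test-statefulset-test-1
test-statefulset-test-2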

Test requirements:
1. Create a namespace under your own name; all of the following resources run in it.
2. Run an httpd web service with a StatefulSet resource, using three Pods, where each Pod serves different home page content and has its own dedicated persistent data. Then delete one of the Pods and check whether the newly generated Pod still has the same data as before.

1. Create the NFS service (persistence is based on NFS).

[root@master ~]# yum -y install nfs-utils rpcbind
[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata  *(rw,sync,no_root_squash)
[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
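Worth noting: the kubelet on each node mounts NFS volumes locally, so nfs-utils must also be installed on every worker node. A quick check from a node (assuming node01 and the master IP 192.168.1.1 used throughout this article):

[root@node01 ~]# yum -y install nfs-utils
[root@node01 ~]# showmount -e 192.168.1.1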

2. Create RBAC permissions

[root@master yaml]# vim rbac-rolebind.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: lbs-test
---
apiVersion: v1
kind: ServiceAccount       # Create the ServiceAccount that RBAC authorizes; its permissions are defined below
metadata:
  name: nfs-provisioner
  namespace: lbs-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner     # ClusterRole is cluster-scoped, so it takes no namespace
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: lbs-test            # Must match the ServiceAccount's namespace; if the ServiceAccount has none, use "default" here, otherwise the binding errors
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute the yaml file:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
namespace/lbs-test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
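Optionally, the binding can be sanity-checked with kubectl auth can-i by impersonating the ServiceAccount (a quick check, not part of the original transcript; it should print "yes" if the ClusterRoleBinding works):

[root@master yaml]# kubectl auth can-i create persistentvolumes --as=system:serviceaccount:lbs-test:nfs-provisioner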

3. Create the Deployment resource object (the nfs-client provisioner).
[root@master yaml]# vim nfs-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: lbs-test
spec:
  replicas: 1                      # Number of replicas is 1
  strategy:
    type: Recreate                 # Recreate the Pod on every update
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner           # The ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner   # This image is used
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes     # Mount directory inside the container
          env:
            - name: PROVISIONER_NAME            # The name this provisioner registers under
              value: lbs-test                   # Must match the provisioner field of the StorageClass below
            - name: NFS_SERVER
              value: 192.168.1.1
            - name: NFS_PATH                    # The NFS shared directory
              value: /nfsdata
      volumes:                                  # The NFS server IP and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.1.1
            path: /nfsdata

Execute the yaml file and view the Pod:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created

[root@master yaml]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          13s
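If provisioning fails later on, the provisioner's logs are the first place to look; the exact messages vary, but mount and permission errors show up here:

[root@master yaml]# kubectl logs -n lbs-test nfs-client-provisioner-5d88975f6d-wdbnc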

4. Create the StorageClass resource object (sc):

[root@master yaml]# vim sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs
  namespace: lbs-test    # Namespace name (note: StorageClass is actually cluster-scoped, so this field is ignored)
provisioner: lbs-test    # Same value as the PROVISIONER_NAME env variable of the Deployment resource
reclaimPolicy: Retain    # Reclaim policy

Execute the yaml file and view the SC:

[root@master yaml]# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/sc-nfs created

[root@master yaml]# kubectl get sc -n lbs-test
NAME     PROVISIONER   AGE
sc-nfs   lbs-test      8s
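Optionally (not part of the original test), this StorageClass can be marked as the cluster default, so PVCs that name no StorageClass also use it:

[root@master yaml]# kubectl patch storageclass sc-nfs -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'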

5. Create the StatefulSet resource object, which automatically creates the PVCs:

[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  namespace: lbs-test
  labels:
    app: headless-svc
spec:
  ports:
  - port: 80
    name: myweb
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-test
  namespace: lbs-test
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - image: httpd
        name: myhttpd
        ports:
        - containerPort: 80
          name: httpd
        volumeMounts:
        - mountPath: /mnt
          name: test
  volumeClaimTemplates:          # This field auto-creates a PVC for each Pod
  - metadata:
      name: test
      annotations:               # Specify the StorageClass by its name
        volume.beta.kubernetes.io/storage-class: sc-nfs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
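The volume.beta.kubernetes.io/storage-class annotation is the older, beta way of selecting a StorageClass; on current clusters the same thing is normally expressed with spec.storageClassName. A sketch of the equivalent template (same behavior, assuming the sc-nfs StorageClass above):

  volumeClaimTemplates:
  - metadata:
      name: test
    spec:
      storageClassName: sc-nfs   # Replaces the beta annotation
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi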

Execute the yaml file and view the Pods:

[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset-test created

[root@master yaml]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          22m
statefulset-test-0                        1/1     Running   0          8m59s
statefulset-test-1                        1/1     Running   0          2m30s
statefulset-test-2                        1/1     Running   0          109s
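The AGE column already hints at the ordered startup (0, then 1, then 2). To watch it happen live on a fresh deployment, a watch works (not part of the original transcript):

[root@master yaml]# kubectl get pod -n lbs-test -w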

**Check whether the PV and PVC were auto-created**
PV:

[root@master yaml]# kubectl get pv -n lbs-test
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-2   sc-nfs                  4m23s
pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-0   sc-nfs                  11m
pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            Delete           Bound    lbs-test/test-statefulset-test-1   sc-nfs                  5m4s

PVC:

[root@master yaml]# kubectl get pvc -n lbs-test
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-statefulset-test-0   Bound    pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5   100Mi      RWO            sc-nfs         13m
test-statefulset-test-1   Bound    pvc-99137753-ccd0-4524-bf40-f3576fc97eba   100Mi      RWO            sc-nfs         6m42s
test-statefulset-test-2   Bound    pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5   100Mi      RWO            sc-nfs         6m1s

Check that the persistent directories were created:

[root@master yaml]# ls /nfsdata/
lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5
lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba
lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5

6. Create data inside the Pods' persistent storage and test access.

[root@master yaml]# cd /nfsdata/
[root@master nfsdata]# echo 111 > lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html
[root@master nfsdata]# echo 222 > lbs-test-test-statefulset-test-1-pvc-99137753-ccd0-4524-bf40-f3576fc97eba/index.html
[root@master nfsdata]# echo 333 > lbs-test-test-statefulset-test-2-pvc-0454e9ad-892f-4e39-8dcb-79664f65d1e5/index.html
[root@master nfsdata]# kubectl get pod -o wide -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          30m     10.244.2.2   node02   <none>           <none>
statefulset-test-0                        1/1     Running   0          17m     10.244.1.2   node01   <none>           <none>
statefulset-test-1                        1/1     Running   0          10m     10.244.2.3   node02   <none>           <none>
statefulset-test-2                        1/1     Running   0          9m57s   10.244.1.3   node01   <none>           <none>
[root@master nfsdata]# curl 10.244.1.2
111
[root@master nfsdata]# curl 10.244.2.3
222
[root@master nfsdata]# curl 10.244.1.3
333
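Pod IPs change on reschedule, so in practice clients address StatefulSet Pods through the headless Service's stable DNS names instead. A quick check from a throwaway Pod (a sketch, not part of the original run; the busybox:1.28 image and its nslookup tool are assumptions):

[root@master nfsdata]# kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -n lbs-test -- nslookup statefulset-test-0.headless-svc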
7. Delete one of the Pods and check whether the recreated Pod still has the resource's data.
[root@master ~]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          33m
statefulset-test-0                        1/1     Running   0          20m
statefulset-test-1                        1/1     Running   0          13m
statefulset-test-2                        1/1     Running   0          13m
[root@master ~]# kubectl delete pod -n lbs-test statefulset-test-0
pod "statefulset-test-0" deleted

**The deleted Pod is recreated, with the same name:**
[root@master ~]# kubectl get pod -n lbs-test -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nfs-client-provisioner-5d88975f6d-wdbnc   1/1     Running   0          35m   10.244.2.2   node02   <none>           <none>
statefulset-test-0                        1/1     Running   0          51s   10.244.1.4   node01   <none>           <none>
statefulset-test-1                        1/1     Running   0          15m   10.244.2.3   node02   <none>           <none>
statefulset-test-2                        1/1     Running   0          14m   10.244.1.3   node01   <none>           <none>

**The data still exists:**
[root@master ~]# curl 10.244.1.4
111
[root@master ~]# cat /nfsdata/lbs-test-test-statefulset-test-0-pvc-2cb98c60-977f-4f3b-ba97-b84275f3b9e5/index.html
111

This completes the data persistence test for stateful services with the StatefulSet resource object.
The test shows that even after a Pod is deleted and rescheduled, the previously persisted data is still accessible.
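Scaling behaves the same way: PVCs created from volumeClaimTemplates are not deleted when the StatefulSet scales down, so scaling back up reattaches each Pod to its old data. A quick check (not part of the original transcript); after scaling back up, curl against statefulset-test-2 should return 333 again:

[root@master ~]# kubectl scale statefulset statefulset-test -n lbs-test --replicas=2
[root@master ~]# kubectl get pvc -n lbs-test        # test-statefulset-test-2 is still Bound
[root@master ~]# kubectl scale statefulset statefulset-test -n lbs-test --replicas=3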
