MySQL on Kubernetes for data persistence

Note the comments in the YAML configuration files, and pay close attention to the indentation.

1. PV and PVC basics:
1. A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator; it is comparable to a storage volume or a storage disk. PVs are provisioned manually by administrators or dynamically through a StorageClass.
2. A PersistentVolumeClaim (PVC) is a user's request for storage. It is analogous to a Pod: a Pod consumes node resources, while a PVC consumes storage resources. Roughly speaking, the PV is the total space that has been created, and the PVC requests a portion of that space for use.
3. Access modes for PVs:
(1) ReadWriteOnce: the volume can be mounted read-write by a single node only.
(2) ReadWriteMany: the volume can be mounted read-write by many nodes.
(3) ReadOnlyMany: the volume can be mounted read-only by many nodes.
4. PV space reclaim policies (persistentVolumeReclaimPolicy):
(1) Recycle: the data is cleared automatically (this policy is deprecated in newer Kubernetes versions).
(2) Retain: manual reclamation by an administrator is required.
(3) Delete: the associated storage asset is deleted; intended for cloud storage back ends.
5. A PV and a PVC are matched (bound) to each other through accessModes and storageClassName, as sketched below.
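
As a minimal sketch of how that matching works (the names, the hostPath backend, and the size here are illustrative only, not part of the experiment below), a pre-created PV binds to a PVC when the storageClassName strings match, the PV offers the access modes the claim asks for, and the PV's capacity can satisfy the request:

```yaml
# Hypothetical PV/PVC pair used only to illustrate the binding rules.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: example-sc      # must match the PVC below
  hostPath:
    path: /tmp/example-pv           # illustrative backend; the experiment below uses NFS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce                 # must be offered by the PV
  storageClassName: example-sc      # must match the PV above
  resources:
    requests:
      storage: 1Gi                  # must not exceed the PV's capacity
```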

2. A MySQL experiment to demonstrate data persistence:

**1. First, set up the NFS shared storage service:**

[root@master ~]# yum install -y nfs-utils rpcbind  # Note: install the NFS packages on all three machines (master and both nodes).  
[root@master ~]# vim /etc/exports  
/nfsdata  *(rw,sync,no_root_squash)  
[root@master ~]# mkdir /nfsdata  
[root@master ~]# systemctl start rpcbind  
[root@master ~]# systemctl start nfs-server.service  
[root@master ~]# showmount -e  
Export list for master:  
/nfsdata *  
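
A couple of optional follow-up checks (a sketch, assuming nfs-utils is also installed on the node you test from; 192.168.2.50 is the master's address used in the PV definition below):

```bash
# Make the NFS services start automatically after a reboot
systemctl enable rpcbind nfs-server

# From a worker node, confirm the export is visible and can be mounted
showmount -e 192.168.2.50
mount -t nfs 192.168.2.50:/nfsdata /mnt && umount /mnt
```
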
**2. Create the PV resource object:**

[root@master ~]# mkdir yaml  
[root@master ~]# cd yaml/  
[root@master yaml]# vim nfs-pv.yaml  

apiVersion: v1  
kind: PersistentVolume  
metadata:  
  name: lbh-pv  
spec:  
  capacity:  
    storage: 1Gi  
  accessModes:  
    - ReadWriteOnce          # Access mode: mountable read-write by a single node only
  persistentVolumeReclaimPolicy: Retain  # Reclaim policy: manual reclamation by an administrator
  storageClassName: nfs          # Name of the storage class
  nfs:  
    path: /nfsdata/lbh-pv  
    server: 192.168.2.50  
Apply the YAML file and check the status:

[root@master yaml]# kubectl apply -f nfs-pv.yaml  
persistentvolume/lbh-pv created  
[root@master yaml]# kubectl get pv  
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE  
lbh-pv   1Gi        RWO            Retain           Available           nfs                     27s  

**Note that the PV's STATUS is Available, meaning it has not yet been bound to any claim.**

The PV was created successfully.

**3. Create the PVC resource object:**

[root@master yaml]# vim nfs-pvc.yaml  

apiVersion: v1  
kind: PersistentVolumeClaim  
metadata:  
  name: lbh-pvc  
spec:  
  accessModes:  
    - ReadWriteOnce   # The access mode must match the PV resource
  resources:  
    requests:  
      storage: 1Gi  
  storageClassName: nfs      # Storage class name; must match the PV resource
**Apply the YAML file and view the PVC and PV status:**

[root@master yaml]# kubectl apply -f nfs-pvc.yaml  
persistentvolumeclaim/lbh-pvc created  
[root@master yaml]# kubectl get pv  
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE  
lbh-pv   1Gi        RWO            Retain           Bound    default/lbh-pvc   nfs                     3m55s  
[root@master yaml]# kubectl get pvc  
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE  
lbh-pvc   Bound    lbh-pv   1Gi        RWO            nfs            10s  
Note that both the PV and the PVC are now in the Bound state: they have been bound to each other successfully.
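
To see the binding in more detail (which claim a PV is bound to, its events, and so on), kubectl describe can be used, for example:

```bash
kubectl describe pv lbh-pv
kubectl describe pvc lbh-pvc
```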

**4. Create the Deployment resource object, using the mysql:5.6 image**

Pull the image on the nodes in advance:

[root@node01 ~]# docker pull mysql:5.6  
[root@node02 ~]# docker pull mysql:5.6  

**Create the Deployment resource object:**

[root@master yaml]# vim mysql.yaml  

apiVersion: extensions/v1beta1  
kind: Deployment  
metadata:  
  name: lbh-mysql  
spec:  
  selector:  
    matchLabels:  
      app: mysql  
  template:  
    metadata:  
      labels:  
        app: mysql  
    spec:  
      containers:  
      - image: mysql:5.6  
        name: mysql  
        env:         # Define environment variables; here the MySQL root password is set
        - name: MYSQL_ROOT_PASSWORD  
          value: 123.com  
        volumeMounts:  
        - name: mysql-storage  
          mountPath: /var/lib/mysql          # The database's data directory, which is what gets persisted
      volumes:  
      - name: mysql-storage  
        persistentVolumeClaim:  
          claimName: lbh-pvc         # Reference the PVC resource
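
Note: the Deployment apiVersion extensions/v1beta1 used here is deprecated and was removed in Kubernetes 1.16, so on newer clusters the manifest above would fail to apply. A sketch of the only change needed (the rest of the manifest stays the same; apps/v1 requires spec.selector, which this manifest already defines):

```yaml
# On Kubernetes 1.16 and later, use the apps/v1 API group for Deployments
apiVersion: apps/v1
kind: Deployment
```
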
**Apply the YAML file and view the status:**
[root@master yaml]# kubectl apply -f mysql.yaml  
deployment.extensions/lbh-mysql created  
[root@master yaml]# kubectl get pod  
NAME                         READY   STATUS              RESTARTS   AGE  
lbh-mysql-59778fd8d6-xhk7h   0/1     ContainerCreating   0          3m7s  
**The container is stuck in ContainerCreating. This can be troubleshot in four ways (example commands are sketched below):
(1) Use the kubectl describe command to view the Pod's details.
(2) Use the kubectl logs command to view the Pod's logs; since the container has not been created successfully, no logs exist yet.
(3) View the local system message log on the node.
(4) View the kubelet's log.**
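
Example commands for the four approaches (the Pod name is the one from this experiment; the last two commands are run on the node where the Pod is scheduled, and journalctl assumes the kubelet runs under systemd):

```bash
# (1) Pod details and events
kubectl describe pod lbh-mysql-59778fd8d6-xhk7h

# (2) Container logs (empty here, because the container never started)
kubectl logs lbh-mysql-59778fd8d6-xhk7h

# (3) Local system message log on the node
tail -f /var/log/messages

# (4) kubelet log on the node
journalctl -u kubelet -f
```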
Use the kubectl describe command to view the Pod's details:
[root@master yaml]# kubectl describe pod lbh-mysql-59778fd8d6-xhk7h 
The last event message:
mount.nfs: mounting 192.168.2.50:/nfsdata/lbh-pv failed, reason given by server: No such file or directory
As the message indicates, the directory specified for the NFS mount does not exist on the NFS server.
Create the directory and check the Pod's status again:

[root@master yaml]# mkdir -p /nfsdata/lbh-pv  
[root@master yaml]# kubectl get pod  
NAME                         READY   STATUS    RESTARTS   AGE  
lbh-mysql-59778fd8d6-xhk7h   1/1     Running   0          12m  

The Deployment resource was created successfully.

**5. Enter the MySQL database and create test data:**

[root@master yaml]# kubectl exec -it lbh-mysql-59778fd8d6-xhk7h -- mysql -uroot -p123.com  
mysql> show databases;   # View the databases.
+--------------------+  
| Database           |  
+--------------------+  
| information_schema |  
| mysql              |  
| performance_schema |  
+--------------------+  
3 rows in set (0.01 sec)  

mysql> create database lbh;   # Create a database.

mysql> use lbh;   # Select the database to use.
Database changed  
mysql> create table lbh_id (id int(4));   # Create a table.

mysql> insert lbh_id values(9224);   # Insert data into the table.

mysql> select * from lbh_id;   # View all the data in the table.
+------+  
| id   |  
+------+  
| 9224 |  
+------+  
1 row in set (0.00 sec)  

mysql> exit  

View the data in the NFS export directory:
[root@master yaml]# ls /nfsdata/lbh-pv/  
auto.cnf  ibdata1  ib_logfile0  ib_logfile1  lbh  mysql  performance_schema  

Data exists.
**6. Check which node the Pod is running on, stop that node's kubelet, verify whether the Pod is recreated, and check whether the data still exists in the new Pod:**

[root@master yaml]# kubectl get pod -o wide  
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES  
lbh-mysql-59778fd8d6-xhk7h   1/1     Running   0          26m   10.244.1.4   node01   <none>           <none>  

[root@node01 ~]# systemctl stop kubelet.service  

[root@master yaml]# kubectl get pod -o wide -w  
lbh-mysql-59778fd8d6-xhk7h   1/1     Running   0          28m   10.244.1.4   node01   <none>           <none>  
lbh-mysql-59778fd8d6-xhk7h   1/1     Terminating   0          33m   10.244.1.4   node01   <none>           <none>  
lbh-mysql-59778fd8d6-cf6g4   0/1     Pending       0          0s    <none>       <none>   <none>           <none>  
lbh-mysql-59778fd8d6-cf6g4   0/1     Pending       0          0s    <none>       node02   <none>           <none>  
lbh-mysql-59778fd8d6-cf6g4   0/1     ContainerCreating   0          1s    <none>       node02   <none>           <none>  
lbh-mysql-59778fd8d6-cf6g4   1/1     Running             0          2s    10.244.2.9   node02   <none>  
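
For reference: stopping the kubelet makes the node go NotReady, and only after the controller-manager's pod eviction timeout (about five minutes by default, which matches the 28m-to-33m gap in the watch output above) does the Deployment's ReplicaSet start a replacement Pod on another node. The node state can be watched alongside the Pods:

```bash
# Watch the node turn NotReady and the replacement Pod get scheduled
kubectl get nodes
kubectl get pod -o wide -w
```
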
**The Pod was recreated successfully. Enter the new Pod to check whether the data still exists:**

[root@master yaml]# kubectl get pod -o wide  
NAME                         READY   STATUS        RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES  
lbh-mysql-59778fd8d6-cf6g4   1/1     Running       0          12s   10.244.2.10   node02   <none>           <none>  
lbh-mysql-59778fd8d6-xhk7h   1/1     Terminating   0          44m   10.244.1.4    node01   <none>           <none>  

[root@master yaml]# kubectl exec -it lbh-mysql-59778fd8d6-cf6g4 -- mysql -uroot -p123.com  
mysql> show databases;  
+--------------------+  
| Database           |  
+--------------------+  
| information_schema |  
| lbh                |  
| mysql              |  
| performance_schema |  
+--------------------+  
4 rows in set (0.01 sec)  

mysql> use lbh  
Database changed  
mysql> select * from lbh_id;  
+------+  
| id   |  
+------+  
| 9224 |  
+------+  
1 row in set (0.00 sec)  
The data still exists. Check the files in the NFS export directory again:
[root@master yaml]# ls /nfsdata/lbh-pv/  
auto.cnf  ibdata1  ib_logfile0  ib_logfile1  lbh  mysql  performance_schema  

MySQL data persistence complete.
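
If you want to tear the experiment down afterwards, one possible cleanup sequence is sketched below. Because the PV's reclaim policy is Retain, deleting the PVC only moves the PV to the Released state; the data under /nfsdata/lbh-pv stays on the NFS server until it is removed by hand:

```bash
kubectl delete -f mysql.yaml      # remove the Deployment
kubectl delete -f nfs-pvc.yaml    # remove the PVC; the PV becomes Released
kubectl delete -f nfs-pv.yaml     # remove the PV object itself
rm -rf /nfsdata/lbh-pv            # on the NFS server, delete the retained data manually
```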
