1. Pre-start ideas
In a previous test, PV and PVC were deployed and configured manually, so that a pod's application data was stored in a PVC and decoupled from the pod.
That was an entirely manual process: create the PV by hand, then create the PVC by hand. With only a few pods in the cluster this is workable.
But with more than 1000 pods in the cluster, each needing a PVC to store its data, creating PVs and PVCs one by one by hand is an unmanageable amount of work.
It would be much better if, when a pod is created, the user only had to define the PVC, and the cluster then created a PV that satisfies the PVC's requirements, i.e. dynamic PV and PVC provisioning and allocation.
Kubernetes supports dynamic creation and allocation of PVs and PVCs by integrating with a storage backend.
This is the purpose of this test.
2. Testing environment
The test environment uses NFS as the storage backend, deployed and tested in a simple setup.
3. NFS deployment
Omitted here; refer to the previous document, "Data storage decoupling PV & PVC for pod application".
4. Storage classes
Official documents:
https://kubernetes.io/docs/concepts/storage/storage-classes/
Kubernetes uses storage classes to integrate with storage backends and provision PVs for PVCs dynamically.
Kubernetes has built-in provisioners for many storage types, such as CephFS and GlusterFS; see the official documentation for details.
There is no built-in provisioner for NFS, so an external plug-in is required.
External plug-in reference document:
https://github.com/kubernetes-incubator/external-storage
nfs plug-in configuration document:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
nfs-client-provisioner is a simple external provisioner for NFS in Kubernetes. It does not run an NFS server itself; it requires an existing NFS server to provide the storage.
5. NFS storage configuration files
[root@k8s-master1 nfs]# ls
class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml
5.1 class.yaml
[root@k8s-master1 nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false"
Create a storageclass
kind: StorageClass
The name of the new storage class is managed-nfs-storage.
name: managed-nfs-storage
provisioner literally means "supplier"; in practice it names the provisioner program that the storage class delegates to (my understanding). Its value must match the PROVISIONER_NAME environment variable in deployment.yaml.
provisioner: fuseim.pri/ifs
[root@k8s-master1 nfs]# kubectl apply -f class.yaml
storageclass.storage.k8s.io "managed-nfs-storage" created
[root@k8s-master1 nfs]# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   7s
5.2 deployment.yaml
[root@k8s-master1 nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
[root@k8s-master1 nfs]#
Create a ServiceAccount named nfs-client-provisioner:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
The pod's container name and image:
containers:
  - name: nfs-client-provisioner
    image: quay.io/external_storage/nfs-client-provisioner:latest
The mount path inside the pod:
volumeMounts:
  - name: nfs-client-root
    mountPath: /persistentvolumes
Environment variables read by the pod; modify the NFS server address and path to match your own environment:
env:
  - name: PROVISIONER_NAME
    value: fuseim.pri/ifs
  - name: NFS_SERVER
    value: 10.10.10.60
  - name: NFS_PATH
    value: /ifs/kubernetes
The NFS server address and path in the volume definition also need to be modified:
volumes:
  - name: nfs-client-root
    nfs:
      server: 10.10.10.60
      path: /ifs/kubernetes
The modified deployment.yaml changes only the NFS server address and directory:
[root@k8s-master1 nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.32.130
            - name: NFS_PATH
              value: /mnt/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.32.130
            path: /mnt/k8s
[root@k8s-master1 nfs]# kubectl apply -f deployment.yaml
serviceaccount "nfs-client-provisioner" created
deployment.extensions "nfs-client-provisioner" created
[root@k8s-master1 nfs]#
[root@k8s-master1 nfs]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65bf6bd464-qdzcj   1/1     Running   0          1m
[root@k8s-master1 nfs]# kubectl describe pod nfs-client-provisioner-65bf6bd464-qdzcj
Name:               nfs-client-provisioner-65bf6bd464-qdzcj
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-master3/192.168.32.130
Start Time:         Wed, 24 Jul 2019 14:44:11 +0800
Labels:             app=nfs-client-provisioner
                    pod-template-hash=65bf6bd464
Annotations:        <none>
Status:             Running
IP:                 172.30.35.3
Controlled By:      ReplicaSet/nfs-client-provisioner-65bf6bd464
Containers:
  nfs-client-provisioner:
    Container ID:   docker://67329cd9ca608223cda961a1bfe11524f2586e8e1ccba45ad57b292b1508b575
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 24 Jul 2019 14:45:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.32.130
      NFS_PATH:          /mnt/k8s
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-4n4jn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.32.130
    Path:      /mnt/k8s
    ReadOnly:  false
  nfs-client-provisioner-token-4n4jn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-4n4jn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                  Message
  ----    ------     ----  ----                  -------
  Normal  Scheduled  2m    default-scheduler     Successfully assigned default/nfs-client-provisioner-65bf6bd464-qdzcj to k8s-master3
  Normal  Pulling    2m    kubelet, k8s-master3  pulling image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Pulled     54s   kubelet, k8s-master3  Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Created    54s   kubelet, k8s-master3  Created container
  Normal  Started    54s   kubelet, k8s-master3  Started container
[root@k8s-master1 nfs]#
5.3 rbac.yaml
Grant permissions to the ServiceAccount nfs-client-provisioner.
The ServiceAccount nfs-client-provisioner was already created when deployment.yaml was applied.
[root@k8s-master1 nfs]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master1 nfs]#
[root@k8s-master1 nfs]# kubectl apply -f rbac.yaml
serviceaccount "nfs-client-provisioner" unchanged
clusterrole.rbac.authorization.k8s.io "nfs-client-provisioner-runner" created
clusterrolebinding.rbac.authorization.k8s.io "run-nfs-client-provisioner" created
role.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
rolebinding.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
[root@k8s-master1 nfs]#
Check the created RBAC objects:
[root@k8s-master1 nfs]# kubectl get clusterrole |grep nfs
nfs-client-provisioner-runner            2m
[root@k8s-master1 nfs]# kubectl get role |grep nfs
leader-locking-nfs-client-provisioner    2m
[root@k8s-master1 nfs]# kubectl get rolebinding |grep nfs
leader-locking-nfs-client-provisioner    2m
[root@k8s-master1 nfs]# kubectl get clusterrolebinding |grep nfs
run-nfs-client-provisioner               2m
[root@k8s-master1 nfs]#
6. Testing
Test using the official test-claim.yaml:
[root@k8s-master1 nfs]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
PV and PVC before applying test-claim.yaml:
[root@k8s-master1 nfs]# kubectl get pv
No resources found.
[root@k8s-master1 nfs]# kubectl get pvc
No resources found.
[root@k8s-master1 nfs]#
Apply the file:
[root@k8s-master1 nfs]# kubectl apply -f test-claim.yaml
persistentvolumeclaim "test-claim" created
PV and PVC after applying:
[root@k8s-master1 nfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            6s
[root@k8s-master1 nfs]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   8s
[root@k8s-master1 nfs]#
With the NFS storage class in place, a user only needs to create a PVC; the system automatically creates a matching PV and binds it to the PVC.
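Note that test-claim.yaml requests the storage class through the legacy volume.beta.kubernetes.io/storage-class annotation. On newer Kubernetes versions the same request can be written with the spec.storageClassName field instead; a minimal sketch (the claim name test-claim-sc is only illustrative, not part of the official files):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-sc                       # illustrative name
spec:
  storageClassName: managed-nfs-storage     # replaces the beta annotation
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi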
Check the NFS server's storage directory; a subdirectory has been created for the PVC:
[root@k8s-master3 k8s]# pwd
/mnt/k8s
[root@k8s-master3 k8s]# ls
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[root@k8s-master3 k8s]#
Check the mount directory inside the provisioner pod:
[root@k8s-master1 nfs]# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[root@k8s-master1 nfs]#
7. Test using the official test-pod.yaml
[root@k8s-master1 nfs]# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@k8s-master1 nfs]#
[root@k8s-master1 nfs]# kubectl apply -f test-pod.yaml
pod "test-pod" created
[root@k8s-master1 nfs]# kubectl get pod
NAME       READY   STATUS      RESTARTS   AGE
test-pod   0/1     Completed   0          1m
After the pod starts, it creates a file named SUCCESS in /mnt, the directory where the PVC is mounted inside the pod.
In the corresponding directory on the NFS server, you can see the SUCCESS file created by test-pod:
[root@k8s-master3 default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89]# pwd
/mnt/k8s/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
[root@k8s-master3 default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89]# ls
SUCCESS
Check from inside the nfs-client-provisioner pod:
[root@k8s-master1 nfs]# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
SUCCESS
8. A question after the test
When the pod is deleted, the data stored through the PVC is still there; but when the PVC itself is deleted, its directory and the data in it are removed as well.
To guard against user error, can a copy of the data be kept?
The answer is yes.
[root@k8s-master1 nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false"
archiveOnDelete: "false"
This parameter can be set to "false" or "true".
archiveOnDelete literally means "archive on delete": "false" means do not archive, i.e. the data is deleted; "true" means archive, i.e. the directory is renamed and kept.
Change archiveOnDelete to "true" and test again; a sketch of the modified class.yaml is shown below.
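A minimal sketch of the modified class.yaml, assuming only the archiveOnDelete value is changed:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the deployment's PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"     # archive (rename) the directory on delete instead of removing it

Note that StorageClass parameters cannot be updated in place, so the class most likely has to be deleted and re-created for the change to take effect (consistent with the 1m age shown below).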
[root@k8s-master1 nfs]# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   1m
[root@k8s-master1 nfs]# kubectl describe storageclass
Name:            managed-nfs-storage
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"managed-nfs-storage","namespace":""},"parameters":{"archiveOnDelete":"true"},"provisioner":"fuseim.pri/ifs"}
Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
Delete the pod and the PVC. Current state before deletion:
[root@k8s-master1 nfs]# kubectl get pod
NAME       READY   STATUS      RESTARTS   AGE
test-pod   0/1     Completed   0          6s
[root@k8s-master1 nfs]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            17s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim   Bound    pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   17s
[root@k8s-master1 nfs]# kubectl delete -f test-pod.yaml
pod "test-pod" deleted
[root@k8s-master1 nfs]# kubectl delete -f test-claim.yaml
persistentvolumeclaim "test-claim" deleted
[root@k8s-master1 nfs]# kubectl get pv,pvc
No resources found.
[root@k8s-master1 nfs]#
Check the NFS server storage path: the directory has been automatically archived (renamed with an archived- prefix) and the data is still there.
[root@k8s-master3 archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89]# pwd
/mnt/k8s/archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89
[root@k8s-master3 archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89]# ls
SUCCESS
Keep in mind that this requires archiveOnDelete: "true".
9. With the NFS storage class deployed, users can request PVCs on their own; there is no longer any need to create PVs and PVCs manually.
This is still slightly inconvenient, though: could the PVC be requested automatically when the pod is created, instead of having to request a PVC first and then mount it into the pod?
That is exactly what the volumeClaimTemplates feature of a StatefulSet does; a rough sketch is shown below.
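As a preview, a minimal, untested sketch of a StatefulSet using volumeClaimTemplates with the managed-nfs-storage class (the names web and www are illustrative and not part of this test):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                        # illustrative name
spec:
  serviceName: "web"
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:            # one PVC is created automatically for each replica
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteMany"]
        storageClassName: managed-nfs-storage
        resources:
          requests:
            storage: 1Mi

Each replica would get its own PVC (www-web-0, www-web-1, ...), and the NFS provisioner would create and bind a PV for each of them.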
We will test it properly next time.