Contents
CSI storage capability detection
Persistent volume snapshot and recovery feature validation
Basic performance test
Background
The Container Storage Interface (CSI) reached GA in Kubernetes (K8s) v1.13 at the end of 2018. It provides a standard persistent volume (PV) provisioning interface for deploying stateful applications, such as databases, in production environments.
The volume snapshot interface reached GA in K8s v1.20 (it had been Beta since v1.17). It exposes snapshots, an important enterprise storage capability, through standard interfaces, and provides technical support for protecting the data of core applications.
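For reference, the GA snapshot interface is consumed through the snapshot.storage.k8s.io/v1 API. A minimal sketch of the two resources involved (the class, snapshot, and driver names below are placeholders and must be adapted to your environment):

```yaml
# VolumeSnapshotClass: binds snapshots to a specific CSI driver
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass        # placeholder name
driver: example.csi.vendor.com   # your CSI driver's name
deletionPolicy: Delete
---
# VolumeSnapshot: requests a snapshot of an existing PVC
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
spec:
  volumeSnapshotClassName: example-snapclass
  source:
    persistentVolumeClaimName: example-pvc   # PVC to snapshot
```

Because this is a standard API, the same manifests work against any CSI driver that advertises snapshot support.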
Problem
CSI exposes the capabilities of the underlying storage system through standard interfaces. In practice, however, different K8s distributions layer on different storage system capabilities, which makes configuring and using persistent volumes through K8s more complex. Operations staff often have to identify the different storage capabilities manually in order to find a storage instance that meets the application's needs.
Improvement
This article introduces a related open source tool, kubestr, which discovers and validates CSI storage capabilities in a K8s environment to simplify day-to-day operations.
Installation

```
[root@remote-dev ~] wget https://github.com/kastenhq/kubestr/releases/download/v0.4.17/kubestr-v0.4.17-linux-amd64.tar.gz
[root@remote-dev ~] tar -xvf kubestr-v0.4.17-linux-amd64.tar.gz
[root@remote-dev ~] ./kubestr --help
kubestr is a tool that will scan your k8s cluster and validate that the storage systems in place as well as run performance tests.

Usage:
  kubestr [flags]
  kubestr [command]

Available Commands:
  csicheck    Runs the CSI snapshot restore check
  fio         Runs an fio test
  help        Help about any command

Flags:
  -h, --help            help for kubestr
  -o, --output string   Options(json)

Use "kubestr [command] --help" for more information about a command.
```
CSI storage capability detection
In the example, run ./kubestr to discover the storage providers and the corresponding storage classes in the K8s cluster.
For rook-ceph, the output shows that in addition to basic PVC/PV support, it also provides several additional storage features: Raw Block, Snapshot, Expansion, Topology, and Cloning.
```
[root@remote-dev ~] ./kubestr

**************************************
   _  ___   _ ___ ___ ___ _____ ___
  | |/ / | | | _ ) __/ __|_   _| _ \
  | ' <| |_| | _ \ _|\__ \ | | |   /
  |_|\_\\___/|___/___|___/ |_| |_|_\

Explore your Kubernetes storage options
**************************************
Kubernetes Version Check:
  Valid kubernetes version (v1.19.5)  -  OK

RBAC Check:
  Kubernetes RBAC is enabled  -  OK

Aggregated Layer Check:
  The Kubernetes Aggregated Layer is enabled  -  OK

W0720 17:07:36.588200   10462 warnings.go:70] storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
Available Storage Provisioners:

  fuseim.pri/ifs:
    Unknown driver type.

    Storage Classes:
      * managed-nfs-storage

    To perform a FIO test, run-
      ./kubestr fio -s <storage class>

  rook-ceph.rbd.csi.ceph.com:
    This is a CSI driver!
    (The following info may not be up to date. Please check with the provider for more information.)
    Provider:            Ceph RBD
    Website:             https://github.com/ceph/ceph-csi
    Description:         A Container Storage Interface (CSI) Driver for Ceph RBD
    Additional Features: Raw Block, Snapshot, Expansion, Topology, Cloning

    Storage Classes:
      * rook-ceph-block

    Volume Snapshot Classes:
      * csi-rbdplugin-snapclass

    To perform a FIO test, run-
      ./kubestr fio -s <storage class>

    To test CSI snapshot/restore functionality, run-
      ./kubestr csicheck -s <storage class> -v <volume snapshot class>
```
Persistent volume snapshot and recovery feature validation
In the example, run ./kubestr csicheck -s <storage class> -v <snapshot class>. kubestr creates a PVC/PV from the specified storage class and, together with the specified snapshot class, verifies that the snapshot and restore functions work correctly.
```
[root@remote-dev ~] ./kubestr csicheck -s rook-ceph-block -v csi-rbdplugin-snapclass
Creating application
  -> Created pod (kubestr-csi-original-podr22q8) and pvc (kubestr-csi-original-pvczbvjv)
Taking a snapshot
  -> Created snapshot (kubestr-snapshot-20210720170801)
Restoring application
  -> Restored pod (kubestr-csi-cloned-podvvm58) and pvc (kubestr-csi-cloned-pvcghp45)
Cleaning up resources
CSI checker test:
  CSI application successfully snapshotted and restored.  -  OK
```
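What kubestr automates here can also be done by hand through the standard snapshot API. A sketch of the restore step, using the rook-ceph-block storage class from the example (the PVC and snapshot names below are illustrative):

```yaml
# Restore a new PVC from an existing VolumeSnapshot via dataSource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                  # illustrative name
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: my-snapshot                 # an existing VolumeSnapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi                  # must be >= the snapshot's source PVC size
```

Mounting the restored PVC into a pod and checking its contents is essentially what kubestr's "Restoring application" step does for you.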
Basic performance test
In the example, run ./kubestr fio -s <storage class>. kubestr creates a PVC/PV from the specified storage class and uses FIO to run a basic I/O performance test against the mounted PVC.
```
[root@remote-dev ~] ./kubestr fio -s rook-ceph-block
PVC created kubestr-fio-pvc-nkscx
Pod created kubestr-fio-pod-r8sxj
Running FIO test (default-fio) on StorageClass (rook-ceph-block) with a PVC of Size (100Gi)
Elapsed time- 43.356058723s
FIO test results:

FIO version - fio-3.20
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=1371.392700 BW(KiB/s)=5502
  iops: min=1176 max=1668 avg=1381.166626
  bw(KiB/s): min=4704 max=6672 avg=5524.666504

JobName: write_iops
  blocksize=4K filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=705.894043 BW(KiB/s)=2840
  iops: min=578 max=850 avg=709.266663
  bw(KiB/s): min=2312 max=3400 avg=2837.066650

JobName: read_bw
  blocksize=128K filesize=2G iodepth=64 rw=randread
read:
  IOPS=1376.654297 BW(KiB/s)=176745
  iops: min=1326 max=1436 avg=1386.633301
  bw(KiB/s): min=169728 max=183808 avg=177490.140625

JobName: write_bw
  blocksize=128k filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=747.651184 BW(KiB/s)=96232
  iops: min=622 max=864 avg=750.533325
  bw(KiB/s): min=79616 max=110592 avg=96074.765625

Disk stats (read/write):
  rbd0: ios=47346/25116 merge=328/665 ticks=2157326/2151962 in_queue=2189862, util=99.720703%
  -  OK
```
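The default-fio job shown above can be replaced with your own fio job definition; kubestr accepts a custom fio file (check ./kubestr fio --help in your version for the exact flag, typically -f/--fiofile). A sketch of a job file that measures sequential read bandwidth instead of random I/O (the job name and parameters are illustrative):

```ini
; seq-read.fio - sequential read bandwidth test
[global]
ioengine=libaio
direct=1
verify=0
runtime=60
time_based

[seq_read_bw]
blocksize=1M
filesize=2G
iodepth=32
rw=read
```

Running the same job file against several storage classes gives a like-for-like comparison of the providers kubestr discovered.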