Linux operation and maintenance - disk storage -- 3.LVM

  1. How LVM works

     LVM (Logical Volume Manager) is a logical layer inserted between the disk partitions and the file system. It hides the layout of the underlying partitions from the file system, presents an abstract volume instead, and the file system is built on that volume. With LVM, an administrator can resize a file system dynamically without repartitioning the disk, and a file system managed by LVM can span several disks. When a new disk is added to the server, the administrator does not have to move existing files onto it; the file system can simply be extended across the new disk through LVM.

    In other words, LVM encapsulates the underlying physical disks and presents them to the upper layers as logical volumes. Once the disks are under LVM, we no longer operate on partitions directly; instead, all storage management is done through logical volumes.
    1.1 terms commonly used in LVM

Physical media: the storage media used by LVM can be a disk partition, a whole disk, a RAID array, or a SAN disk. The device must first be initialized as an LVM physical volume before LVM can use it.

Physical volume (PV): the basic storage block of LVM. Unlike raw physical media (a partition, a disk, etc.), a PV also carries LVM management metadata. A physical volume can be created from a disk partition or from a whole disk.

Volume group (VG): an LVM volume group consists of one or more physical volumes.

Logical volume (LV): an LV is carved out of a VG, and the file system is created on the LV.

PE (physical extent): the smallest unit of storage that can be allocated from a PV. The PE size can be specified; the default is 4 MB.

LE (logical extent): the smallest unit of storage that can be allocated in an LV. Within the same volume group, LE and PE are the same size and map one-to-one.

Summary of minimum storage units:

Name          Minimum storage unit
Hard disk     sector (512 bytes)
File system   block (1 KB or 4 KB)    # mkfs.ext4 -b 2048 /dev/sdb1  (maximum block size is 4096)
RAID          chunk (512 KB)          # mdadm -C -v /dev/md5 -l 5 -n 3 -c 512 -x 1 /dev/sde{1,2,3,5}
LVM           PE (4 MB)               # vgcreate -s 4M vg1 /dev/sdb{1,2}

The main elements of LVM are as follows:

Summary: multiple disks / partitions / RAID arrays -> multiple physical volumes (PV) -> combined into a volume group (VG) -> logical volumes (LV) carved out of the VG -> format the LV, mount it, and use it
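
The chain above maps onto just a few commands. Below is a minimal sketch assuming two spare LVM-typed partitions /dev/sdb1 and /dev/sdb2 are available; the VG/LV names, sizes and mount point are illustrative only:

# Initialize both partitions as physical volumes
pvcreate /dev/sdb1 /dev/sdb2
# Combine the two PVs into one volume group (vg_demo is an example name)
vgcreate vg_demo /dev/sdb1 /dev/sdb2
# Carve a 500 MB logical volume out of the volume group
lvcreate -n lv_demo -L 500M vg_demo
# Format the LV and mount it
mkfs.ext4 /dev/vg_demo/lv_demo
mkdir -p /mnt/lv_demo
mount /dev/vg_demo/lv_demo /mnt/lv_demo
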
1.2 LVM benefits

With volume groups, the space of several hard disks can be presented as one large disk.

With logical volumes, a volume can span space from multiple disks and partitions (for example sdb1, sdb2, sdc1, sdd2 and sdf).

A logical volume can be resized dynamically when it runs low on space.

When resizing a logical volume, you do not need to worry about its physical location on disk or about running out of contiguous space.

LVs and VGs can be created, deleted, and resized online; the file system that sits on the LV must be resized separately.

Snapshots can be created and used to keep a backup of the file system (see the snapshot sketch after this list).

RAID + LVM: LVM manages volumes in software, while RAID manages disks. For important data, RAID protects against a physical disk failure interrupting the business, while LVM provides flexible volume management and makes better use of disk resources.
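
A hedged sketch of the snapshot feature referenced above, assuming a volume group vg01 with a mounted logical volume lv01 and enough free extents left in the VG (the snapshot name, size and paths are illustrative):

# Create a 100 MB snapshot of lv01 named lv01_snap
lvcreate -s -n lv01_snap -L 100M /dev/vg01/lv01
# Mount the snapshot read-only and back it up while lv01 stays in use
mkdir -p /mnt/lv01_snap
mount -o ro /dev/vg01/lv01_snap /mnt/lv01_snap
tar czf /root/lv01-backup.tar.gz -C /mnt/lv01_snap .
# Remove the snapshot once the backup is finished
umount /mnt/lv01_snap
lvremove -y /dev/vg01/lv01_snap
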
2. Basic steps to create LVM

1) Physical disks (or partitions) are initialized as PVs, and their space is divided into PEs; a PV contains PEs.

2) Different PVs are added to the same VG, so the PEs of those PVs enter the VG's PE pool; a VG contains PVs.

3) An LV is created inside the VG and is built from PEs; the PEs making up one LV may come from different physical disks.

4) Once formatted, the LV can be mounted and used directly.

5) Extending or reducing an LV simply adds or removes the PEs that make it up; done correctly, the process does not destroy the existing data.
2.1 commands commonly used by LVM

Function    PV command    VG command    LV command
Scan        pvscan        vgscan        lvscan
Create      pvcreate      vgcreate      lvcreate
Display     pvdisplay     vgdisplay     lvdisplay
Remove      pvremove      vgremove      lvremove
Extend      -             vgextend      lvextend
Reduce      -             vgreduce      lvreduce

The following view commands will be used in the operations below:

Volume             Brief listing    Scan all volumes    Detailed information
Physical volume    pvs              pvscan              pvdisplay
Volume group       vgs              vgscan              vgdisplay
Logical volume     lvs              lvscan              lvdisplay

2.2 create and use LVM logical volumes
Create PV

Add one sdb disk 
[root@xuegod63 ~]# fdisk /dev/sdb    #Create 4 primary partitions, 1G each  
[root@xuegod63 ~]# ls /dev/sdb*
/dev/sdb  /dev/sdb1  /dev/sdb2  /dev/sdb3  /dev/sdb4
//To set the partition type code: fdisk /dev/sdb ==> t ==> select partition number ==> 8e ==> w
//Note: modern systems are smart about this; a PV can also be created on a partition left at the default type 83 (Linux).
 
[root@xuegod63 ~]# pvcreate /dev/sdb{1,2,3,4}   #Create pv
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdb2" successfully created.
  Physical volume "/dev/sdb3" successfully created.
  Physical volume "/dev/sdb4" successfully created.
 
[root@xuegod63 ~]# pvdisplay /dev/sdb1    #View physical volume information
  "/dev/sdb1" is a new physical volume of "1.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name               
  PV Size               1.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               SHKFwf-WsLr-kkox-wlee-dAXc-5eL0-hyhaTV

To create a vg volume group:

Syntax: vgcreate <vg name> <pv name ...>    # more than one PV can be listed
[root@xuegod63 ~]#  vgcreate vg01 /dev/sdb1
  Volume group "vg01" successfully created 
[root@xuegod63 ~]#  vgs
  VG   #PV #LV #SN Attr   VSize    VFree   
  vg01   1   0   0 wz--n- 1020.00m 1020.00m
[root@xuegod63 ~]#  vgdisplay vg01
  --- Volume group ---
  VG Name               vg01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       0 / 0

Create LV

Syntax: lvcreate -n <new LV name> -L <LV size (M, G)> <vg name>    # lowercase -l specifies the number of LEs instead of a size
[root@xuegod63 ~]#  lvcreate -n lv01 -L 16M vg01
  Logical volume "lv01" created.
[root@xuegod63 ~]#  lvcreate -n lv02 -l 4 vg01
  Logical volume "lv02" created.
[root@xuegod63 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv01 vg01 -wi-a----- 16.00m                                                    
  lv02 vg01 -wi-a----- 16.00m                                          
 
[root@xuegod63 ~]# pvdisplay /dev/sdb1
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg01
  PV Size               1.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              255
  Free PE               247
  Allocated PE          8   # 8 PEs have been allocated
[root@xuegod63 ~]# vgdisplay vg01
 
  Alloc PE / Size       8 / 32.00 MiB   #8 PE's have been used, 32MB
  Free  PE / Size       247 / 988.00 MiB

2.3 file system format and mount

[root@xuegod63 ~]# mkdir /lv01
[root@xuegod63 ~]# ls  /dev/vg01/   #View logical volumes
lv01  lv02
[root@xuegod63 ~]# ll  /dev/vg01/lv01  #In fact, lv01 is the soft link of dm-0
lrwxrwxrwx 1 root root 7 May 18 19:02 /dev/vg01/lv01 -> ../dm-0
 
[root@xuegod63 ~]# mkfs.ext4 /dev/vg01/lv01
[root@xuegod63 ~]# mount /dev/vg01/lv01  /lv01
[root@xuegod63 ~]# df -Th /lv01
Filesystem            Type Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv01 ext4   15M  268K   14M    2% /lv01 
[root@xuegod63 ~]# echo "/dev/vg01/lv01 /lv01 ext4 defaults 0 0"  >> /etc/fstab
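
Before relying on this fstab entry at the next boot, one cautious check (assuming the line above was written exactly as shown) is to unmount the LV and let mount -a remount everything listed in /etc/fstab:

umount /lv01
# mount -a mounts every /etc/fstab entry that is not yet mounted;
# if it reports no errors, the new line is at least syntactically valid
mount -a
df -Th /lv01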

2.4 specifying the PE size

The -s option specifies the PE size. If the stored data consists mainly of large files, a larger PE size gives faster reads.
[root@xuegod63 ~]#  vgcreate -s 16M vg02 /dev/sdb2
  Volume group "vg02" successfully created
The maximum PE size is 512 MB.
[root@xuegod63 ~]#  vgdisplay vg02
  --- Volume group ---
  VG Name               vg02
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1008.00 MiB
  PE Size               16.00 MiB    #It's already 16MB

2.5 LV capacity expansion

First, check whether the VG has free space available for expansion: an LV draws its space from its VG and cannot be extended across VGs.
 [root@xuegod63 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree   
  vg01   1   2   0 wz--n- 1020.00m  988.00m
  vg02   1   0   0 wz--n- 1008.00m 1008.00m
 The command used is as follows:
 
 
Expand logical volume
[root@xuegod63 ~]# lvextend -L +30m /dev/vg01/lv01    
Note: when specifying the size, extending BY 30 MB and extending TO 30 MB are different:
 Extend by 30 MB  ==>  -L +30M
 Extend to 30 MB  ==>  -L 30M
[root@xuegod63 ~]#  lvextend -L +30m /dev/vg01/lv01    
  Rounding size to boundary between physical extents: 32.00 MiB.
  Size of logical volume vg01/lv01 changed from 16.00 MiB (4 extents) to 48.00 MiB (12 extents).
  Logical volume vg01/lv01 successfully resized.
 
[root@xuegod63 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv01 vg01 -wi-ao---- 48.00m    #LV has been successfully extended
  lv02 vg01 -wi-a----- 16.00m   
 
[root@xuegod63 ~]#  df -Th /lv01
Filesystem            Type Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv01 ext4  15M  268K   14M   2% /lv01

Note: although the LV has been extended, the file system is still its old size. Next we extend the file system itself.

ext4 file system expansion syntax: resize2fs <logical volume path>

xfs file system expansion syntax: xfs_growfs <mount point>

The difference between resize2fs and xfs_growfs is the argument each takes: xfs_growfs takes the mount point, while resize2fs takes the logical volume path. resize2fs cannot be used on XFS file systems.
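
The walkthrough below only demonstrates resize2fs on ext4, so here is a hedged sketch of the XFS case, assuming a hypothetical XFS-formatted volume /dev/vg01/lvxfs mounted at /lvxfs:

# Grow the LV by 100 MB first
lvextend -L +100M /dev/vg01/lvxfs
# XFS is grown through its mount point, while mounted
xfs_growfs /lvxfs
df -Th /lvxfs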

[root@xuegod63 ~]#  resize2fs /dev/vg01/lv01
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vg01/lv01 is mounted on /lv01; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg01/lv01 is now 49152 blocks long.
[root@xuegod63 ~]#  df -Th /lv01
Filesystem            Type Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv01 ext4  46M  522K   43M   2% /lv01    #expansion succeeded
[root@xuegod63 ~]# lvextend -L 80M -r /dev/vg01/lv01  #Extend directly to 80 MB; -r resizes the file system in the same step, so no separate resize2fs is needed
[root@xuegod63 ~]# df -T /lv01/
Filesystem            Type 1K-blocks  Used Available Use% Mounted on
/dev/mapper/vg01-lv01 ext4     78303   776     73761   2% /lv01
[root@xuegod63 ~]# df -Th /lv01/
Filesystem            Type Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv01 ext4  77M  776K   73M   2% /lv01

2.6 VG capacity expansion

[root@xuegod63 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree   
  vg01   1   2   0 wz--n- 1020.00m  924.00m
  vg02   1   0   0 wz--n- 1008.00m 1008.00m 
VG expansion scenario: the volume group has run out of free space, so a new disk (or partition) must be added to it.
[root@xuegod63 ~]# pvcreate /dev/sdb3  # Create pv
[root@xuegod63 ~]#  vgextend vg01 /dev/sdb3  #Capacity expansion
  Volume group "vg01" successfully extended
[root@xuegod63 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree   
  vg01   2   2   0 wz--n-    1.99g   <1.90g
  vg02   1   0   0 wz--n- 1008.00m 1008.00m

2.7 LVM reduction

LVM can be extended dynamically; can it also be reduced dynamically?

Answer: the LV itself can be extended or reduced dynamically, but the XFS file system does not support shrinking, so an XFS-based LV cannot be reduced in practice. The btrfs file system does support online shrinking.
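
For the btrfs case mentioned above, a hedged one-liner, assuming a hypothetical btrfs file system mounted at /data:

# btrfs can shrink while mounted; this takes 1 GiB off the file system
btrfs filesystem resize -1G /data
# the underlying LV could then be reduced by at most the same amount with lvreduce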

 [root@xuegod63 ~]#  lvreduce -L -20m /dev/vg01/lv01
  WARNING: Reducing active and open logical volume to 60.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg01/lv01? [y/n]: y
  Size of logical volume vg01/lv01 changed from 80.00 MiB (20 extents) to 60.00 MiB (15 extents).
  Logical volume vg01/lv01 successfully resized.   #Reduce success
//But the file system itself has not been shrunk:
[root@xuegod63 ~]# df -h /lv01/
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lv01   77M  776K   73M    2% /lv01   #The space seen by the file system has not changed
 
[root@xuegod63 ~]# lvextend -L 10M -r /dev/vg01/lv01  #Neither this command nor the next one succeeds now,
[root@xuegod63 ~]#  resize2fs /dev/vg01/lv01   #because the ext4 file system is larger than the shrunken LV
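
A hedged sketch of the safer way to shrink an ext4 logical volume (ext4 can only be shrunk offline; the file system is shrunk before the LV, and the 40 MB target here is purely illustrative):

umount /lv01
# resize2fs refuses to shrink a file system that has not been checked first
e2fsck -f /dev/vg01/lv01
# Shrink the file system to the target size, then shrink the LV to match
resize2fs /dev/vg01/lv01 40M
lvreduce -L 40M /dev/vg01/lv01
mount /dev/vg01/lv01 /lv01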

Note: to remove a PV from a VG, make sure that physical volume is no longer in use, because a PV that is in use cannot be removed from the volume group.

[root@xuegod63 ~]#  vgs
  VG   #PV #LV #SN Attr   VSize    VFree   
  vg01   2   2   0 wz--n-    1.99g   <1.92g
  vg02   1   0   0 wz--n- 1008.00m 1008.00m 
[root@xuegod63 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree   
  /dev/sdb1  vg01 lvm2 a--  1020.00m  944.00m
  /dev/sdb2  vg02 lvm2 a--  1008.00m 1008.00m
  /dev/sdb3  vg01 lvm2 a--  1020.00m 1020.00m
  /dev/sdb4       lvm2 ---     1.00g    1.00g
[root@xuegod63 ~]# cp -r /boot/grub /lv01/   #Copy some test data
[root@xuegod63 ~]# vgreduce vg01 /dev/sdb1   #Failed to move sdb1 out because sdb1 is in use
  Physical volume "/dev/sdb1" still in use

Extension: suppose sdb1 is a disk array that has been in service too long and must be removed; what do we do?

Move the data:
[root@xuegod63 ~]# pvmove  /dev/sdb1  /dev/sdb3  #Move the data on sdb1 to the newly added sdb3 pv
  /dev/sdb1: Moved: 23.53%
  /dev/sdb1: Moved: 76.47%
  /dev/sdb1: Moved: 100.00%
[root@xuegod63 ~]#  vgreduce vg01 /dev/sdb1  #Remove sdb1 from the VG once its data has been moved off
  Removed "/dev/sdb1" from volume group "vg01"
[root@xuegod63 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree   
  /dev/sdb1       lvm2 ---     1.00g    1.00g
  /dev/sdb2  vg02 lvm2 a--  1008.00m 1008.00m
  /dev/sdb3  vg01 lvm2 a--  1020.00m  952.00m  #sdb3 is now the only PV left in vg01

2.8 LVM deletion

Process to create LVM:

pvcreate (create PV) -> vgcreate (create VG) -> lvcreate (create LV) -> mkfs.xfs (format the LV) -> mount

Process to delete LVM:

umount (unmount) -> lvremove (remove every LV in the VG) -> vgremove (remove the VG) -> pvremove (remove the PVs)

[root@xuegod63 ~]# umount /lv01
[root@xuegod63 ~]#  lvremove /dev/vg01/lv01
Do you really want to remove active logical volume vg01/lv01? [y/n]: y
  Logical volume "lv01" successfully removed
[root@xuegod63 ~]# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv02 vg01 -wi-a----- 16.00m        #lv01 is no longer visible                                           
[root@xuegod63 ~]#  vgremove vg01   #Remove the volume group directly
Do you really want to remove volume group "vg01" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume vg01/lv02? [y/n]: y  
#If the volume group still contains LVs, vgremove asks whether to remove them as well; here we confirm and remove them directly
  Logical volume "lv02" successfully removed
  Volume group "vg01" successfully removed
[root@xuegod63 ~]# vgs
  VG   #PV #LV #SN Attr   VSize    VFree   
  vg02   1   0   0 wz--n- 1008.00m 1008.00m    #No vg01 
 
//Remove PV sdb1
[root@xuegod63 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree   
  /dev/sdb1       lvm2 ---     1.00g    1.00g
  /dev/sdb2  vg02 lvm2 a--  1008.00m 1008.00m
  /dev/sdb3       lvm2 ---     1.00g    1.00g
  /dev/sdb4       lvm2 ---     1.00g    1.00g
[root@xuegod63 ~]# pvremove /dev/sdb1   #Remove the PV label
  Labels on physical volume "/dev/sdb1" successfully wiped.
[root@xuegod63 ~]# pvs
  PV         VG   Fmt  Attr PSize    PFree   
  /dev/sdb2  vg02 lvm2 a--  1008.00m 1008.00m
  /dev/sdb3       lvm2 ---     1.00g    1.00g
  /dev/sdb4       lvm2 ---     1.00g    1.00g
