This howto guide walks through moving the data off a disk in an LVM volume group so that the disk can be physically removed from the system.
Before starting, make sure the filesystem has enough free space that all of its data will fit on the remaining disks.
In this example, I have 3x 2TB drives, and 1x 1.5TB drive. I will be removing the 1.5TB drive, so that I can later replace it with a larger drive.
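Before touching anything, it is worth sanity-checking that the data will fit on the remaining drives. Using the (decimal) partition sizes from the table below, a rough check in plain shell arithmetic:

```shell
# Rough capacity check using the LVM partition sizes from the table
# (decimal GB, as printed on the drive labels; actual usable GiB is less).
REMAINING=$((1800 + 2000 + 2000))   # sda5 + sdb1 + sdc1
REMOVED=1500                        # sdd1, the drive being removed
echo "capacity after removal: ${REMAINING} GB"
echo "capacity being removed: ${REMOVED} GB"
```

The used space on the filesystem must be comfortably below the remaining capacity, with some headroom for the resize.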
|Drive Size|LVM Partition Size|Partition Device|Notes|
|---|---|---|---|
|2 TB|1.8 TB|/dev/sda5|Contains the OS in the other ~200 GB|
|2 TB|2 TB|/dev/sdb1| |
|2 TB|2 TB|/dev/sdc1| |
|1.5 TB|1.5 TB|/dev/sdd1|This is the drive to be removed|
The volume group name is “vg_storage1”, and the Logical volume name is “LogVol00”.
Note: This process worked fine for me. Make sure you have a backup of your data before doing this, as you could quite easily lose all your data if you get it wrong.
Unmount the filesystem
Stop any processes that may be accessing the filesystem on the LVM array you will be resizing.
This includes services such as Samba, NFS and FTP servers: anything that could keep the filesystem busy and prevent it from being unmounted.
Unmount the filesystem on the LVM array:
$ sudo umount /mnt/storage
In this case the mount point was /mnt/storage. Check your mount point using the ‘mount’ command.
Check the filesystem
Check the filesystem for errors before attempting to resize it. resize2fs requires a full forced check before it will shrink a filesystem, hence the -f flag:
$ sudo e2fsck -f /dev/mapper/vg_storage1-LogVol00
Resize the filesystem
Resize the filesystem to a size smaller than what the volume group will hold once the drive is removed.
In my example, I am shrinking the filesystem down to 5000 GiB (resize2fs treats the "G" suffix as GiB), which will fit on the remaining 3 hard drives.
This process may take some time to run through.
$ sudo resize2fs -p /dev/mapper/vg_storage1-LogVol00 5000G
$ sudo resize2fs -p /dev/mapper/vg_storage1-LogVol00 5000G
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/mapper/vg_storage1-LogVol00 to 1310720000 (4k) blocks.
Begin pass 2 (max = 281649206)
Relocating blocks             XXXXXXXXX-------------------------------
Begin pass 3 (max = 55039)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/mapper/vg_storage1-LogVol00 is now 1310720000 blocks long.
$
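The block count reported by resize2fs can be cross-checked. One GiB is 1024³ bytes, so on a filesystem with 4 KiB blocks each GiB holds 262144 blocks:

```shell
# 1 GiB = 1024^3 bytes; with 4 KiB blocks that is 262144 blocks per GiB
BLOCKS=$((5000 * 262144))
echo "$BLOCKS"
```

This prints 1310720000, matching the block count in the output above.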
Now the filesystem has been reduced in size, you can reduce the logical volume size:
Shrink the Logical Volume
When shrinking the logical volume, make sure it stays larger than the filesystem, otherwise you will corrupt the filesystem. It must also be smaller than the combined capacity of the drives that remain after removal. In this example, I reduced the logical volume to 5200 GiB (about 5.08 TiB), leaving a 200 GiB buffer above the filesystem.
$ sudo lvreduce -L 5200G /dev/vg_storage1/LogVol00
$ sudo lvreduce -L 5200G /dev/vg_storage1/LogVol00
  WARNING: Reducing active logical volume to 5.08 TiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LogVol00? [y/n]: y
  Reducing logical volume LogVol00 to 5.08 TiB
  Logical volume LogVol00 successfully resized
$
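The 5.08 TiB figure lvreduce reports is simply 5200 GiB expressed in binary TiB, which you can confirm with a quick calculation:

```shell
# 5200 GiB converted to TiB (1 TiB = 1024 GiB)
TIB=$(awk 'BEGIN { printf "%.2f", 5200 / 1024 }')
echo "5200 GiB = ${TIB} TiB"
```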
Next you need to move any physical extents off the drive you want to remove.
Move the Physical Extents off the drive
This moves any allocated physical extents off /dev/sdd1 onto the remaining physical volumes in the group:
$ sudo pvmove -i 600 -v /dev/sdd1
The “-i 600” switch sets the interval between progress reports to 600 seconds (10 minutes). This ran through almost instantly for me (“No data to move”), but depending on how much data has to be relocated, it may take much longer.
$ sudo pvmove -i 600 -v /dev/sdd1
    Finding volume group "vg_storage1"
    Archiving volume group "vg_storage1" metadata (seqno 14).
    Creating logical volume pvmove0
    No data to move for vg_storage1
$
Remove the drive from the volume group
Reduce the volume group to remove the linking between the volume group and the drive:
$ sudo vgreduce vg_storage1 /dev/sdd1
$ sudo vgreduce vg_storage1 /dev/sdd1
  Removed "/dev/sdd1" from volume group "vg_storage1"
$
Remove the LVM label from the drive
Wipe the LVM metadata from the drive so it is no longer a physical volume, ready for the drive to be removed:
$ sudo pvremove /dev/sdd1
$ sudo pvremove /dev/sdd1
  Labels on physical volume "/dev/sdd1" successfully wiped
$
You can now power off the system, and remove the drive.
Post Removal Tips
Assuming you left a bit of a buffer in the filesystem and volume sizes during the shrink, you can now grow the logical volume and filesystem back out to use all of the remaining space.
(Note: Make sure the file system is not currently mounted).
Increase the size of the Logical volume
To resize the logical volume to the maximum size, you first need to run the following command, and note the “Total PE” value in the output:
$ sudo vgdisplay
$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg_storage1
  System ID
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  16
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               5.08 TiB
  PE Size               4.00 MiB
  Total PE              1403543
  Alloc PE / Size       1331200 / 5.08 TiB
  Free PE / Size        72343 / 282.59 GiB
  VG UUID               qHNraE-QF3d-dknW-3qVE-SdM2-iHfr-CKkYVJ
$
Take the Total PE value, and use it in the following command (1403543 in this case):
$ sudo lvextend -l 1403543 /dev/vg_storage1/LogVol00
$ sudo lvextend -l 1403543 /dev/vg_storage1/LogVol00
  Extending logical volume LogVol00 to 5.35 TiB
  Logical volume LogVol00 successfully resized
$
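The 5.35 TiB figure follows directly from the vgdisplay values: Total PE multiplied by the 4.00 MiB PE Size, converted to TiB:

```shell
TOTAL_PE=1403543
MIB=$((TOTAL_PE * 4))     # PE Size is 4.00 MiB
TIB=$(awk -v m="$MIB" 'BEGIN { printf "%.2f", m / 1048576 }')
echo "${TOTAL_PE} extents = ${TIB} TiB"
```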
Increase the size of the filesystem
Finally, resize the filesystem to use the maximum size available:
$ sudo resize2fs -p /dev/vg_storage1/LogVol00
When no size is specified, resize2fs defaults to growing the filesystem to fill all the space available on the volume.
$ sudo resize2fs -p /dev/vg_storage1/LogVol00
resize2fs 1.41.11 (14-Mar-2010)
Resizing the filesystem on /dev/vg_storage1/LogVol00 to 1437228032 (4k) blocks.
Begin pass 1 (max = 26217)
Extending the inode table     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/vg_storage1/LogVol00 is now 1437228032 blocks long.
$
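The final block count lines up with the extent count too: each 4 MiB extent holds 1024 of the filesystem's 4 KiB blocks, so the Total PE value from vgdisplay converts directly into the resize2fs figure:

```shell
TOTAL_PE=1403543
BLOCKS=$((TOTAL_PE * 1024))   # one 4 MiB extent = 1024 x 4 KiB blocks
echo "$BLOCKS"
```

This prints 1437228032, exactly the block count reported above.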
Remount the filesystem
Mount the filesystem again (this short form assumes /mnt/storage has an entry in /etc/fstab):
$ sudo mount /mnt/storage