Migrating LVM Partitions to New Logical Volume (Drive) – Part VI

This is the 6th part of our ongoing Logical Volume Management series. In this article, we will show you how to migrate existing logical volumes to a new drive without any downtime. Before moving further, let me explain LVM migration and its features.

LVM Storage Migration

What is LVM Migration?

LVM migration is one of LVM's excellent features: we can migrate logical volumes to a new disk without data loss or downtime. The purpose of this feature is to move our data from an old disk to a new disk. Usually, we migrate from one disk to other disk storage only when an error occurs on some disks, or when we want to upgrade to faster storage.

Features of Migration

  1. Moving logical volumes from one disk to another.
  2. We can use any type of disk, such as SATA, SSD, SAS, SAN storage, iSCSI, or FC.
  3. Migrate disks without data loss or downtime.

During an LVM migration, we move every volume, its file system, and its data to the new storage. For example, suppose we have a single logical volume that is mapped to one physical volume, and that physical volume is a physical hard drive.

Now, if we need to upgrade our server with an SSD, what do we usually think of first? Reformatting the disk? No! We don't have to reformat the server. LVM gives us the option to migrate from the old SATA drives to new SSD drives. Live migration supports any kind of disk, be it a local drive, SAN, or Fibre Channel.


Requirements

  1. Creating Flexible Disk Storage with Logical Volume Management – Part 1
  2. How to Extend/Reduce LVM’s in Linux – Part 2

There are two ways to migrate LVM partitions (storage): one using the mirroring method, and the other using the pvmove command. For demonstration purposes, I'm using CentOS 6.5 here, but the same instructions also work on RHEL, Fedora, Oracle Linux, and Scientific Linux.
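At a glance, the two approaches look like this (the device and volume names are the ones used in the demonstration below):

```shell
# Method 1: mirroring with lvconvert -- add a mirror leg on the new PV,
# let it sync, then drop the leg on the old PV.
lvconvert -m 1 /dev/tecmint_vg/tecmint_lv /dev/sda1
lvconvert -m 0 /dev/tecmint_vg/tecmint_lv /dev/vdb1

# Method 2: pvmove -- move the extents of one LV from the old PV
# to the new PV in a single step.
pvmove -n /dev/tecmint_vg/tecmint_lv /dev/vdb1 /dev/sda1
```

Both are walked through step by step below; these commands require root and real LVM devices, so don't run them outside the demonstration setup.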

My Server Setup
Operating System :	CentOS 6.5 Final
IP Address	 :
System Hostname	 :	lvmmig.tecmintlocal.com

Step 1: Check for Present Drives

1. Assume we already have one virtual drive named “vdb“, which is mapped to the logical volume “tecmint_lv“. Now we want to migrate this “vdb” drive's data to some other new storage. Before moving further, first verify the virtual drive and logical volume names with the help of the fdisk and lvs commands as shown.

# fdisk -l | grep vd
# lvs
Check Logical Volume Disk

Step 2: Check for Newly added Drive

2. Once we have confirmed our existing drives, it's time to attach the new SSD drive to the system and verify the newly added drive with the help of the fdisk command.

# fdisk -l | grep dev
Check New Added Drive

Note: As you can see in the above screen, the new drive has been added successfully as “/dev/sda“.

Step 3: Check Present Logical and Physical Volume

3. Now let's move forward to create the physical volume, volume group, and logical volume for migration. Before creating volumes, make sure to check the present logical volume data under the /mnt/lvm mount point. Use the following commands to list the mounts and check the data.

# df -h
# cd /mnt/lvm
# cat tecmint.txt
Check Logical Volume Data

Note: For demonstration purposes, we've created two files under the /mnt/lvm mount point, and we will migrate this data to a new drive without any downtime.

4. Before migrating, make sure to confirm the names of the logical volume and the volume group the physical volume is related to, and also confirm which physical volume holds this volume group and logical volume.

# lvs
# vgs -o+devices | grep tecmint_vg
Confirm Logical Volume Names

Note: As you can see in the above screen, “vdb” holds the volume group tecmint_vg.

Step 4: Create New Physical Volume

5. Before creating a physical volume on our newly added SSD drive, we need to define a partition using fdisk. Don't forget to change the partition type to Linux LVM (8e) while creating the partition.
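The partitioning step can be sketched as follows. The exact prompt sequence varies between fdisk versions, so treat this as an outline of the interactive answers rather than a script to run blindly; it assumes the whole of /dev/sda becomes one LVM partition.

```shell
# Create one primary partition spanning /dev/sda and set its type to
# Linux LVM (8e). Each line below answers one interactive fdisk prompt:
# n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last sectors,
# t = change type, 8e = Linux LVM, w = write and exit.
fdisk /dev/sda <<EOF
n
p
1


t
8e
w
EOF
```

After writing the partition table, run `partprobe` or re-check with `fdisk -l` so the kernel sees the new /dev/sda1 before running pvcreate.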

# pvcreate /dev/sda1 -v
# pvs
Create Physical Volume

6. Next, add the newly created physical volume to the existing volume group tecmint_vg using the ‘vgextend‘ command.

# vgextend tecmint_vg /dev/sda1
# vgs
Add Physical Volume

7. To get the full list of information about the volume group, use the ‘vgdisplay‘ command.

# vgdisplay tecmint_vg -v
List Volume Group Info

Note: In the above screen, we can see at the end of the output that our new PV has been added to the volume group.

8. If we need to know more about which devices are mapped, use the ‘dmsetup‘ dependency command.

# lvs -o+devices
# dmsetup deps /dev/tecmint_vg/tecmint_lv

In the above results, there is 1 dependency, i.e. one PV (drive), listed with the device numbers (252, 17). If you want to confirm this, look at the device nodes, which carry the major and minor numbers of the attached drives.

# ls -l /dev | grep vd
List Device Information

Note: In the above command output, we can see that major number 252 with minor number 17 relates to vdb1, matching the dmsetup dependency shown earlier.
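Device numbers work the same way for any device node, so you can practice reading them safely. For example, on any Linux box /dev/null always has major 1 and minor 3:

```shell
# Print the major:minor device numbers of a node with stat.
# %t and %T print the major and minor numbers in hexadecimal.
stat -c 'major:minor = %t:%T' /dev/null
# -> major:minor = 1:3
```

The same `stat -c '%t:%T'` invocation on /dev/vdb1 would show the 252:17 pair that dmsetup reported (252 is fc in hexadecimal).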

Step 5: LVM Mirroring Method

9. Now it's time to perform the migration using the mirroring method. Use the ‘lvconvert‘ command to mirror the data from the old logical volume onto the new drive.

# lvconvert -m 1 /dev/tecmint_vg/tecmint_lv /dev/sda1
  1. -m = mirror
  2. 1 = adding a single mirror
Mirroring Method Migration

Note: The above migration process will take a long time, depending on our volume size.
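While the mirror is syncing, you can watch its progress: the copy-percent field reported by lvs shows how far the synchronization has gone.

```shell
# Watch mirror synchronization progress for the volume group.
# The copy_percent column climbs to 100.00 once the new leg
# holds a complete copy of the data.
lvs -a -o name,copy_percent,devices tecmint_vg
```

Don't remove the old mirror leg until this reaches 100.00; dropping it earlier would leave the new drive with an incomplete copy.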

10. Once the migration process has completed, verify the converted mirror.

# lvs -o+devices
Verify Converted Mirror

11. Once you are sure the converted mirror is fully in sync, you can remove the old virtual disk vdb1. The option -m 0 removes the mirror; earlier we used -m 1 to add it.

# lvconvert -m 0 /dev/tecmint_vg/tecmint_lv /dev/vdb1
Remove Virtual Disk

12. Once the old virtual disk is removed, you can re-check the devices for logical volumes using the following commands.

# lvs -o+devices
# dmsetup deps /dev/tecmint_vg/tecmint_lv
# ls -l /dev | grep sd
Check New Mirrored Device

In the above picture, you can see that our logical volume now depends on device 8,1, i.e. sda1. This indicates that our migration process is done.

13. Now verify the files that we migrated from the old drive to the new one. If the same data is present on the new drive, that means we have performed every step perfectly.

# cd /mnt/lvm/
# cat tecmint.txt
Check Mirrored Data

14. After everything has been verified, it's time to remove vdb1 from the volume group, and then confirm which devices our volume group depends on.

# vgreduce tecmint_vg /dev/vdb1
# vgs -o+devices

15. After removing vdb1 from the volume group tecmint_vg, our logical volume is still present, because we have migrated its data from vdb1 to sda1.

# lvs
Delete Virtual Disk
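If the old disk will no longer be used by LVM at all, you can also wipe its LVM label after the vgreduce step. This is optional cleanup, not part of the migration itself:

```shell
# After vgreduce has detached /dev/vdb1 from tecmint_vg,
# remove the LVM metadata (PV label) from the old partition.
pvremove /dev/vdb1

# Confirm: /dev/vdb1 should no longer appear in the PV list.
pvs
```

Only do this once you are certain the migrated data has been verified; pvremove makes the partition unknown to LVM.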

Step 6: LVM pvmove Mirroring Method

16. Instead of using the ‘lvconvert’ mirroring command, we can use the ‘pvmove‘ command with the option ‘-n‘ (logical volume name) to move the data between two devices.

# pvmove -n /dev/tecmint_vg/tecmint_lv /dev/vdb1 /dev/sda1

The pvmove command is one of the simplest ways to move data between two devices, but in real environments the mirroring method is used more often than pvmove.
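A few standard pvmove variations are worth knowing (device names follow the example above):

```shell
# Move ALL extents off the old PV (every LV that uses it),
# not just a single named LV:
pvmove /dev/vdb1 /dev/sda1

# If pvmove is interrupted (e.g. by a reboot), running it with
# no arguments resumes any unfinished moves:
pvmove

# Abort an in-progress move and leave the data on the source PV:
pvmove --abort
```

Because pvmove works extent by extent and keeps the LV usable throughout, the file system stays mounted for the whole operation, just as with the mirroring method.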


In this article, we have seen how to migrate logical volumes from one drive to another. Hope you have learned some new tricks in logical volume management. For such a setup, one should know the basics of logical volume management; for basic setups, please refer to the links provided in the requirements section at the top of the article.

If you liked this article, then do subscribe to email alerts for Linux tutorials. If you have any questions or doubts, ask for help in the comments section.

