If we’re trying to migrate data from one storage array to another, we need to ensure that there is no dependency on the old disks before they are removed. pvmove can be used to move data from one LUN to another without downtime.
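A quick way to see which logical volumes currently depend on a disk is to ask LVM itself; the sketch below assumes the old disk is /dev/sdc and the volume group is vg_apps, as in the examples further down.

#list the block device stack on top of the old disk
lsblk /dev/sdc

#show which PV(s) each LV in the VG is placed on
lvs -o +devices vg_apps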


Before running pvmove, it is recommended to create PVs of the same size on the new storage to which you want to migrate the data.
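To compare sizes before creating the new PV, the existing PV and the new LUN can be inspected as shown below (device names are the ones used in the examples that follow).

#size of the existing PV
pvs -o pv_name,pv_size /dev/sdc

#size of the newly presented LUN
lsblk -o NAME,SIZE /dev/sdf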

pvmove works only on PVs; data on raw mounted disks cannot be migrated with this procedure. A different procedure for that case is explained further below (steps involved in migrating data from raw disks).

Migrating data from Physical Volumes (PVs)
The steps below are the same regardless of the type or vendor of the underlying storage array. The disks can be from the same storage array or from different arrays.

#create PV from the new storage LUN
pvcreate /dev/sdf

#extend VG that contains the old LUN with the new PV
vgextend vg_apps /dev/sdf

#migrate all extents from the old PV (/dev/sdc) to the new PV (/dev/sdf)
pvmove --atomic /dev/sdc /dev/sdf

#One can also migrate only the extents of a specific LV to the new PV.
pvmove --atomic -v -n lv_apps /dev/sdc /dev/sdf
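Once the move completes, the old PV can be detached so the LUN can later be unpresented; this is the usual cleanup, shown here for the device names used above.

#confirm no extents remain on the old PV (PFree should equal PSize)
pvs -o pv_name,vg_name,pv_size,pv_free /dev/sdc

#remove the old PV from the VG and wipe its LVM label
vgreduce vg_apps /dev/sdc
pvremove /dev/sdc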

Below is an example of pvmove run on a multipath-enabled device.

#create PV from the new storage LUN for a multipath device
pvcreate /dev/mapper/mpath2

#extend VG that contains the old LUN with the new PV
vgextend vg_apps /dev/mapper/mpath2

#migrate all extents from the old PV (/dev/mapper/mpath0) to the new PV (/dev/mapper/mpath2)
pvmove --atomic /dev/mapper/mpath0 /dev/mapper/mpath2

#One can also migrate only the extents of a specific LV to the new PV.
pvmove --atomic -v -n lv_apps /dev/mapper/mpath0 /dev/mapper/mpath2
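For long-running moves, pvmove can also report progress at an interval or run in the background; the 10-second interval below is only an example.

#report progress every 10 seconds while moving extents
pvmove --atomic -i 10 /dev/mapper/mpath0 /dev/mapper/mpath2

#or run in the background and check progress later with "lvs -a"
pvmove --atomic -b /dev/mapper/mpath0 /dev/mapper/mpath2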

NOTE: If pvmove is interrupted for any reason (e.g. the machine crashes), run pvmove again without any PV arguments to restart the operation from the last checkpoint. The --atomic option ensures that all affected LVs are moved to the destination PV together, or none are if the operation is aborted, so extents are never left partially on the source and partially on the destination. If pvmove is interrupted for any reason, the source data remains intact.
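For reference, the resume and abort invocations are:

#resume an interrupted pvmove from the last checkpoint
pvmove

#abort an in-progress pvmove (with --atomic, all extents stay on the source PV)
pvmove --abort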

Migrating data from raw disks that are not PVs
If you’re trying to migrate data from a raw mounted disk, pvmove will not work, so an online migration (without downtime) is not possible.
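To check whether a mount point is backed by an LV or by a raw disk/partition, lsblk can be used; /dev/sdd and /apps below match the example that follows.

#TYPE "lvm" indicates an LV; "disk" or "part" indicates a raw device
df -h /apps
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdd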

The recommended solution follows the sequence below.

Scan for the new storage LUNs presented to the server. Let's assume that the newly scanned storage LUN is /dev/sde.
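One common way to trigger the scan (host numbers vary by system; rescan-scsi-bus.sh from sg3_utils is an alternative):

#rescan all SCSI hosts so the newly presented LUN becomes visible
for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done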
Create a filesystem on /dev/sde with the same fstype as on the old storage. Assuming the existing disk /dev/sdd holds an ext4 file system,

mkfs.ext4 /dev/sde
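If unsure of the filesystem type on the old disk, it can be confirmed before creating the new filesystem:

#print the filesystem type (and UUID) of the existing disk
blkid /dev/sdd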

Before migrating data from /dev/sdd (mounted on /apps) to /dev/sde, stop access to /apps by stopping all applications that use it.
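To verify that nothing still has files open under /apps:

#list any processes still using the mount point
fuser -vm /apps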

#create new temporary directory to mount new storage
#make sure /apps_new has the same permissions and ownership as /apps
mkdir /apps_new

#Temporarily mount /dev/sde on /apps_new
mount /dev/sde /apps_new  

#copy data from /apps to /apps_new with rsync
rsync -avh /apps/ /apps_new/

#unmount both the file systems
umount /dev/sdd /dev/sde 

#mount new storage (/dev/sde) on /apps
mount /dev/sde /apps

Once the application is started and the data is confirmed to be intact, the old LUN can be removed. Ensure /etc/fstab is updated to reference the new disk.
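A UUID-based fstab entry avoids problems if device names change after a reboot; the entry below is only illustrative, with the UUID taken from blkid.

#find the UUID of the new disk
blkid /dev/sde

#example /etc/fstab entry for the new storage (replace the UUID placeholder)
UUID=<uuid-of-/dev/sde>  /apps  ext4  defaults  0  2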