Friday, June 11, 2010

Migrating an ext3/4 filesystem to LVM with minimal downtime

There have been a few posts asking about moving a Linux file system from a raw partition onto LVM. The advantages of using LVM are numerous, but the advice has generally been to reinstall from scratch.

The problem is that several of the steps must be performed while the file system is unmounted: specifically, shrinking the ext2/3/4 file system and copying it from the raw partition onto the logical volume. Additionally, the /boot directory must be moved to a separate physical partition, and the initramfs it contains needs to have the lvm2 tools installed.

The following steps can be followed to migrate from a single monolithic partition (+ swap) to a typical LVM configuration where root and swap are on logical volumes. It may be possible to encrypt the LVM physical volume during this process, and I may explore that in a later post.

Assumptions:
/dev/sda1 contains a monolithic / file system
/dev/sda5 contains the swap partition
The boot loader is GRUB2

Step 1. Make a backup

Seriously. Don't have any critical data that's only available on this system. In fact, you may want to take an older, idle system and practice this whole procedure on it. There are plenty of opportunities to screw up, and this guide assumes you're already pretty familiar with the LVM2 and ext tools and HDD partitioning.

Step 2. Prepare the system

Add lvm2 and dependent packages if needed. On Ubuntu, this would be:
# apt-get -y install lvm2

Determine whether you will need a swap file. Take a look at the output of top to see if you've got enough physical memory that you can just disable swap for the duration of the migration. If you will not need swap, skip ahead to Step 3 to plan your partition size. If you will need it, migrate your swap from a partition to a swap file using the following steps (the example below creates a 256MB file; increase as you see fit):
# dd if=/dev/zero of=/swapfile bs=1048576 count=256
# mkswap /swapfile
# echo "/swapfile none swap sw 0 0" >> /etc/fstab
# swapon -a
# cat /proc/swaps
Verify that /proc/swaps shows that /swapfile is active and the right size.
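For reference, the output should look something like this (the sizes and priorities below are illustrative, not what you'll see verbatim):

Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       1048572 0       -1
/swapfile                               file            262140  0       -2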

Step 3. Make a plan

You need to determine the size and location of a temporary partition to hold your file system while you're making the conversion. You will need the file system "block size" from:
# dumpe2fs -h /dev/sda1
On my test platform, it is 4096, but it may vary depending on your disk size.

You will also need the sector size from:
# fdisk -lu /dev/sda
It is almost always 512. Using these two values, calculate the sectors per block by dividing the block size by the sector size (e.g., 4096 / 512 = 8).

From the fdisk output, you will also need the starting sector of partition 1 (usually either 63 or 2048) and the drive's total sector count, displayed in the header above the sector size.
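For instance, on an assumed 160GB test drive, the header of the fdisk output looks something like this (your values will differ):

# fdisk -lu /dev/sda
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes

Here the total sector count is 312581808 and the sector size is 512.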

You will also need to know the amount of space used on your root filesystem:
# df -h /

Now determine your temporary file system size. Take your used space, add a comfortable margin of 10-20%, and round up. For instance, if you're using 1.5GB, you may want to make it 3GB. If you're using 45GB, you may want to keep at least 50GB, perhaps more.

In any case, since you'll be duplicating it across your internal disk, your target size should be a little less than 50% of your total disk size. If you cannot shrink the file system to less than 50% of the HDD size, you may have luck with an external eSATA or USB drive, but test it first.

Step 4. Boot into LiveCD

Now we get to the scary part. Boot the system from a LiveCD (an Ubuntu desktop CD works well). Since you're bringing the system down, if there are business-critical apps running on it, you'll have to do this during a maintenance window.

Step 5. Resize the file system

This is the main reason the file system must be unmounted: shrinking an ext2/3/4 file system is not supported while it is mounted. Substitute the value of 50G in the example below with the size you determined above.
# e2fsck -f /dev/sda1
# resize2fs /dev/sda1 50G

Determine the number of sectors used by the resized file system. Do this by multiplying the "Block count" reported by the following command by the sectors per block calculated above:
# dumpe2fs -h /dev/sda1
Add about 10000 sectors to this total for the LVM headers, and subtract the result from the drive's total sector count recorded above to determine the starting sector for partition 3.
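As a worked example (all values assumed: the 160GB drive above with 312581808 total sectors, a file system resized to 50G giving a block count of 13107200, and 8 sectors per block):

# echo $(( 312581808 - (13107200 * 8 + 10000) ))
207714208

So partition 3 would start at sector 207714208.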

Step 6. Repartition the drive

This will make room for the temporary partition
# fdisk -cu /dev/sda
Press 'p' to print the current partition table.
Press 'd' and '5' to delete the swap partition, 'd' and '2' to delete the extended partition table, and 'd' to delete the last remaining partition.
Create partition 3 to hold your temporary file system by pressing 'n', 'p', '3', "<start sector P3>", "" (New Partition, Primary, partition 3, starting sector, max size). Substitute the starting sector with what was calculated above.
Recreate partition 1 with the size determined above by pressing 'n', 'p', '1', "<start sector P1>", "" (New partition, Primary, partition 1, starting at 63 (or 2048), max size).
Finally, press 'w' to write your changes to disk and recreate the device nodes.

If ioctl is unable to re-read the partition table after writing to disk, you must reboot into the LiveCD to recreate the proper block devices.
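Depending on your LiveCD, running partprobe (from the parted package) may coax the kernel into re-reading the table without a reboot; I haven't verified this on every release, so treat it as a shortcut to try before rebooting:
# partprobe /dev/sda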

Step 7. Install LVM tools.

You'll need internet connectivity for this, so if you're using a fixed IP, first configure the interface and default route manually. Then install the LVM2 tools using a command such as:
# apt-get -y install lvm2

Step 8. Create your logical volume.

Use the following sequence of commands to create a mapped device for your root file system:
# pvcreate /dev/sda3
# vgcreate os /dev/sda3
# lvcreate -l 100%VG -n root os

Verify there are enough sectors to hold the existing file system:
# blockdev --getsize /dev/mapper/os-root
The number of sectors returned must be more than the file system size calculated above. If not, subtract a few thousand sectors from your partition 3 starting sector, repartition, and try again.
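Continuing the assumed example, the file system occupies 13107200 blocks at 8 sectors each, so blockdev must report more than:

# echo $(( 13107200 * 8 ))
104857600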

Step 9. Copy your root file system.

This is the other command that must be done while the file system is unmounted:
# dd if=/dev/sda1 of=/dev/mapper/os-root
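dd defaults to 512-byte blocks, which makes a copy this large painfully slow; a bigger block size is safe here since we're copying the whole partition, e.g.:
# dd if=/dev/sda1 of=/dev/mapper/os-root bs=1M
If partition 1 is larger than the logical volume, dd will eventually stop with a "No space left on device" error; that's harmless as long as the size check in Step 8 passed, because the file system itself will have been fully copied.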

Step 10. Repartition the drive again

Deactivate the volume group to allow the next command to succeed:
# vgchange -a n os

Up until this point, your system would still boot if you lost power. From this point until GRUB is reinstalled, you'll need the LiveCD to recover your system.
# fdisk -cu /dev/sda
Delete partition 1 by entering 'd', '1'.
Create a new 300MB partition 1 for the /boot file system by entering 'n', 'p', '1', "", "+300M".
Create an extended partition table with the remaining space by entering 'n', 'e', '2', "", "".
Create a logical partition filling the extended partition by entering 'n', 'l', "", "".
Change the file system type value on partition 5 to Linux LVM by entering 't', '5', "8e".
Write changes to disk by entering 'w'.

Step 11. Separate /boot to its own partition

The /boot directory contains the kernel and initramfs as well as all of GRUB's modules and config files. It must live on its own plain partition so that the boot loader can load the kernel and initramfs, which in turn map the logical volume and mount root.
# vgchange -a y os
# mount /dev/mapper/os-root /mnt

# mke2fs /dev/sda1
# mkdir /newboot
# mount /dev/sda1 /newboot
# cd /mnt/boot
# tar cf - . | (cd /newboot; tar xvf - )
# echo "/dev/sda1 /boot ext2 defaults 2 0" >> /mnt/etc/fstab

Step 12. Reinstall GRUB

These steps are for GRUBv2. Substitute appropriate commands for GRUB Legacy.
# for mp in dev sys proc; do mount -o bind /$mp /mnt/$mp; done
# chroot /mnt
# update-grub
# grub-install /dev/sda
Handle any errors you see at this point or your system will not boot.

Step 13. Reboot

You should be able to boot into your original system at this point, albeit with reduced file system space.
If you get a GRUB rescue prompt, something went wrong with your GRUB install.
If you get a GRUB error about no disk found, the GRUB configuration was not built correctly.
If you get the (initramfs) prompt, the initramfs could not find the root file system on the logical volume; was lvm2 installed before booting into the LiveCD (Step 2)?
I got a blank screen on the first boot, but it appeared to be an X.org issue. I hit Ctrl-Alt-Del, and it came up fine upon rebooting. It scared me, but turned out to be nothing.

Step 14. Move your logical volume.

If you're going to encrypt the resulting system, here is where you would prepare /dev/sda5. I may try this in the future, but for now, you're on your own if you attempt this.

These commands add the new space to the volume group and move root to it:
# pvcreate /dev/sda5
# pvscan
# vgextend os /dev/sda5
# pvmove -i 60 -n root /dev/sda3 /dev/sda5
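(The -i 60 option prints progress every 60 seconds, and -n root moves only the extents belonging to the root logical volume.)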

Then you can remove the unused physical volume:
# pvscan
# vgreduce os /dev/sda3
# pvremove /dev/sda3

Step 15. Repartition the drive one final time

Now that we no longer need partition 3, we delete it and extend the logical partition all the way.
# fdisk -cu /dev/sda
Note the starting sector of P2 when you enter 'p'.
Remove partition 3 by entering 'd', '3'.
Remove partitions 5 and 2 by entering 'd', '5', 'd', '2'.
Recreate extended partition 2 by entering 'n', 'e', '2', "", "".
Recreate logical partition 5 by entering 'n', 'l', "", "".
Write changes to disk by entering 'w'.
Note: you will see a warning about ioctl being unable to reload the partition table. This is expected, since the running system is using the disk; the new layout will be picked up at the next reboot.

Step 16. Reboot into your final configuration.

If your reboot is to be delayed, you can buy some time by growing the root logical volume and file system to more usable sizes until you can reboot. If you can reboot now, don't bother with resizing here, as you'll be able to grow it to its final size after rebooting.
# lvresize -l 100%VG /dev/mapper/os-root
# resize2fs /dev/mapper/os-root

Step 17. Grow root to its final size.

Now is your chance to determine how much swap space you really want. If you don't need swap at all, use "-l 100%VG" to take all available space. Otherwise, leave space for a swap volume added in the next step.

Use a combination of the "-l nn%VG" and "-L +n.nG" options to bring your root volume up to the desired size. Do not shrink it unless you know for sure you're not releasing areas that your file system is using.
# pvresize /dev/sda5
# pvs
# lvresize -l 95%VG /dev/mapper/os-root
# lvresize -L +1.24G /dev/mapper/os-root
# resize2fs /dev/mapper/os-root

Step 18. Restore your swap partition

Create your swap logical volume and restore swap to it.
# lvcreate -n swap -l 100%FREE os

# mkswap /dev/mapper/os-swap
# swapon /dev/mapper/os-swap
or, to have it activated automatically by your existing fstab entry, recreate the swap area with the UUID from that entry and use swapon -a:
# mkswap -U <uuid-from-fstab> /dev/mapper/os-swap
# swapon -a
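If you're not sure of the UUID, it's the one in the existing swap line of /etc/fstab:
# grep swap /etc/fstab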

Verify your swap partition is active before disabling the swap file with:
# cat /proc/swaps
# swapoff /swapfile
# rm /swapfile

Finally, edit your /etc/fstab file to remove the line starting with /swapfile.
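A one-liner such as the following will do it (assuming the line begins with /swapfile, as added in Step 2):
# sed -i '/^\/swapfile/d' /etc/fstab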

Step 19. Breathe

You've done it! I believe this is the best way to perform this task with the least amount of down time. If you have suggestions or other thoughts, let me know.

