Adding Software RAID to a Proxmox VE 2.0 Install

Sun, 04/22/2012 - 16:19 — peter

Almost two years ago, I documented the process I went through to get Linux software RAID set up on Proxmox VE, because Proxmox didn't officially support software RAID (which Linux does, of course). Now the fine folks at Proxmox have released Proxmox VE 2.0, adding even greater features to Prox, like high-availability clustering, a REST API for management, etc… However, Proxmox VE 2.0 still doesn't officially support software RAID out of the box, so I figured it was time to go find out how to do it on this new version… As it turns out, thanks to the newer version of Debian running under VE 2.0, the process is even easier than it was before! Read on and find out how.

In case you haven't read the original article, here is the common-sense disclaimer I put up for those of you out there who are more risk-prone:

WARNING: If common-sense hasn't kicked-in by now, make sure you DON'T do this on a system that is in use. Migrate your virtuals to another box or stop the virtuals and endure the downtime during the process.

Like I said above, the process is now simpler and it is reduced to 4 steps, as follows:

  1. Install required software
  2. Prepare raid devices
  3. Get /boot ready on /dev/md0
  4. Move the PVE LVM to /dev/md1

So let's get started!

1. Install required software

All you really need here is the mdadm package. I also install vim because I like it better than plain vi. On Proxmox VE 1.0 you also needed the initramfs-tools package, but that comes already installed on VE 2.0. So, to get your system ready, type the following on the command line of your Proxmox setup:

apt-get update; apt-get install mdadm vim

The mdadm package will prompt you for information and all you need to do on that screen is press the ENTER key.

2. Prepare raid devices

Now that you have the right software set up, let's get your raid devices ready. We will copy the partition layout from /dev/sda to /dev/sdb, change the partition types on sdb to Linux RAID autodetect (type fd), create the two RAID 1 arrays in degraded mode (the missing keyword leaves a slot open for sda's partitions, which we will add later), and then save the raid configuration in /etc/mdadm/mdadm.conf so it persists after reboot. All that is done with the following 6 lines:

sfdisk -d /dev/sda | sfdisk -f /dev/sdb
sfdisk -c /dev/sdb 1 fd
sfdisk -c /dev/sdb 2 fd
mdadm --create -l 1 -n 2 /dev/md0 missing /dev/sdb1
mdadm --create -l 1 -n 2 /dev/md1 missing /dev/sdb2
mdadm --detail --scan >> /etc/mdadm/mdadm.conf 
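Before moving on, it's worth confirming that both arrays actually came up. This little helper is my own addition, not part of the original steps; it just counts the md arrays an mdstat-format file reports. It's demonstrated against a sample file here — on the real system, run it against /proc/mdstat, where right after this step you should see md0 and md1, each degraded ([2/1] [_U]).

```shell
#!/bin/sh
# Hypothetical sanity-check helper: count the arrays listed in an
# mdstat-format file. On a live system, pass /proc/mdstat.
count_md_arrays() {
    grep -c '^md[0-9][0-9]* :' "$1"
}

# Demo against a sample file mimicking /proc/mdstat after step 2.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md1 : active raid1 sdb2[1]
      976510912 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
      524224 blocks [2/1] [_U]
EOF
count_md_arrays /tmp/mdstat.sample   # prints: 2
```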

3. Get /boot ready on /dev/md0

This is by far the step with the most commands, but we have quite a bit to do here… It may be possible to shorten some of these steps, but my attempts to simplify them weren't successful, and the instructions below worked pretty well for me. Anyway, let's begin by formatting /dev/md0 and populating it with the contents of /boot.

mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0

Now let's tell our system to use /dev/md0 after the Linux bootstrap (i.e.: grub2 will still use sda1 to boot for now).

vim /etc/fstab 

Replace the line that has UUID=<your UUID here> /boot ext3 defaults 0 1 with /dev/md0 /boot ext3 defaults 0 1. If you are unfamiliar with vim: use the arrow keys to navigate to that line, hit yypi# to make a copy of the old line and turn the copy into a comment, then use the up-arrow to go to the uncommented line, delete the UUID=bla text and add /dev/md0 in its place. After that is done, hit the ESC key, which will take you out of insert mode, then type :wq followed by the ENTER key. That will save the file and quit for you. We are now done with vim! Once you are on the command line again, reboot with the following command:

reboot
After the system reboots, let's verify that it is using the md0 device as your /boot mount point by typing the following:

mount|grep boot

You should get something like this:
/dev/md0 on /boot type ext3 (rw)

Now we go on to tell grub to use that device during boot as well, with the next 9 commands:
echo '# customizations' >> /etc/default/grub  
echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub  
echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub  
echo raid1 >> /etc/modules 
echo raid1 >> /etc/initramfs-tools/modules 
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/md0
update-initramfs -u

Maybe you don't need all 3 grub-install commands, but for me it didn't work without the last one, and when reverting the process during one of my tests I ended up having to reissue the command grub-install /dev/sda.
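If you want to double-check that the echo commands above actually landed where they should, a quick grep does it. This check is my own addition; it is demonstrated against a scratch copy here — on your system, point GRUB_DEFAULTS at the real /etc/default/grub and you should see a count of 2.

```shell
#!/bin/sh
# Hypothetical check: confirm both GRUB_ settings were appended.
# GRUB_DEFAULTS is parametrized; use /etc/default/grub on the real box.
GRUB_DEFAULTS="${GRUB_DEFAULTS:-/tmp/grub-defaults.demo}"

# Recreate the appended lines in a scratch file for demonstration.
printf '%s\n' '# customizations' \
    'GRUB_DISABLE_LINUX_UUID=true' \
    'GRUB_PRELOAD_MODULES="raid dmraid"' > "$GRUB_DEFAULTS"

grep -c '^GRUB_' "$GRUB_DEFAULTS"   # prints: 2
```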

We are almost done with this step! All that remains is to make /dev/sda1 part of /dev/md0 and reboot using this config to make sure everything is working as it should. We get that done with the following 2 commands:

sfdisk -c /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1

This will get /dev/md0 rebuilding, and it shouldn't take long. You can verify the progress of the process with the following command:
watch -n 5 cat /proc/mdstat
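If you'd rather not sit at the terminal, a small polling loop can block until the rebuild is done. This is a sketch of my own, not from the original procedure; the MDSTAT variable is parametrized so it can be tried against a sample file (as shown), and on the real system you would set MDSTAT=/proc/mdstat.

```shell
#!/bin/sh
# Hypothetical wait loop: poll an mdstat-format file every 5 seconds
# until no resync/recovery line remains, then announce completion.
MDSTAT="${MDSTAT:-/tmp/mdstat.demo}"

# Demo file representing a finished rebuild ([UU], no recovery line).
printf 'md0 : active raid1 sda1[0] sdb1[1]\n      524224 blocks [2/2] [UU]\n' > /tmp/mdstat.demo

while grep -qE 'resync|recovery' "$MDSTAT"; do
    sleep 5
done
echo "rebuild finished"
```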

Once that is completed, reboot and we are done with this step!

4. Move the PVE LVM to /dev/md1

From here on, the process is pretty much the same as the old one. This step is simple, but it is the one that could take the longest to complete, depending on how big your data partition is and how fast your system is. What we need to do is vacate /dev/sda2 so we can join it to /dev/md1, and we do that with the following commands (warning: the pvmove command can take a long time to complete, so run it on a tty or inside a screen session):

pvcreate /dev/md1
vgextend pve /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
pvremove /dev/sda2
sfdisk --change-id /dev/sda 2 fd
mdadm --add /dev/md1 /dev/sda2

The last mdadm command will get your second raid rebuilding. This will take longer than the first one, and you can check its progress the same way as before, with:
watch -n 5 cat /proc/mdstat
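For a long rebuild like this one, sometimes all you want is the completion percentage. This one-liner is my own addition, shown here against a sample recovery line for illustration; on a live system you would pipe /proc/mdstat through the same grep.

```shell
#!/bin/sh
# Hypothetical extractor: pull just the percentage out of an mdstat
# recovery/resync progress line.
sample='      [==>..................]  recovery = 12.6% (123456/976510912) finish=83.1min speed=163840K/sec'
echo "$sample" | grep -o '[0-9.]*%'   # prints: 12.6%
```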

However, this time it may be good to raise the speed limits the RAID subsystem uses when reading and writing to its devices. You do that with the following commands:
echo 800000 > /proc/sys/dev/raid/speed_limit_min
echo 1600000 > /proc/sys/dev/raid/speed_limit_max
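Those echo commands only last until the next reboot. If you want the higher limits to stick (an assumption on my part that you do), the same knobs are exposed as the dev.raid.speed_limit_min and dev.raid.speed_limit_max sysctl keys. The sketch below appends them to a scratch file for demonstration; on the real box, set CONF=/etc/sysctl.conf and run sysctl -p afterwards to apply.

```shell
#!/bin/sh
# Optional persistence sketch: write the raid speed-limit sysctls to a
# config file. CONF is parametrized; /etc/sysctl.conf on a real system.
CONF="${CONF:-/tmp/sysctl-demo.conf}"
cat >> "$CONF" <<'EOF'
dev.raid.speed_limit_min = 800000
dev.raid.speed_limit_max = 1600000
EOF
grep speed_limit "$CONF"
```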

Hopefully this guide will save you some time, and quite possibly a lot of headache and frustration! If you like it, or if you have a suggestion to improve it in any way, please leave me a comment below.

Easy Peasy!