Adding Software RAID to a ProxMox VE install
Virtualization is all the rage nowadays... even more so when done efficiently, beautifully, and in the open! That is the case with the ProxMoxVE project. If you work with virtualization, make sure you check that project out! It is incredibly simple and powerful... oh, and Open Source too, which means it makes you look good with your boss and it costs your company no money!
ProxMoxVE handles both KVM full virtualization as well as the incredibly efficient container-based OpenVZ. Not only that, but it also lets you centrally control several hardware nodes from a single interface and easily migrate virtual hosts from one hardware node to another! We've been using this at work since 2008 and we are really happy with it...
...except that it doesn't support Linux software RAID out of the box. But since it is open source and based on a standard Debian install, I set out to manually add software RAID to a ProxMoxVE system. Here is the documentation of the work, in case it helps anyone else out there. If this does help you, please leave me a comment.
The Problem
Add Linux software RAID1 to a ProxMoxVE install. This assumes you have 2 similar HDDs already set up in a server and that you know how to install ProxMoxVE (just plop the CD in the tray and blindly follow the prompts).
The Solution
Install Proxmox as usual onto one drive (/dev/sda)
Get extra packages
mdadm will ask which MD arrays are needed for the root fs. Go with the default (all)
apt-get install mdadm initramfs-tools vim
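A side note: that question comes from debconf, so if you would rather not answer it interactively, the standard Debian noninteractive frontend should simply take the default answer of all for you (this is generic Debian behaviour, not anything Proxmox-specific):
DEBIAN_FRONTEND=noninteractive apt-get install -y mdadm initramfs-tools vim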
Add raid1 to /etc/modules
echo "raid1" >> /etc/modules
Regenerate initrd.img file
mkinitramfs -o /boot/test -r /dev/mapper/pve-root
Rename the old img file (replace the .x with whatever kernel version you are using)
mv /boot/initrd.img-2.6.x-pve /boot/initrd.img-2.6.x-pve.original
Rename new img file
mv /boot/test /boot/initrd.img-2.6.x-pve
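If you want to double-check that the raid1 driver actually made it into the new image before you reboot on it, you can list its contents; this assumes the image is a plain gzip-compressed cpio archive, which it should be on this generation of Debian (adjust the .x as above):
zcat /boot/initrd.img-2.6.x-pve | cpio -t | grep raid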
Make sure grub is set up on both HDDs
grub-install --no-floppy /dev/sda
grub-install --no-floppy /dev/sdb
Change the UUID for (hd0,0) in /boot/grub/menu.lst if your file has the UUID in it.
This most likely involves replacing the root UUID=XXXXX line with root (hd0,0)
vim /boot/grub/menu.lst
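For illustration only (the UUID here is a placeholder and your entry may look a bit different), the line in question goes from something like:
root UUID=1b2c3d4e-aaaa-bbbb-cccc-000000000000
to simply:
root (hd0,0)
The kernel and initrd lines in the same stanza stay as they are.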
Change the UUIDs to the proper md0 devices in /etc/fstab if your file has the UUID in it.
Do the same kind of edit in the /etc/fstab file: replace the UUID=XXXXXXXXX /boot ext3 defaults 0 1 line with /dev/md0 /boot ext3 defaults 0 1
vim /etc/fstab
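If you prefer a one-liner over hand-editing, something along these lines should make the same substitution (it keeps a .bak copy of the original and assumes the /boot line starts with UUID= and uses spaces between fields):
sed -i.bak 's|^UUID=[^ ]* /boot|/dev/md0 /boot|' /etc/fstab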
Clone the partition table from 1st drive to 2nd
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
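A quick eyeball check that the second disk now carries the same layout (partition sizes and types should line up between the two):
sfdisk -l /dev/sda
sfdisk -l /dev/sdb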
Create md devices with second drive only
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
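At this point the arrays exist but are running degraded, with only the sdb half present; mdstat should show something like [_U] on each of them:
cat /proc/mdstat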
Save the new mdadm.conf file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Get boot device setup
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0
umount /mnt/md0
umount /boot; mount /dev/md0 /boot
sfdisk --change-id /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1
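A couple of quick sanity checks before moving on: /boot should now be sitting on md0, and mdstat should show md0 rebuilding onto sda1:
df -h /boot
cat /proc/mdstat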
Setup data device
pvcreate /dev/md1
vgextend pve /dev/md1
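To confirm the new physical volume was picked up, pvs should now list /dev/md1 in the pve volume group alongside /dev/sda2, with all of md1's space still free:
pvs
vgs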
The pvmove below takes a LOOOOONG time
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
sfdisk --change-id /dev/sda 2 fd
mdadm --add /dev/md1 /dev/sda2
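Once all of that finishes, the pve volume group should be living entirely on /dev/md1, and md1 will be rebuilding onto sda2; both are easy to confirm:
pvs
cat /proc/mdstat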
That is it!
You can monitor the array rebuild progress with the following command:
watch -n 1 cat /proc/mdstat
If for some reason mdstat reports that the RAID will take days to complete, you can raise the minimum RAID rebuild rate with the following command:
echo 60000 >/proc/sys/dev/raid/speed_limit_min
The default setting of 1000 may be too slow in most situations. Keep in mind that upping this too much on a system that is in use is not advisable. Also make sure you don't set the value higher than /proc/sys/dev/raid/speed_limit_max.
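It doesn't hurt to look at the current limits before (and after) touching them:
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max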
After this process is completed you can reboot the system to make sure everything starts up nicely.
Easy Peasy!
-PCP
Special thanks to @brisho for turning me on to OpenVZ and ProxMoxVE a couple of years ago and for serving as my spell- and Linux-checker on this post...