Converting a Kanotix (Debian) Install to RAID1/LVM (02/26/2006 09:17 PM)
This simple task of getting a Linux install set up w/ RAID1 and LVM was a bit more painful than it needed to be, so I'm documenting my pitfalls here.  The most important thing to remember is that mkinitrd is deprecated on today's 2.6 kernels (2.6.8+); even the lvm2create_initrd script that Google finds is deprecated.  The correct tools to use are initramfs-tools or yaird.  Unfortunately, I found this out too late, so I had lots of fun trying to get lvm2create_initrd to work after mkinitrd failed to figure out that LVM2 was compiled into the kernel.

On the 64-bit 2.6.15 kernel, /usr/share/doc/lvm2/examples/lvm2create_initrd did not work and needed to be modified:

- The mount command in busybox no longer understood "-t auto", so that had to be removed.

- The "$*" used to pass the boot parameters to the init on the LVM partition needed to be changed to "$@" so that each argument would be quoted individually instead of blowing up.

- It needed an extra shared library (use ldd to figure out which shared libraries each command uses) and a few extra binaries (boot w/ the "lvm2rescue" option to get a command shell so you can figure out what's missing).

- It needed the /dev/console and /dev/null device nodes in the LVM root it was booting into, so those had to be created ("mknod -m 622 console c 5 1", "mknod -m 666 null c 1 3", "mknod -m 666 zero c 1 5").

The final command I used that worked was (I have two RAID1 partitions):
./lvm2create_initrd -v -c /etc/lvm/lvm.conf -r "/dev/md0 /dev/md1" -R /etc/mdadm/mdadm.conf -e "/lib64/ /sbin/lvmiopversion /bin/mknod /usr/sbin/chroot"
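The "$*" vs "$@" fix is the subtlest of the changes above, so here's a minimal sketch of the difference (the function name and sample arguments are just for illustration):

```shell
# Demonstrates why the initrd script had to pass boot arguments with "$@":
# "$*" joins all arguments into one word, while "$@" keeps them separate.
count() { echo "$#"; }

set -- "root=/dev/vg0/root" "init=/bin/sh -i"   # sample kernel arguments
count "$*"   # prints 1: both arguments collapsed into a single word
count "$@"   # prints 2: each argument passed through intact
```

An init line with a multi-word argument (like the "init=/bin/sh -i" above) is exactly where the original script blew up.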

Time to take a step back and describe what the goals were:

- get a Debian variant installed onto my new server motherboard (an Asus A8N CSM which supports ECC memory)

- set up RAID1 to make sure it could survive losing a drive

- set up LVM so disk expansion of various partitions is easy

The hardest part was finding a Debian install that worked on this relatively new motherboard, which uses nVidia's integrated C51 chipset, and brought enough of it up to access the network so the system could be updated.  I needed an install w/ at least a 2.6.14 kernel and chose the amd64 version for maximum performance.  Debian stable was way too old, as was the next beta version, so I had to choose a Debian unstable version.  Unfortunately, the Debian unstable installer wasn't able to activate the ethernet hardware built into the system.  After trying a few LiveCDs, the only one I found that worked was Kanotix, a Debian unstable variant of the famous Knoppix CD.  Knoppix variants seem to do the best at autosensing hardware.

Kanotix (and Knoppix) can install to the hard drive, but unfortunately they don't understand how to install to a RAID1/LVM setup.  Motherboards w/ "integrated RAID" are just software RAID variants (hint: why would they need driver support in the operating system if they were truly "hardware RAID"?).  Because of the dubious driver support, I decided to use Linux's built-in software RAID.  A small plain RAID1 partition is needed for bootup (grub can't boot from an LVM partition); another RAID1 partition is used for the rest of the data.  Linux can do something similar to Intel's "Matrix RAID" (putting RAID0 and RAID1 on the same two drives), but I read that Linux crashes if the swap partition suddenly disappears when a RAID0 drive is lost, so I just put /tmp and swap in the RAID1 LVM partition.

To get Kanotix onto an LVM drive, you basically follow the xtronics and poocheireds guides.  The basic concept is the same:
- set up the 2nd drive w/ your md0 (for /boot) and md1 (for LVM) RAID1 partitions, but create each as a degraded RAID1 array with a missing drive

- set up LVM partitions

- set up your Linux install to autostart RAID and LVM (on Debian, run "dpkg-reconfigure mdadm" and "update-rc.d lvm start 26 S . stop 82 1 .")

- mount the 2nd drive's partitions into a new root location

- use rsync to synchronize/copy your Linux setup's root into the new root

- modify the /etc/fstab on the new root location

- create an initrd that understands how to boot from the LVM and add it to the grub menu.  Just set up /etc/yaird/Default.cfg to boot off "/dev/vg0/root" instead of "/"; you can examine your yaird initrd by running "zcat initrd.gz | cpio --extract" in a scratch directory.

- repartition the first drive and add it to the md0 and md1 arrays

- put grub on both drives so both drives can boot

- test pulling the power cord on each drive to see if it still boots
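Condensed into commands, the steps above look roughly like this.  The device names (/dev/sda as the original drive, /dev/sdb as the new one), volume group name (vg0), and sizes are illustrative assumptions, not values from the guides; DRYRUN defaults to echo so the sketch prints each command instead of touching any disks:

```shell
# Sketch of the conversion, assuming the running install is on /dev/sda
# and the empty second disk is /dev/sdb.  Remove DRYRUN=echo to run for real.
DRYRUN=${DRYRUN:-echo}

# 1. Degraded RAID1 arrays on the 2nd disk ("missing" reserves a slot
#    for the 1st disk later; partitions should be type 0xFD):
$DRYRUN mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
$DRYRUN mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing

# 2. LVM on the big array:
$DRYRUN pvcreate /dev/md1
$DRYRUN vgcreate vg0 /dev/md1
$DRYRUN lvcreate -L 8G -n root vg0
$DRYRUN lvcreate -L 1G -n swap vg0

# 3. Make filesystems and copy the running system over:
$DRYRUN mkfs.ext3 /dev/md0
$DRYRUN mkfs.ext3 /dev/vg0/root
$DRYRUN mount /dev/vg0/root /mnt/newroot
$DRYRUN rsync -axH --numeric-ids / /mnt/newroot/

# 4. After booting from the 2nd disk: repartition the 1st disk to match,
#    then hot-add its partitions to complete the mirrors:
$DRYRUN mdadm /dev/md0 --add /dev/sda1
$DRYRUN mdadm /dev/md1 --add /dev/sda2

# 5. Put grub on both disks so either one can boot alone:
$DRYRUN grub-install /dev/sda
$DRYRUN grub-install /dev/sdb
```

The /etc/fstab edits and the yaird initrd from the list still have to be done by hand in between steps 3 and 4.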

Other useful notes:

- reiserfs, xfs, and jfs can be expanded while mounted (ext2/3 has to be unmounted first)

- ext2 and xfs have to be unmounted before shrinking; reiserfs and jfs can't be shrunk

- you can add either an entire disk or specific partitions to an LVM volume group

- if you do an LVM2 snapshot, you have to allocate enough space in the new snapshot logical volume to hold all the changes

- LILO understands how to boot from an LVM partition, but grub does not, so grub needs to boot from a non-LVM partition

- remember to put grub on both drives.  If you moved partitions on the first drive (the non-RAID drive that you're now adding to the degraded RAID setup on the second drive), be sure to reinstall grub on the first drive after you add its /boot partition to the RAID array, or it won't boot because grub will be looking for its files on the wrong partition

- even for the LVM drive, set the partition type to "0xFD" (Linux RAID autodetect) or the LVM RAID partition will not start up properly
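The resize and snapshot notes above translate to commands like the following.  Volume names and sizes are made up for illustration; DRYRUN=echo again keeps the sketch harmless:

```shell
# Growing a mounted filesystem after extending its logical volume.
DRYRUN=${DRYRUN:-echo}

# Add 5G to the LV, then grow the filesystem into the new space:
$DRYRUN lvextend -L +5G /dev/vg0/home
$DRYRUN resize_reiserfs /dev/vg0/home   # reiserfs grows online
# xfs grows online too, but by mount point:  xfs_growfs /home
# ext2/3 must be unmounted first, then:      resize2fs /dev/vg0/home

# Snapshot: the snapshot LV only needs enough room for the blocks that
# change while it exists, not a full copy of the origin:
$DRYRUN lvcreate -s -L 2G -n home-snap /dev/vg0/home
```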

Comments

1. Pravin (08/03/2011 17:02:00)

Try this url, helped me a lot
{ Link }

2. Alexander Scoble (03/28/2006 22:40:47)

I didn't get /boot to work...wouldn't even bother trying.

I got / to work in a grub/LVM environment.

Not much need to put the /boot partition in the LVM volume anyhow as you only need at most 512MB of space for the /boot partition and should find it hard to outgrow that.

As far as Ubuntu being a bastard fork from Debian...have no clue.

All I know is that it only requires one CD to install and does a good job of recognizing the hardware I've used.

Fairly easy to install and use as far as Linux flavors go, but I still needed to edit some textfiles to get things set up properly, so it's still not up there with Windows or OS X for usability.

3. Ken Yee (03/25/2006 00:01:30)

Alex: I haven't tried Ubuntu; isn't Ubuntu sort of a bastardized Debian fork? I wanted to stay w/ a true Debian distribution as much as possible (Kanotix is a true Debian sid variant whereas Knoppix is a mix of testing and sid).
You're lucky if you got /boot to work in an LVM. From what I dug up, grub currently can't boot from an LVM partition, but LILO can.

4. Alexander Scoble (03/24/2006 19:08:07)

Hi again Ken,

Have you tried Ubuntu? It should be relatively straightforward to get it set up on a RAID 1 array with LVM.

Also, it is possible to boot using Grub with your root directory in an LVM managed volume. The current system that I'm messing with is set up as such with both Ubuntu and Gentoo.

Ubuntu is easy to set up in this manner, Gentoo is a bit harder, but it is possible to do it using the documentation provided on the site and related Wiki.

Just remember to use Genkernel to build your kernel from sources. I never got Gentoo and Grub to boot properly with root in LVM volume when I manually built the kernel and init image.

5. Hubert (03/11/2006 19:22:44)

reiserfs most certainly *can* be "shrunk" while unmounted. This is not a terribly new development.
