Converting a running system to RAID 1

Discussion in 'HOWTO-Related Questions' started by AndyB78, Feb 9, 2010.

  1. AndyB78

    AndyB78 New Member

    Hello,

    I have followed the tutorial about converting a running system to RAID 1, but after copying the data to the md devices, setting up GRUB and restarting, I hit a kernel panic:

    sda1 is boot, sda2 is root and sda3 is swap. The running system is /dev/sda.

    =========
    device-mapper: dm-raid45: initialized v0.2594l
    Waiting for driver initialization,
    Scanning and configuring dmraid supported devices
    Trying to resume from LABEL=SWAP-sda3
    Unable to access resume device (LABEL=SWAP-sda3)
    Creating root device.
    Mounting root filesystem.
    mount: could not find filesystem '/dev/root'
    Setting up other filesystems.
    Setting up new root fs.
    setuproot: moving /dev failed: No such file or directory
    no fstab.sys, mounting internal defaults
    setuproot: error mounting /proc: No such file or directory
    setuproot: error mounting /sys: No such file or directory
    Switching to new root and running init.
    unmounting old /dev
    unmounting old /proc
    unmounting old /sys
    switchroot: mount failed: No such file or directory
    Kernel panic - not syncing: Attempted to kill init!
    =================
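    For reference, the steps I ran before the reboot looked roughly like this (I'm reconstructing them from memory and from the tutorial, so treat it as a sketch rather than an exact transcript; device names match my layout above):

    # copy the partition table from sda to sdb and mark the sdb partitions as "fd" (Linux raid autodetect)
    sfdisk -d /dev/sda | sfdisk --force /dev/sdb
    sfdisk --change-id /dev/sdb 1 fd
    sfdisk --change-id /dev/sdb 2 fd
    sfdisk --change-id /dev/sdb 3 fd
    # create the three arrays in degraded mode, with the sda halves still "missing"
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3
    # record the arrays so the boot scripts and initrd can find them
    mdadm --examine --scan > /etc/mdadm.conf
    # filesystems and swap on the new arrays; after this I mounted md0/md1 and copied / and /boot across
    mkfs.ext3 /dev/md0
    mkfs.ext3 /dev/md1
    mkswap /dev/md2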

    I would like to mention that the only step I skipped was "update-initramfs -u", which returned a "command not found" error, and I didn't know what to replace it with.

    A little help would be welcome!

    Thanks and regards!
     
  2. falko

    falko Super Moderator ISPConfig Developer

    Which distribution do you use? What's in /etc/fstab and /boot/grub/menu.lst?
     
  3. AndyB78

    AndyB78 New Member

    Hi,

    Right...sorry about that.

    The distro is CentOS 5.4 / 32 bit.

    [root@localhost ~]# cat /etc/fstab
    /dev/md1 / ext3 defaults 0 1
    /dev/md0 /boot ext3 defaults 0 2
    /dev/md2 none swap sw 0 0
    tmpfs /dev/shm tmpfs defaults 0 0
    devpts /dev/pts devpts gid=5,mode=620 0 0
    sysfs /sys sysfs defaults 0 0
    proc /proc proc defaults 0 0

    [root@localhost ~]# cat /boot/grub/menu.lst
    # grub.conf generated by anaconda
    #
    # Note that you do not have to rerun grub after making changes to this file
    # NOTICE: You have a /boot partition. This means that
    # all kernel and initrd paths are relative to /boot/, eg.
    # root (hd0,0)
    # kernel /vmlinuz-version ro root=/dev/sda2
    # initrd /initrd-version.img
    boot=/dev/sdb
    default=0
    fallback 1
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title RAID
    root (hd1,0)
    kernel /vmlinuz-2.6.18-164.el5 root=/dev/md1 ro
    initrd /initrd-2.6.18-164.el5.img
    savedefault
    title CentOS (2.6.18-164.el5PAE)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-164.el5PAE ro root=LABEL=/1
    initrd /initrd-2.6.18-164.el5PAE.img
    title CentOS-base (2.6.18-164.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/1
    initrd /initrd-2.6.18-164.el5.img

    Thanks for replying!

    Best regards!
     
  4. falko

    falko Super Moderator ISPConfig Developer

    Please try this guide instead: http://www.howtoforge.com/how-to-se...ing-system-incl-grub-configuration-centos-5.3
    The command to create a new initrd is mkinitrd on CentOS.

  5. AndyB78

    AndyB78 New Member

    Quote falko
    "Please try this guide instead: http://www.howtoforge.com/how-to-se...ing-system-incl-grub-configuration-centos-5.3
    The command to create a new initrd is mkinitrd on CentOS."


    Hi, thanks! I had already tried to follow that tutorial and used mkinitrd a couple of days ago, and it worked up to a point. Unfortunately /dev/sda2 (the initial root partition) is reported as "device busy": if I put root=/dev/md1 on the kernel line in grub.conf it throws a kernel panic again, so I have to keep booting with root=LABEL=/1, which leaves sda2 in use. I have rebuilt the initrd and I believe it now includes RAID support:

    [root@localhost boot]# mkinitrd -v -f --preload=raid1 --with=raid1 /boot/initrd-2.6.18-164.el5PAE.img 2.6.18-164.el5PAE
    .......
    Adding module raid1
    Adding module ehci-hcd
    Adding module ohci-hcd
    Adding module uhci-hcd
    Adding module jbd
    Adding module ext3
    Adding module scsi_mod
    Adding module sd_mod
    Adding module libata
    Adding module ahci
    Adding module dm-mem-cache
    Adding module dm-mod
    Adding module dm-log
    Adding module dm-region_hash
    Adding module dm-message
    Adding module dm-raid45

    I am choosing the second option (PAE) in grub at boot time and this is my grub.conf:

    [root@localhost boot]# cat grub/grub.conf
    # grub.conf generated by anaconda
    #
    # Note that you do not have to rerun grub after making changes to this file
    # NOTICE: You have a /boot partition. This means that
    # all kernel and initrd paths are relative to /boot/, eg.
    # root (hd0,0)
    # kernel /vmlinuz-version ro root=/dev/sda2
    # initrd /initrd-version.img
    boot=/dev/sdb
    default=0
    fallback 1
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title RAID
    root (hd1,0)
    kernel /vmlinuz-2.6.18-164.el5 root=/dev/md1 ro hdb=60801,255,63
    initrd /initrd-2.6.18-164.el5-2.img
    savedefault
    title CentOS (2.6.18-164.el5PAE)
    root (hd1,0)
    kernel /vmlinuz-2.6.18-164.el5PAE ro root=/dev/md1
    initrd /initrd-2.6.18-164.el5PAE.img
    title CentOS-base (2.6.18-164.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=LABEL=/1
    initrd /initrd-2.6.18-164.el5.img

    When I boot with root=LABEL=/1 it boots normally, but with root=/dev/md1 I get a kernel panic.

    So I don't seem to be able to boot directly from the RAID device even though I have compiled raid1 into the initrd. But if I boot from the non-RAID partition (sda2), the arrays are active after boot:

    [root@localhost boot]# cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sdb2[1]
    51199040 blocks [2/1] [_U]

    md2 : active raid1 sdb3[1]
    4096448 blocks [2/1] [_U]

    md0 : active raid1 sdb1[1]
    104320 blocks [2/1] [_U]

    unused devices: <none>
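    For what it's worth, these are the sanity checks I have been running; they are just my own guesses about where the problem could be, not steps from the tutorial:

    # are the sdb partitions really typed "fd" (Linux raid autodetect)?
    sfdisk -l /dev/sdb
    # does mdadm.conf list all three arrays?
    cat /etc/mdadm.conf
    # did raid1 actually end up inside the rebuilt ramdisk? (the initrd is a gzipped cpio archive)
    zcat /boot/initrd-2.6.18-164.el5PAE.img | cpio -it | grep -i raid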

    I am really lost here. If you can, please point me in the right direction.

    Thanks!
     
    Last edited: Feb 12, 2010
  6. AndyB78

    AndyB78 New Member

    As to "root=LABEL=/1":

    [root@localhost by-label]# ls -l /dev/disk/by-label
    total 0
    lrwxrwxrwx 1 root root 10 Feb 11 21:10 1 -> ../../sda2

    And a mount:

    [root@localhost /]# mount
    /dev/md1 on / type ext3 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    /dev/md0 on /boot type ext3 (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
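    Something else I am wondering about (probably a long shot): after the copy, /dev/sda2 and /dev/md1 presumably carry the same ext3 label, so I wanted to see which device the label actually resolves to. e2label and findfs are just the tools I would use for that:

    # show the ext3 label on the old root partition and on the array
    e2label /dev/sda2
    e2label /dev/md1
    # which device does LABEL=/1 resolve to right now?
    findfs LABEL=/1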
     
  7. AndyB78

    AndyB78 New Member

    After quite a lot of shooting in the dark I believe I have nailed it down (kind of).

    When I booted option no. 1 it gave me some error while loading X11 and dropped me back at the login screen. I tried to log in on a console (Ctrl+Alt+F2), but again, after entering the right credentials it sent me back to the login prompt. After a bit of googling I found out it might be due to SELinux, so I booted with selinux=0 and, right on cue, it worked. I then added sda back into the RAID arrays, and after the sync even option no. 2 (PAE) started to work.
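    For the record, adding sda back in was the usual mdadm routine, roughly like this (from memory, so only a sketch; the GRUB part is simply how I understood the tutorial):

    # mark the sda partitions as raid autodetect, then add them to the arrays
    sfdisk --change-id /dev/sda 1 fd
    sfdisk --change-id /dev/sda 2 fd
    sfdisk --change-id /dev/sda 3 fd
    mdadm --add /dev/md0 /dev/sda1
    mdadm --add /dev/md1 /dev/sda2
    mdadm --add /dev/md2 /dev/sda3
    # watch the resync until both members show up as [UU]
    watch cat /proc/mdstat
    # and make sure GRUB is installed in the MBR of both disks
    grub
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> root (hd1,0)
    grub> setup (hd1)
    grub> quit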

    So I am not sure why option no. 2 was hitting a kernel panic when booting with root=/dev/md1 while option no. 1 wasn't... the only differences were the kernel version (normal vs. PAE), which I didn't touch, and the ramdisk image (which I built with the same options).

    If anyone has a theory as to why this was happening, I'd be more than interested to hear it.
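    In case it helps anyone come up with one, this is how I would go about comparing the two ramdisks; just an idea, I have not fully dug through them yet:

    # unpack both initrds (gzipped cpio archives) into scratch directories
    mkdir -p /tmp/ird-pae /tmp/ird-std
    cd /tmp/ird-pae && zcat /boot/initrd-2.6.18-164.el5PAE.img | cpio -id
    cd /tmp/ird-std && zcat /boot/initrd-2.6.18-164.el5-2.img | cpio -id
    # diff the nash init scripts and compare the bundled modules
    diff /tmp/ird-pae/init /tmp/ird-std/init
    ls /tmp/ird-pae/lib /tmp/ird-std/lib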

    Anyway... thanks for answering, Falko!
     
