Problem with "How To Set Up Software RAID1 On A Running System"

Discussion in 'HOWTO-Related Questions' started by cckid, Sep 23, 2009.

  1. cckid

    cckid New Member

    Hi there,
    I'm following the guide by Falko and thought all was going well until the first reboot. After the system shuts down and starts back up, I get a lot of error messages, culminating in "Kernel panic - not syncing: attempted to kill init!".
    These are the messages I receive:
    scanning and configuring dmraid supported devices
    creating root device.
    mounting root filesystem.
    mount: could not find filesystem '/dev/root'
    setting up other filesystems.
    setting up new root fs
    setuproot: moving /dev failed: no such file or directory
    no fstab.sys, mounting internal defaults
    setuproot: error mounting /proc: no such file or directory
    setuproot: error mounting /sys: no such file or directory
    switching to new root and running init.
    unmounting old /dev
    unmounting old /proc
    unmounting old /sys
    switchroot: mount failed: no such file or directory
    Kernel panic - not syncing: attempted to kill init!

    Any suggestions?? AFAIK, I followed the tutorial to a T and did not receive any errors while running any of the commands.
    I'm running CentOS 5.3.
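    My current guess is that the initrd can't bring the RAID device up before mounting the root filesystem. In case it's relevant, here is a rough sketch of the initrd rebuild step as I understood it; the kernel version and module name are assumptions for my box, so please don't treat this as the exact commands from the tutorial:

    # back up the current initrd for the running kernel
    mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.orig
    # rebuild it so the raid1 module is available before the root filesystem is mounted
    mkinitrd --preload=raid1 /boot/initrd-$(uname -r).img $(uname -r)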

    Thanks!
     
  2. falko

    falko Super Moderator Howtoforge Staff

    Did you use LVM on the original system before you tried to set up RAID?
     
  3. cckid

    cckid New Member

    Thanks Falko -
    I wiped my system clean before starting this tutorial.
    The partitions on hda and hdb were erased, but before the wipe, hdb was an LVM volume.
    Does this help?
    Thanks!
     
  4. falko

    falko Super Moderator Howtoforge Staff

    Yes, the procedure is different for LVM. I've written a tutorial for this which I will publish in the next few days.
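    Just to sketch the idea (this is not the tutorial itself, and the device and volume group names below are only placeholders): with LVM you don't copy the filesystem over, you extend the volume group onto the degraded RAID device, move the data across with pvmove, and then drop the plain partition:

    # placeholder names: md1 is the new RAID device, VolGroup00 the existing volume group, hda2 the old PV
    pvcreate /dev/md1
    vgextend VolGroup00 /dev/md1
    pvmove /dev/hda2 /dev/md1
    vgreduce VolGroup00 /dev/hda2
    pvremove /dev/hda2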
     
  5. dowdle

    dowdle New Member

    I had a problem with the recipe

    I have a very similar setup to that in the tutorial except I don't have a separate /boot partition. /boot is just a directory on /. Here's my partition layout:

    [root@backup1 ~]# fdisk -l /dev/sda

    Disk /dev/sda: 73.4 GB, 73407820800 bytes
    255 heads, 63 sectors/track, 8924 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        8401    67481001   83  Linux
    /dev/sda2            8402        8923     4192965   82  Linux swap / Solaris

    So I modified the instructions in the howto to fit my needs. My second drive is the same make and model as the first (the machine is a Dell PowerEdge 1950 1U with two 73 GB SAS drives).

    Everything goes fine until it is time to reboot. When I pick the GRUB entry with root=/dev/md0, GRUB gives me an error stating that the partition table is bad, so I can't boot.

    I have tried this twice and gotten the same error.

    Any idea what might be causing this?
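    For what it's worth, my understanding of the howto is that GRUB's stage1 also has to be installed into the MBR of the second drive so that root (hd1,0) is bootable. A rough sketch of those GRUB shell steps, assuming /dev/sda maps to hd0 and /dev/sdb to hd1 on this box:

    grub
    grub> root (hd1,0)
    grub> setup (hd1)
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit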
     
  6. dowdle

    dowdle New Member

    More information

    Here is my grub.conf stanza that gives me the error:

    title OpenVZ RHEL5 Stable (2.6.18-128.2.1.el5.028stab064.7)
    root (hd1,0)
    kernel /boot/vmlinuz-2.6.18-128.2.1.el5.028stab064.7 ro root=/dev/md0
    initrd /boot/initrd-2.6.18-128.2.1.el5.028stab064.7.img

    Here's the fdisk -l output:

    [root@backup1 etc]# fdisk -l

    Disk /dev/sda: 73.4 GB, 73407820800 bytes
    255 heads, 63 sectors/track, 8924 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        8401    67481001   83  Linux
    /dev/sda2            8402        8923     4192965   82  Linux swap / Solaris

    Disk /dev/sdb: 73.4 GB, 73407820800 bytes
    255 heads, 63 sectors/track, 8924 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *           1        8401    67481001   fd  Linux raid autodetect
    /dev/sdb2            8402        8923     4192965   fd  Linux raid autodetect

    Here's the output of /proc/mdstat:

    [root@backup1 etc]# cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sdb2[1]
    4192896 blocks [2/1] [_U]

    md0 : active raid1 sdb1[1]
    67480896 blocks [2/1] [_U]

    unused devices: <none>

    Here's the contents of /etc/fstab:

    [root@backup1 etc]# cat /etc/fstab
    /dev/md0 / ext3 defaults,noatime 1 1
    devpts /dev/pts devpts gid=5,mode=620 0 0
    tmpfs /dev/shm tmpfs defaults 0 0
    proc /proc proc defaults 0 0
    sysfs /sys sysfs defaults 0 0
    /dev/md1 swap swap defaults 0 0

    Here's the contents of /etc/mtab:

    [root@backup1 etc]# cat /etc/mtab
    /dev/md0 / ext3 rw,noatime 0 0
    proc /proc proc rw 0 0
    sysfs /sys sysfs rw 0 0
    devpts /dev/pts devpts rw,gid=5,mode=620 0 0
    tmpfs /dev/shm tmpfs rw 0 0
    none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
    sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0

    Here's the output of mount:

    [root@backup1 etc]# mount
    /dev/md0 on / type ext3 (rw,noatime)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    tmpfs on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

    - - - - -

    So at this point it appears the RAID1 array (running on one drive) is working, but I can't boot unless I use the GRUB stanza that says root=/dev/sda1.

    I don't want to proceed with the rest of the HOWTO (adding /dev/sda1 and /dev/sda2 to the RAID arrays) until I can get this figured out.

    Any ideas? If there is any other output I could provide you that would be helpful, please let me know.
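    If it helps with diagnosis, here is a sketch of what I plan to check from the GRUB shell next (the mapping of hd1 to /dev/sdb is my assumption): geometry should show whether GRUB can read the second drive's partition table, and find should report which drives actually carry the stage files.

    grub
    grub> geometry (hd1)
    grub> find /boot/grub/stage1
    grub> root (hd1,0)
    grub> quit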
     
  7. cckid

    cckid New Member

    I'm not sure why I'm having this problem, but I've followed the directions three times now and get the same result each time. I think there's a problem with my fstab file: when I select the (now) secondary GRUB kernel, the system drops me into a shell to fix problems. I can't write to fstab, but it reports an error with the line "label=/dev/md1".
    I know I'm not using LVM on this - has anyone else experienced these problems?
    Falko, could you point me in the right direction? How can I edit the fstab file? When I boot off the rescue CD, it can't find the installation and loads the rescue environment into memory. I can't even mount /dev/hda because it can't find the /etc/fstab file!
    Thanks for the help!
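    In case someone can sanity-check it, here is roughly what I plan to try from the rescue shell to get at /etc/fstab by hand; the device names are guesses for my box (root array md0 built on hdb1, original root still on hda1):

    # assemble and start the degraded RAID1 array from its single member
    mdadm --assemble --run /dev/md0 /dev/hdb1
    # mount the installed system somewhere reachable
    mkdir -p /mnt/sysimage
    mount /dev/md0 /mnt/sysimage
    # (or, if the root filesystem is still on the plain partition: mount /dev/hda1 /mnt/sysimage)
    # fix the bad line in the installed system's fstab, not the rescue environment's
    vi /mnt/sysimage/etc/fstab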
     
  8. dowdle

    dowdle New Member

    LABEL=/dev/md1 is wrong... unless your filesystem's label really is "/dev/md1". I'm guessing you just want the plain device path, /dev/md1, in that fstab line?
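    For comparison, the corresponding lines in my working fstab just use the device paths (md0 for root and md1 for swap on my box; the numbers on your system may differ):

    /dev/md0   /      ext3   defaults,noatime   1 1
    /dev/md1   swap   swap   defaults           0 0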
     
