RAID 1 on CentOS won't boot with 1 drive removed

Discussion in 'Server Operation' started by treeman, Aug 19, 2009.

  1. treeman

    treeman New Member

    I have just installed a fresh copy of CentOS 5 and created RAID 1 on 2 drives.
    Both my drives are 40 GB but different brands. I successfully created RAID 1 by following guides, which gave me this setup:

    RAID Devices
    /dev/md0 ext3 [check mark] 100
    /dev/md1 swap [check mark] 1024
    /dev/md2 ext3 [check mark] [remaining space]

    Hard Drives
    /dev/sda
    /dev/sda1 /dev/md0 software RAID [no check mark] 100
    /dev/sda2 /dev/md1 software RAID [no check mark] 1024
    /dev/sda3 /dev/md2 software RAID [no check mark] [remaining space]
    /dev/sdb
    /dev/sdb1 /dev/md0 software RAID [no check mark] 100
    /dev/sdb2 /dev/md1 software RAID [no check mark] 1024
    /dev/sdb3 /dev/md2 software RAID [no check mark] [remaining space]
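
    (For anyone recreating this layout by hand rather than through the installer, the equivalent mdadm commands would look roughly like the sketch below. The device names follow my setup, and it assumes the partitions already exist on both disks and are set to type "fd" (Linux raid autodetect); the CentOS installer did all of this for me automatically.)

    # copy the partition table from the first disk to the second (assumes the second disk is at least as large)
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # build the three mirrors: /boot, swap and /
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

    # then put filesystems and swap on the md devices, not the raw partitions
    mkfs.ext3 /dev/md0
    mkswap /dev/md1
    mkfs.ext3 /dev/md2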

    I also installed GRUB on both drives.
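
    (In case it helps: the usual way to get GRUB onto the second disk's MBR is from the grub shell, something like the lines below. This assumes the second disk is /dev/sdb and /boot is its first partition; adjust to your own layout.)

    grub> device (hd0) /dev/sdb
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> quit

    The "device (hd0) /dev/sdb" line tells GRUB to treat the second disk as the first one, so the boot code written to it can still find /boot when the other drive is missing.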

    So everything is good and the system is up and running, but then I decided to test the RAID by unplugging one of the drives and trying to boot only from the remaining one.

    I get the GRUB screen and CentOS starts to boot, but it gives me a bad superblock error, a few lines about directories that can't be found, and finally a kernel panic. When I plug the drive back in so both drives are connected, the system boots fine.
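
    (For anyone checking the same thing: with both drives attached, the state of each array and its members can be inspected like this; the device names are just from my layout.)

    # overall state of all arrays
    cat /proc/mdstat

    # details of one array, including its RAID level and member devices
    mdadm --detail /dev/md2

    # the md superblock on an individual partition
    mdadm --examine /dev/sdb3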

    What am I doing wrong? I just want to make sure the system will be able to run if one drive fails. Thank you for any help guys!
     
  2. falko

    falko Super Moderator Howtoforge Staff

  3. treeman

    treeman New Member

    Thank you for the guide; I actually managed to get it going by picking up some ideas from the link.

    The main thing that stuck out to me compared to the examples was the output from
    cat /proc/mdstat

    My output differed in these lines:
    md2 : active raid1 sda3[0]
    4618560 blocks [2/1] [CHUNKS]

    instead of

    md2 : active raid1 sda3[0]
    4618560 blocks [2/1] [U_]
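
    (As far as I understand it, [2/1] means the array wants 2 devices but only 1 is active, and in [U_] the "U" is the member that is up while "_" is the missing one. With the second drive attached again, the missing partition can be re-added so the mirror rebuilds; the device names here just follow the example above.)

    # re-add the missing member to md2
    mdadm --manage /dev/md2 --add /dev/sdb3

    # watch the resync progress
    watch cat /proc/mdstat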

    So I decided to reinstall, and it looks like it's working now: I can boot from either hard drive after installing GRUB on both disks.

    The main culprits that I can think of are:
    1. It was late at night and I might have left the last RAID device as RAID 0 (the default) instead of RAID 1 (the check below shows how to verify the level).
    2. I installed kernel crash support during the initial installation, which was also referenced in GRUB's menu.lst. This time I did not install kernel crash support.
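
    (To rule out point 1, the level of every array can be checked quickly; the grep pattern matches the "Raid Level" field that mdadm --detail prints.)

    # every md line here should say raid1
    cat /proc/mdstat

    # or per array
    mdadm --detail /dev/md2 | grep "Raid Level"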

    Now I am getting:

    Personalities : [raid1]
    md0 : active raid1 hdb1[1] hda1[0]
    200704 blocks [2/2] [UU]

    md1 : active raid1 hdb2[1] hda2[0]
    2048192 blocks [2/2] [UU]

    md2 : active raid1 hdb3[1] hda3[0]
    36828928 blocks [2/2] [UU]
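
    (For anyone else testing this: besides physically unplugging a drive, the degraded state can also be exercised in software by failing and removing one member and then adding it back. This is only a sketch using the device names from my current output; it really does degrade the array until the resync finishes, and it doesn't test booting from the other disk's MBR the way unplugging does.)

    # mark one member of md2 as failed and remove it from the array
    mdadm --manage /dev/md2 --fail /dev/hdb3
    mdadm --manage /dev/md2 --remove /dev/hdb3

    # the array should now show [2/1] [U_] but keep running
    cat /proc/mdstat

    # add the member back and let it resync
    mdadm --manage /dev/md2 --add /dev/hdb3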

    Thanks for your help, and I hope that somebody stuck in the same situation can get some useful info from me too.
     
