Grub for software RAID

Discussion in 'Technical' started by dipeshmehta, Oct 18, 2010.

  1. dipeshmehta

    dipeshmehta Member

    Hello,

    I set up RAID1 during the installation of Ubuntu 8.04, following http://www.howtoforge.com/how-to-install-ubuntu8.04-with-software-raid1

    Output of df -h
    Code:
    root@mailbox:~# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md0              914G  3.5G  864G   1% /
    varrun                1.9G   52K  1.9G   1% /var/run
    varlock               1.9G     0  1.9G   0% /var/lock
    udev                  1.9G   68K  1.9G   1% /dev
    devshm                1.9G     0  1.9G   0% /dev/shm
    
    Output of cat /proc/mdstat
    Code:
    root@mailbox:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md1 : active raid1 sda5[0] sdb5[1]
          11510464 blocks [2/2] [UU]
    
    md0 : active raid1 sda1[0] sdb1[1]
          965249344 blocks [2/2] [UU]
    
    unused devices: <none>
    
    My menu.lst is
    Code:
    root@mailbox:~# cat /boot/grub/menu.lst
    [....]
    # kopt=root=/dev/md0 ro
    [...]
    
    title           Ubuntu 8.04.4 LTS, kernel 2.6.24-26-server
    root            (hd0,0)
    kernel          /boot/vmlinuz-2.6.24-26-server root=/dev/md0 ro quiet splash
    initrd          /boot/initrd.img-2.6.24-26-server
    
    title           Ubuntu 8.04.4 LTS, kernel 2.6.24-26-server (recovery mode)
    root            (hd0,0)
    kernel          /boot/vmlinuz-2.6.24-26-server root=/dev/md0 ro single
    initrd          /boot/initrd.img-2.6.24-26-server
    
    title           Ubuntu 8.04.4 LTS, memtest86+
    root            (hd0,0)
    kernel          /boot/memtest86+.bin
    
    ### END DEBIAN AUTOMAGIC KERNELS LIST
    
    ### RAID1: to boot if sda fails ###
    title           Ubuntu 8.04.4 LTS, kernel 2.6.24-26-server (/dev/sda Failed)
    root            (hd1,0)
    kernel          /boot/vmlinuz-2.6.24-26-server root=/dev/md0 ro quiet splash
    initrd          /boot/initrd.img-2.6.24-26-server
    #### End RAID1 Boot Loader ####
    
    Now, if I reboot and select to boot from the second hard drive at the GRUB menu, it boots. But if I remove the power and data cables of the first hard drive and try to boot from the second hard drive, it fails with the error "Error 21: Selected disk does not exist".

    Please guide me on this; thanks in advance.

    Dipesh
     
  2. falko

    falko Super Moderator Howtoforge Staff

    Did you install GRUB on both hard drives?
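
    If not, the grub shell method from the howto should put GRUB into the second drive's MBR as well. A sketch, assuming the BIOS maps /dev/sdb as the second drive (hd1):
    Code:
    grub
    grub> device (hd1) /dev/sdb
    grub> root (hd1,0)
    grub> setup (hd1)
    grub> quit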
     
  3. dipeshmehta

    dipeshmehta Member

    I installed GRUB on my second hard drive with these commands:
    Code:
    grub
    grub> device (hd1) /dev/sdb
    grub> root (hd1,0)
    grub> setup (hd1)
    grub> quit
    followed by
    Code:
    update-initramfs -u
    Do I need to install GRUB with the grub-install command? I am a little unfamiliar with setting up GRUB.

    Thanks for your support all the time.

    Dipesh
     
  4. dipeshmehta

    dipeshmehta Member

    Update:

    I tried re-creating the RAID by following http://www.howtoforge.com/software-raid1-grub-boot-debian-etch-p4 - Step 9 onwards, but my server does not boot off the second hard drive, giving the same error, "Error 21: Selected disk does not exist". I tried swapping SATA ports, connecting the second hard drive to SATA1 and disconnecting the other drive, but that failed too.

    Dipesh
     
  5. falko

    falko Super Moderator Howtoforge Staff

    You can try
    Code:
    grub-install /dev/sda 
    grub-install /dev/sdb
    but it should do the same.
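
    You can also check from the grub shell which drives GRUB can read the boot files from (GRUB legacy; find lists every partition that contains the given file):
    Code:
    grub
    grub> find /boot/grub/stage1
    grub> quit
    If both drives are usable, it should list (hd0,0) and (hd1,0).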
     
  6. dipeshmehta

    dipeshmehta Member

    I tried installing GRUB on both hard drives with the grub-install command.
    Code:
    root@mailbox:~# grub-install /dev/sda
    Probing devices to guess BIOS drives. This may take a long time.
    Searching for GRUB installation directory ... found: /boot/grub
    Installing GRUB to /dev/sda as (hd0)...
    Installation finished. No error reported.
    This is the contents of the device map /boot/grub/device.map.
    Check if this is correct or not. If any of the lines is incorrect,
    fix it and re-run the script `grub-install'.
    
    (fd0)   /dev/fd0
    (hd0)   /dev/sda
    (hd1)   /dev/sdb
    root@mailbox:~# grub-install /dev/sdb
    Searching for GRUB installation directory ... found: /boot/grub
    Installing GRUB to /dev/sdb as (hd1)...
    Installation finished. No error reported.
    This is the contents of the device map /boot/grub/device.map.
    Check if this is correct or not. If any of the lines is incorrect,
    fix it and re-run the script `grub-install'.
    
    (fd0)   /dev/fd0
    (hd0)   /dev/sda
    (hd1)   /dev/sdb
    
    but the result is the same: it is not able to boot from the second hard drive if the first hard drive is detached. Now it does not display any error; only the single word 'GRUB' appears at the top of the screen. However, if the first hard drive is connected, it boots OK.

    Well, I think I should start again from scratch. Can you please suggest which is the better option: to set up RAID1 during the install (as guided in http://www.howtoforge.com/how-to-install-ubuntu8.04-with-software-raid1), or to install Ubuntu first and then set up RAID1 (http://www.howtoforge.com/software-raid1-grub-boot-debian-etch)? I have an Intel Xeon E3110-based server with two Seagate 1 TB SATA hard drives. I would like to stay with Ubuntu 8.04 LTS, and I will run Zimbra on this server.

    Also, please advise on how to remove the existing md devices; I understand this can be a little tricky, and I could not find suitable docs while searching.

    Dipesh
     
  7. falko

    falko Super Moderator Howtoforge Staff

    I've mostly used the second way, but in the end it doesn't matter which one you use.
    One last idea: is it possible you had GRUB2 installed instead of GRUB?

    Do this for all your RAID devices:
    Code:
    mdadm --zero-superblock /dev/sdb1
     
  8. dipeshmehta

    dipeshmehta Member

    No, it's not that:
    Code:
    root@mailbox:~# grub --version
    grub (GNU GRUB 0.97)
    
    Dipesh
     
  9. dipeshmehta

    dipeshmehta Member

    Hello,

    I am now starting again from scratch. I installed Ubuntu 8.04 with manual partitioning: 980 GB as / and 20.2 GB as swap. Then I followed the guide http://www.howtoforge.com/software-raid1-grub-boot-debian-etch.

    I am getting stuck at this step:
    Code:
    To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:
    
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
    It gives this error:
    Code:
    root@ubuntu:~# mdadm --zero-superblock /dev/sdb1
    mdadm: Couldn't open /dev/sdb1 for write - not zeroing
    root@ubuntu:~# mdadm --zero-superblock /dev/sdb2
    mdadm: Couldn't open /dev/sdb2 for write - not zeroing
    
    Output of fdisk -l
    Code:
    root@ubuntu:~# fdisk -l
    
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x0009c133
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1      119145   957032181   83  Linux
    /dev/sda2          119146      121601    19727820   82  Linux swap / Solaris
    
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *           1      119145   957032181   fd  Linux raid autodetect
    /dev/sdb2          119146      121601    19727820   fd  Linux raid autodetect
    
    Disk /dev/md1: 20.2 GB, 20201209856 bytes
    2 heads, 4 sectors/track, 4931936 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md1 doesn't contain a valid partition table
    
    Disk /dev/md0: 980.0 GB, 980000833536 bytes
    2 heads, 4 sectors/track, 239258016 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    root@ubuntu:~#
    
    Please guide me: what am I doing wrong?

    Dipesh

    Edit:
    I tried to stop the running array with
    Code:
    mdadm --manage --stop /dev/md0
    and then ran
    Code:
    mdadm --zero-superblock /dev/sdb1
    and that step succeeded. Now proceeding further.
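
    For the record, the full sequence looks like this (a consolidated sketch; md1 presumably has to be stopped the same way before /dev/sdb2 can be zeroed):
    Code:
    # stop the active arrays so their member partitions can be opened for writing
    mdadm --manage --stop /dev/md0
    mdadm --manage --stop /dev/md1
    # now the old RAID superblocks can be zeroed
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2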
     
    Last edited: Oct 23, 2010
  10. dipeshmehta

    dipeshmehta Member

    Update:

    I have completely re-done the RAID1 setup as guided in http://www.howtoforge.com/software-raid1-grub-boot-debian-etch without hitting any errors.

    My menu.lst is
    Code:
    root@ubuntu:~# cat /boot/grub/menu.lst
    [...]
    # kopt=root=/dev/md0 ro
    [...]
    
    ## ## End Default Options ##
    
    title           Ubuntu 8.04.4 LTS, kernel 2.6.24-26-server
    root            (hd1,0)
    kernel          /boot/vmlinuz-2.6.24-26-server root=/dev/md0 ro quiet splash
    initrd          /boot/initrd.img-2.6.24-26-server
    quiet
    
    title           Ubuntu 8.04.4 LTS, kernel 2.6.24-26-server
    root            (hd0,0)
    kernel          /boot/vmlinuz-2.6.24-26-server root=/dev/md0 ro quiet splash
    initrd          /boot/initrd.img-2.6.24-26-server
    quiet
    ### END DEBIAN AUTOMAGIC KERNELS LIST
    root@ubuntu:~#
    
    Output of /proc/mdstat
    Code:
    root@ubuntu:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md1 : active raid1 sda2[0] sdb2[1]
          19727744 blocks [2/2] [UU]
    
    md0 : active raid1 sda1[0] sdb1[1]
          957032064 blocks [2/2] [UU]
    
    unused devices: <none>
    root@ubuntu:~#
    
    Output of df -h
    Code:
    root@ubuntu:~# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md0              906G  656M  860G   1% /
    varrun                1.9G   48K  1.9G   1% /var/run
    varlock               1.9G     0  1.9G   0% /var/lock
    udev                  1.9G   60K  1.9G   1% /dev
    devshm                1.9G     0  1.9G   0% /dev/shm
    root@ubuntu:~#
    
    Output of fdisk -l
    Code:
    root@ubuntu:~# fdisk -l
    
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x0009c133
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1      119145   957032181   fd  Linux raid autodetect
    /dev/sda2          119146      121601    19727820   fd  Linux raid autodetect
    
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x00000000
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *           1      119145   957032181   fd  Linux raid autodetect
    /dev/sdb2          119146      121601    19727820   fd  Linux raid autodetect
    
    Disk /dev/md0: 980.0 GB, 980000833536 bytes
    2 heads, 4 sectors/track, 239258016 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md0 doesn't contain a valid partition table
    
    Disk /dev/md1: 20.2 GB, 20201209856 bytes
    2 heads, 4 sectors/track, 4931936 cylinders
    Units = cylinders of 8 * 512 = 4096 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/md1 doesn't contain a valid partition table
    root@ubuntu:~#
    
    Now my system boots OK with either option selected at the GRUB menu, but if I remove either of my hard drives (just by unplugging its power and data cables), it does not boot with any option.

    Please help me.

    Dipesh
     
    Last edited: Oct 25, 2010
  11. dipeshmehta

    dipeshmehta Member

    [solved]

    Update:

    Finally, I got my setup working OK; now if I detach my first hard drive, it boots off the second drive.

    I followed the comment by burke3gd on the first page of this howto, and instead of running 'update-initramfs -u', I ran 'dpkg-reconfigure mdadm'.
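
    For anyone following along, that one command was:
    Code:
    # re-runs mdadm's package configuration; on Debian/Ubuntu this also rebuilds
    # the initramfs, so the RAID arrays can be assembled at boot
    dpkg-reconfigure mdadm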

    Falko, thanks a lot for writing such a helpful howto, as well as for your continuous support. Thanks also to burke3gd, whose pointers got my setup working.

    Now, there are a couple of questions on my mind:
    1. During testing, when I removed one of the hard drives, the RAID array became degraded, so after reconnecting the drive in its place the array does not become complete on its own. Can I make it complete just by
    Code:
    mdadm --add /dev/md0 /dev/sdb1
    or do I need to follow the complete process for replacing a failed hard drive as guided in the said howto, step 9 onwards?

    2. At present both of my hard drives are the same capacity and model. In the future, if either drive really fails and I cannot get a new drive of the same capacity, will it work with a new higher-capacity drive?

    Thanks a lot once again.

    Dipesh
     
  12. falko

    falko Super Moderator Howtoforge Staff

    Yes, that's enough.

    Yes, a larger replacement drive will work as well.
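
    For the first question, a sketch of the re-add, assuming it was /dev/sdb that had been unplugged:
    Code:
    mdadm --manage /dev/md0 --add /dev/sdb1
    mdadm --manage /dev/md1 --add /dev/sdb2
    # watch the resync until both arrays show [UU] again
    cat /proc/mdstat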
     
