Re: How To Set Up Software RAID1 On A Running System (Debian Etch)

Discussion in 'HOWTO-Related Questions' started by ClarkVent, Oct 22, 2010.

  1. ClarkVent

    ClarkVent New Member

    About two years ago, I set up software RAID on my webserver (Debian Etch) using this HowTo: How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch).

    My system has always run flawlessly, without so much as a hiccup.

    Unfortunately, security updates for Etch were discontinued as of February this year (2010), which means I really should upgrade to Debian Lenny.

    The problem is that the webserver is located in a data center about two hours from where I live. If I had the server next to me, I could easily try a dist-upgrade and make preparations in case anything went wrong - like dropping in an extra HDD, downloading a live CD to boot the machine from, or even temporarily routing traffic to a backup webserver. I could also easily look things up on the internet (on my own PC) if I ran into something I didn't know how to solve right away.

    But as it is, I either have to do the upgrade remotely, or go to the data center and do it there (where I only have physical access to my own webserver).

    I know it should be easy, but I'm afraid to do it because if something goes wrong that can't be solved remotely, I'll probably have to pick up the server from the data center and reinstall it at home, meaning my (and my clients') websites will be offline during that period.

    But keeping Debian Etch is of course not an option. The longer I wait, the more vulnerable the server becomes to attackers. I've already waited far too long.

    I'm no Linux newbie at all, but there are some things I simply don't know enough about to comfortably do this upgrade.

    The idea I had for this upgrade was to take the two HDDs out of RAID and do the upgrade on one HDD. If the upgrade succeeds, I let the other HDD sync up again. If the upgrade fails, I let the HDD with the failed upgrade resync from the HDD that still holds the old setup. To me, that sounds like a solid plan.
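
    From what I understand, the current layout can at least be inspected with something like this (assuming the /dev/md0, /dev/md1 and /dev/md2 names from the howto):

    Code:
    # show which partitions belong to which md array
    cat /proc/mdstat
    # detailed status of a single array (repeat for /dev/md1 and /dev/md2)
    mdadm --detail /dev/md0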

    But I really don't know how to break the array and rebuild it later on - back when I built the server, I just followed a HowTo to set up the software RAID. So I was hoping people here could give me tips on how to do this upgrade.

    Or perhaps you have another idea of how to do the distribution upgrade with my current hardware? Like I said, there's a third 500GB hard disk in the system I could use. As long as the end result is the same system I have now, just with Debian Lenny instead of Etch...

    The reason I don't want to just blindly do a dist-upgrade is that I tried it on an image of my webserver running inside a virtual machine, and after the upgrade the (virtual) machine would no longer boot.

    So, can this be done easily:
    • Take the HDDs out of RAID
    • Boot from HDD 1 and perform a dist-upgrade
    • If the dist-upgrade fails, boot from HDD 2, recreate the RAID and resync HDD 1
    • If the dist-upgrade succeeds, recreate the RAID and resync HDD 2
     
  2. falko

    falko Super Moderator Howtoforge Staff

    Take a look at chapter 9 on http://www.howtoforge.com/software-raid1-grub-boot-debian-etch-p4 . What you need is this part:

    Code:
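    # mark the /dev/sdb partitions as failed in their arrays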
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md1 --fail /dev/sdb2
    mdadm --manage /dev/md2 --fail /dev/sdb3
    Code:
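    # remove the failed /dev/sdb partitions; the arrays keep running degraded on /dev/sda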
    mdadm --manage /dev/md0 --remove /dev/sdb1
    mdadm --manage /dev/md1 --remove /dev/sdb2
    mdadm --manage /dev/md2 --remove /dev/sdb3
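    Each array should then be running degraded on its /dev/sda partition alone; a quick check (just a sketch):

    Code:
    # every md device should now list only one active member
    cat /proc/mdstat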
    Then do the dist-upgrade, and afterwards run

    Code:
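    # wipe the old RAID metadata from the /dev/sdb partitions before re-adding them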
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdb2
    mdadm --zero-superblock /dev/sdb3
    Code:
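    # add the partitions back; mdadm resyncs them from the copies on /dev/sda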
    mdadm -a /dev/md0 /dev/sdb1
    mdadm -a /dev/md1 /dev/sdb2
    mdadm -a /dev/md2 /dev/sdb3
    (Make sure you use the correct device names!)
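
    The resync can then be followed until every array shows both members as active again - a minimal sketch:

    Code:
    # watch the rebuild progress of all arrays
    watch cat /proc/mdstat
    # or query a single array in detail
    mdadm --detail /dev/md0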
     
  3. ClarkVent

    ClarkVent New Member

    OK, that takes /dev/sdb out of the RAID arrays and resyncs it after the dist-upgrade has been done on /dev/sda.

    But what if the dist-upgrade fails and the system no longer boots from /dev/sda? In that case I want to boot from /dev/sdb and resync /dev/sda.

    (My terminology might be a bit off, but you get my point)

    So I guess I want to:

    1) Get rid of the RAID altogether, then change GRUB so I can boot from either /dev/sda or /dev/sdb (sketched below). Then boot from /dev/sda and try the dist-upgrade.

    2) If the dist-upgrade fails, boot from (the unchanged) /dev/sdb and resync /dev/sda (I'm not exactly sure how to do that - by rebuilding the RAID array again or by some other means) and try the dist-upgrade again.

    3) If the dist-upgrade succeeds, rebuild the RAID array again.
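
    For step 1), I think the key part is that /dev/sdb needs its own copy of GRUB in its MBR, so the machine can still come up from the second disk if /dev/sda won't boot. If I remember the GRUB chapter of the howto correctly, something like this in the GRUB legacy shell (the default on Etch) should do it, assuming the second disk is hd1 and /boot lives on its first partition:

    Code:
    grub

    grub> root (hd1,0)
    grub> setup (hd1)
    grub> quit

    Whether the machine actually falls back to the second disk when the first one fails to boot depends on the BIOS boot order, though, and I can't change that remotely.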
     
    Last edited: Oct 24, 2010
  4. ClarkVent

    ClarkVent New Member

    So is this at all possible? And are there any dangers in doing it this way?
     
  5. falko

    falko Super Moderator Howtoforge Staff

    I've never tried that, so I can't tell if/how that works. :(
     
