I am following the how-to on setting up software RAID1 on a running LVM system. However, I ran into an error when installing mdadm. Here is the line I copied in:

aptitude install initramfs-tools mdadm

I never got the following prompt:

MD arrays needed for the root file system: <-- all

Instead, I got the following error during the install:

Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: line 27: /dev/MAKEDEV: No such file or directory
failed

I tried reinstalling, and here is the output:

aptitude reinstall initramfs-tools mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
The following packages will be REINSTALLED:
  initramfs-tools mdadm
0 packages upgraded, 0 newly installed, 2 reinstalled, 0 to remove and 14 not upgraded.
Need to get 88.5kB/325kB of archives. After unpacking 0B will be used.
Writing extended state information... Done
Get:1 http://us.archive.ubuntu.com/ubuntu/ lucid/main initramfs-tools 0.92bubuntu78 [88.5kB]
Fetched 88.5kB in 0s (90.4kB/s)
Preconfiguring packages ...
(Reading database ... 67691 files and directories currently installed.)
Preparing to replace initramfs-tools 0.92bubuntu78 (using .../initramfs-tools_0.92bubuntu78_all.deb) ...
Unpacking replacement initramfs-tools ...
Preparing to replace mdadm 2.6.7.1-1ubuntu15 (using .../mdadm_2.6.7.1-1ubuntu15_i386.deb) ...
 * Stopping MD monitoring service mdadm --monitor    [ OK ]
Unpacking replacement mdadm ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Setting up initramfs-tools (0.92bubuntu78) ...
update-initramfs: deferring update (trigger activated)
Setting up mdadm (2.6.7.1-1ubuntu15) ...
Generating array device nodes... /var/lib/dpkg/info/mdadm.postinst: line 27: /dev/MAKEDEV: No such file or directory
failed.
Removing any system startup links for /etc/init.d/mdadm-raid ...
update-initramfs: deferring update (trigger activated)
update-rc.d: warning: mdadm start runlevel arguments (2 3 4 5) do not match LSB Default-Start values (S)
update-rc.d: warning: mdadm stop runlevel arguments (0 1 6) do not match LSB Default-Stop values (0 6)
 * Starting MD monitoring service mdadm --monitor    [ OK ]
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-22-generic-pae
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done

Loading one of the modules also fails:

modprobe md
FATAL: Module md not found.

From looking at it, drive sdb is the one in use; during one of the many times I rebuilt this server before getting it right, I had loaded onto sda. So right now only one disk (sdb) is being used, and sda still has an old setup partition on it (I think). I'm not sure if this is related to the error?
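For what it's worth, the /dev/MAKEDEV failure on Ubuntu 10.04 is a known packaging quirk rather than a RAID problem: /dev is managed by udev, so nothing exists at the path the mdadm postinst script calls. A minimal sketch of the usual workaround, assuming the makedev package installs the binary at /sbin/MAKEDEV (verify on your own system first):

aptitude install makedev             # provides MAKEDEV; confirm its path with "which MAKEDEV"
ln -s /sbin/MAKEDEV /dev/MAKEDEV     # give the postinst the path it expects (assumed location)
dpkg --configure -a                  # re-run the failed mdadm postinst
rm /dev/MAKEDEV                      # optional cleanup once the package is configured

The "FATAL: Module md not found" message is also expected on recent kernels: the module there is called md_mod (with RAID1 itself in the raid1 module), so modprobe md_mod or modprobe raid1 should succeed, or md support may already be built into the kernel; cat /proc/mdstat will tell you.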
fdisk -l

Disk /dev/sda: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc796c701

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       19421   155999151   8e  Linux LVM
/dev/sda2           19422       19452      249007+   5  Extended
/dev/sda5           19422       19452      248976   83  Linux

Disk /dev/sdb: 160.0 GB, 160000000000 bytes
255 heads, 63 sectors/track, 19452 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003ca4d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          32      248832   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2              32       19453   155998209    5  Extended
/dev/sdb5              32       19453   155998208   8e  Linux LVM

vgdisplay

  --- Volume group ---
  VG Name               mail
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               148.77 GiB
  PE Size               4.00 MiB
  Total PE              38085
  Alloc PE / Size       23841 / 93.13 GiB
  Free  PE / Size       14244 / 55.64 GiB
  VG UUID               F5nrg3-93xk-4UKa-751l-0OY5-ClFF-M4Xi19
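Based on that output (Cur PV = 1, and the Linux LVM type 8e on /dev/sdb5 versus the stale-looking table on sda), it does look like the live volume group sits entirely on sdb. A quick way to confirm which disk LVM is really using before touching sda, as a sketch (the field names assume LVM2, which the vgdisplay output shows):

pvs -o pv_name,vg_name,pv_size,pv_free   # lists each physical volume and the VG it belongs to
pvdisplay                                # same information, including the PV UUIDs
vgdisplay -v mail                        # per-VG view that also lists its physical volumes

If /dev/sda1 does not show up as a PV of the "mail" VG, the old partitions on sda are leftovers and can safely be repartitioned for the RAID setup.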
Which distribution do you use? Which tutorial (URL) did you follow? Is this a physical system or a virtual machine?
http://www.howtoforge.com/perfect-se...nx-ispconfig-2

I used the configuration above, which you suggested, since I only needed system users. The server is a physical machine.
http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-lvm-system-incl-grub-configuration-debian-lenny

The above URL is the tutorial I followed.
OK, I did not install every part of The_Perfect_Server_-_Ubuntu_Lucid_Lynx_(Ubuntu_10_04)_[ISPConfig_2]. I only installed the following parts, as I only needed system users and did not need the other stuff. I wanted to make sure you knew which pieces of the install I did, in case that helps:

The base system through 11 Install Some Software

After the above I skipped to:

14 MySQL
15 Postfix With SMTP-AUTH And TLS
16 Courier-IMAP/Courier-POP3

I stopped after 16.
I've just finished that tutorial and will publish it soon. A few things are different because Ubuntu 10.04 uses GRUB2 instead of GRUB.
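For anyone following along, the practical difference with GRUB2 is that you no longer edit /boot/grub/menu.lst; the boot entries are generated into /boot/grub/grub.cfg. So after changing the disk layout for RAID, the general pattern looks roughly like this (a sketch only; the device names assume the two-disk setup discussed in this thread):

update-grub                # regenerate /boot/grub/grub.cfg from /etc/default/grub and /etc/grub.d/
grub-install /dev/sda      # put the GRUB2 boot code in the MBR of the first disk
grub-install /dev/sdb      # ...and the second, so the box can still boot if either drive dies
update-initramfs -u        # rebuild the initramfs so it can assemble the mdadm arrays at boot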
OK, following your how-to, How_To_Set_Up_Software_RAID1_On_A_Running_LVM_System_(Incl__GRUB2_Configuration)_(Ubuntu_10_04): I loaded my system initially on sdb, so the instructions were just reversed for me. On page 25 ("Now we reboot the system and hope (pray) that it boots ok from our RAID arrays: reboot"), once rebooted I got the following error:

error: file not found
error: you need to load the kernel first

Failed to boot both default & fallback entries.

Press any key to continue

I tried putting in the Ubuntu 10.04 boot disk and running a rescue, but I was not sure what to do; I went through it, got to a shell, and then exited, as I did not know what to edit. Help! It looks like I might be reloading the system again for the 5th or 6th time. Thanks, I really do appreciate all your help.
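In case it helps anyone who hits the same "you need to load the kernel first" error, one common repair path is to boot a live CD, assemble the array, activate LVM, chroot in, and reinstall GRUB2 on both disks. This is a sketch only, and the device names (md0 for /boot, an LV called root in the mail VG) are assumptions based on this thread, not confirmed:

# From an Ubuntu 10.04 live CD or Knoppix shell, as root:
mdadm --assemble --scan            # bring up the RAID arrays
vgchange -ay                       # activate the LVM volume group ("mail" here)
mount /dev/mail/root /mnt          # root LV name is an assumption; check with lvdisplay
mount /dev/md0 /mnt/boot           # /boot on md0 is an assumption; check cat /proc/mdstat
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
grub-install /dev/sda
grub-install /dev/sdb
update-grub
exit                               # leave the chroot, then unmount everything and reboot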
You can try to rescue your system with Knoppix. This tutorial might give you the idea: http://www.howtoforge.com/how-to-reset-a-forgotten-root-password-with-knoppix
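If you do boot Knoppix (or the Ubuntu live CD), a few read-only checks can show whether the arrays and partitions survived before you decide to wipe anything. Just a sketch; none of these commands change the disks:

cat /proc/mdstat                 # are any md arrays assembled, and from which partitions?
mdadm --examine /dev/sdb1        # does this partition carry an md superblock? (repeat for the other RAID partitions)
fdisk -l                         # compare the partition tables of sda and sdb
pvs; vgs; lvs                    # is the LVM metadata still visible?

If the arrays and the volume group look intact, the chroot/grub-install steps sketched above are usually enough to make the system bootable again.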
I tried using GParted to repair the system and was unable to do so, so I went ahead and blew away both drives and now have a freshly reloaded system. I may try the RAID tutorial for Ubuntu 10.04 again before I spend time configuring the system. Thanks again for all your help.
falko, do you have any idea why the system blew up like it did? I would like to have the drives mirrored in case one fails, and I would be willing to try the RAID tutorial for Ubuntu 10.04 again, but I would like some ideas, if possible, on what went wrong. Thanks.
Not sure, but maybe you mixed up drives somewhere along the road, or your grub or /etc/fstab configuration was wrong.
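One way to rule out a drive mix-up next time is to compare the three places that have to agree: GRUB2's generated config, /etc/fstab, and /etc/mdadm/mdadm.conf, ideally all referring to UUIDs or md/LVM device names rather than /dev/sdXN. A checking sketch (nothing here is specific to this particular install):

blkid                                  # UUIDs of every filesystem, LVM PV and RAID member
cat /proc/mdstat                       # which partitions each md array is actually built from
mdadm --detail --scan                  # ARRAY lines that should match /etc/mdadm/mdadm.conf
grep -v '^#' /etc/fstab                # mount entries; UUID= or /dev/mapper names are safest
grep 'root=' /boot/grub/grub.cfg       # the root device GRUB2 will pass to the kernel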