HOWTO: SUSE 10.0 and Software RAID a.k.a FakeRAID

Discussion in 'HOWTO-Related Questions' started by crushton, Dec 13, 2005.

  1. crushton

    crushton New Member

    Motivation: Recently purchased another hard drive to complement my existing hard drive in the hope of using a BIOS software RAID 0 (via the VIA chip) config with SUSE 10.0. This turned out to be a "no-go": 2.6 kernels apparently no longer support BIOS fakeraid setups. So, I rummaged through all the forums that even remotely discussed dmraid or RAID in general. Eventually I came across two howtos: one for Gentoo and the other for Ubuntu/Kubuntu. Neither provided enough info to get SUSE up and running. Of course, this would all be unnecessary if VIA Tech had simply released the Linux drivers as promised by the end of November. This did not happen, so I was on my own to find a way to "make" SUSE work. Thus, I present the consequence of my labour in the attached doc file. I hope it helps you to get SUSE up and running as it did me. If not, post a message here and tell me what went wrong. I'll try my best to help. Regards...C.R.

    EDIT: See below. I have attached an Open Document File (odt), and I have also reformatted the howto and posted it here for quick reference if you do not wish to download anything. Enjoy!
     

    Attached Files:

    Last edited: Dec 14, 2005
  2. falko

    falko Super Moderator ISPConfig Developer

    Could you make a PDF out of the doc file and post it here? :) Or simply post the content of the file here?
     
  3. crushton

    crushton New Member

    How about an Open Document File? PDF is too large and exceeds my upload limit for these forums =( If I post the content...all the formatting will be lost unless I reformat it for the forums, which will take quite a while. Hmm, well I guess I will do both (ODT and post content). Sorry that I used doc, at the time I was just trying to get the file size down.
    Hope this is sufficient...regards C.R.
    *********************************************************
    HOWTO: SUSE 10.0 and Software RAID a.k.a FakeRAID
    A Complete Guide by C. R.

    Due to the nature of SUSE 10.0, this how-to is rather long, but it is necessary in order to get SUSE installed and running correctly without a hitch. Also, this how-to was devised using BIOS software RAID 0; other RAID levels may work by following this guide, but you are on your own if they don't.

    Also, while I am sure there are quicker methods of reaching the same goal (e.g. if you have a spare disk, a few of the steps listed become unnecessary if other changes are made), I have purposefully left them out, as this guide is designed to be as generic as possible. Other than that, read carefully, send me a post if you have any questions, and good luck!

    Prerequisites:

    1. One of the following software RAID chip sets:
    Highpoint HPT37X
    Highpoint HPT45X
    Intel Software RAID
    LSI Logic MegaRAID
    NVidia NForce
    Promise FastTrack
    Silicon Image Medley
    VIA Software RAID

    2. A working SUSE 10.0 installation and the original installation CD/DVD (this guide assumes KDE as the GUI and does not contain any information regarding Gnome or the like). Also, this working installation of SUSE should be installed on a plain hard drive with no Linux software RAID or LVM enabled. Make sure it is formatted with the defaults presented during the original installation onto a single disk.
    3. Access to another PC via FTP, a spare hard drive (one which is not included in the RAID), 2 CD/DVD drives (one of which must be a burner), or some type of removable storage (i.e. USB drive etc, keep in mind however, about 1 GB of extra space will be required depending on the installation options you choose for SUSE 10.0)
    4. The latest source for dmraid which can be obtained from http://people.redhat.com/~heinzm/sw/dmraid/src/ (as of this writing, latest = 1.0.0.rc9). You'll want to keep the dmraid Internet address handy throughout this guide, so it would be best to write it down on a piece of paper.
    5. A Gentoo LiveCD (because it's quick and easy to use =P ) for your machine (i.e. if you have Intel x86 get the latest x86 version, or x86_64 if you have an AMD64 etc). Also, you should have a wired Ethernet card; unfortunately, getting a wireless card to work with any distro's LiveCD is next to impossible. If you have both wired and wireless, use the wired card for Gentoo and do things as you normally would when the new SUSE install is about to be booted.
    6. The original kernel (i.e. 2.6.13-15-default) currently installed in your running SUSE 10.0 installation. If you updated to the newer patch 2.6.13-15.7-default, then you will have to use YaST to downgrade to the original.

    The Procedure:

    Step 1 – Installing the new SUSE 10.0 system

    Boot SUSE 10.0 and log into KDE
    Insert the SUSE 10.0 CD1 or DVD disk into your drive
    Start the YaST Control Center
    Under Software, choose Installation into Directory
    Click on Options and choose a Target Directory or leave as the default
    Check Run YaST and SuSEconfig on first boot
    DO NOT check Create Image
    Click Accept
    Click on Software and make your software choices
    Click Accept
    Click Next

    The new system is now being installed into the directory (default = /var/tmp/dirinstall); this may take some time depending on your software choices.
    When the installation is nearly complete, YaST will complain about the installation of the kernel. This can be safely ignored, as it is mkinitrd that is actually failing, and we must build our own anyway.

    Step 2 – Preparing the new SUSE install for RAID (i.e. hacking it)

    Make a directory on your desktop and call it backup, then copy and paste the following files/folders to it (a terminal equivalent is sketched just after this list):

    /boot (this is a directory...duh!)
    /sbin/mkinitrd (script file – the one that failed earlier during install)
    /etc/fstab (mounted file system file – or rather what should be mounted during boot)
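    A rough terminal equivalent of the copy above, assuming the backup directory sits on your desktop:
    Code:
    mkdir -p ~/Desktop/backup
    sudo cp -a /boot ~/Desktop/backup/
    sudo cp -p /sbin/mkinitrd /etc/fstab ~/Desktop/backup/
    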
    Now, open the original /sbin/mkinitrd in Kate with root permissions so it can be modified.
    Select View->Show Line Numbers from Kate's menu.
    At line 1178, insert the following exactly:
    Code:
            # Add dmraid
            echo "Adding dmraid..."
            cp_bin /sbin/dmraid $tmp_mnt/sbin/dmraid
    
    Make sure to have an empty line above and below the new code.
    At line 1971, insert the following exactly:
    Code:
            cat_linuxrc <<-EOF
            |# Workaround: dmraid should not probe cdroms, but it does.
            |# We'll remove all cdrom device nodes till dmraid does this check by itself.
            |for y in hda hdb hdc hdd hde hdf hdg hdh sr0 sr1 sr2 sr3;
            |do
            |       if (grep -q "$y" /proc/sys/dev/cdrom/info)
            |       then
            |               rm -f /dev/"$y"
            |       fi
            |done
            |# Now we can load dmraid
            |dmraid -ay -i
            EOF
        echo
    

    NOTE: This is VERY IMPORTANT! The spaces before the | character are tabs and MUST be tabs.


    Make sure to have an empty line above and below the new code.
    At line 2927, insert the following exactly:
    Code:
            # HACKED: prevent LVM and DM etc from being detected
    Now, comment out (i.e. place a # character at the beginning of each line, as in the code you just inserted) all lines from 2929 to 2941.
    Save the file.

    This next part requires gcc to be installed on your system, so run sudo yast -i gcc gcc-c++ at a command line if you do not already have it installed.
    Download the latest version of dmraid from the web address listed above in the prerequisites section. Also, be sure to download the one with tar.bz2 as the extension. Extract it to your desktop. Find the file tools/Makefile.in within the extracted folder and open it in Kate. Remove line number 36 or comment it out as mentioned above with a # character. Then, in a terminal with root permissions (i.e. type su -), cd to the newly extracted dmraid directory on your desktop. While in the directory that lists the configure script file, type:
    Code:
    	./configure
    	make
    	cp -f tools/dmraid /sbin/dmraid
    	vi /etc/sysconfig/kernel
    
    Near the top of the file opened by the last command, there should be a line that looks similar to this:
    Code:
    INITRD_MODULES="sata_via via82cxxx reiserfs processor thermal fan"
    
    Write the information within the quotes on a piece of paper, then add dm-mod just before the closing quote. In vi, press Ins to enter insert mode; once you have made the change, press Esc, type :w to save, and finally :q to quit.
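    For example, if your line matched the one shown above, it would read like this after the edit (your module list will very likely differ):
    Code:
    INITRD_MODULES="sata_via via82cxxx reiserfs processor thermal fan dm-mod"
    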

    Back at the command prompt, type mkinitrd. If all goes well, you should see Adding dmraid... and a bunch of other messages that don't say error. We should now have a new initrd/initramfs located in the /boot directory; in fact, it replaced the one that was there originally. Copy this new file to your new SUSE installation by issuing the following command:
    Code:
    	cp /boot/initrd-2.6.13-15-default your-new-suse-installation-directory/boot/initrd-2.6.13-15-default
    
    Copy some other needed files to the new system:
    Code:
    	cp /boot/initrd your-new-suse-installation-directory/boot/initrd
    	cp /sbin/dmraid  your-new-suse-installation-directory/sbin/dmraid
    	cp /sbin/mkinitrd your-new-suse-installation-directory/sbin/mkinitrd
    	cp /etc/sysconfig/kernel your-new-suse-installation-directory/etc/sysconfig/kernel
    	cp /etc/fstab your-new-suse-installation-directory/etc/fstab
    
    Copy and paste your /boot/grub directory over to your-new-suse-installation-directory/boot directory. You will need root permissions to do this, so use File Manager – Super User Mode if necessary.
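    If you prefer the terminal over the file manager, the same copy can be done from a root shell, for example:
    Code:
    cp -a /boot/grub your-new-suse-installation-directory/boot/
    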

    Step 3 – Archiving and storing the new SUSE installation

    Navigate using the File Manager – Super User Mode and go to the new SUSE installation directory. Select all the directories contained within, right-click and choose Compress->Add to Archive... . In the new window change Location to the directory and filename you want and Open as to Gzipped Tar Archive. This may take a while...
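    Alternatively, roughly the same archive can be created with tar from a root terminal; a minimal sketch, assuming the archive is written outside the installation directory so it does not end up containing itself:
    Code:
    cd your-new-suse-installation-directory
    tar -czf /tmp/your-new-suse-installation-archive.tar.gz *
    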

    Once finished, copy your-new-suse-installation-archive.tar.gz to whatever medium you like, as long as it will be retrievable once your RAID hard drives have been wiped clean. For example, copy it to a CD/DVD disc if you have 2 or more CD/DVD drives, or to a spare hard drive that will not be included in the RAID; in my case, I had to ftp it to a remote computer running Windows XP (sad but true). Originally, I didn't compress the archive and it was 2GB, and oddly, Windows wouldn't allow it to be retrieved by ftp afterwards; however, once compressed down to less than 1GB, no problem...just one of the many reasons why I now use Linux!
     

    Attached Files:

    Last edited: Dec 14, 2005
  4. crushton

    crushton New Member

    Step 4 – Setting up the RAID and restoring the new SUSE installation onto it

    Make sure you have a working wired Internet connection, place the Gentoo LiveCD into your drive, reboot, change the BIOS accordingly to boot from CD, and set up your RAID in its BIOS to configure your RAID disks. At the boot: prompt just hit Enter, and do the same for every option thereafter until you get to the Gnome desktop.
    Download the dmraid source, like you did before, to the Gnome desktop. Extract it to the desktop, then navigate to the extracted directory via a command terminal window with root permissions. This is done by typing sudo su - at the command prompt in the terminal window.
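    If you'd rather fetch the source from the terminal, something along these lines works (the exact tarball name depends on the current release, 1.0.0.rc9 at the time of writing):
    Code:
    wget http://people.redhat.com/~heinzm/sw/dmraid/src/dmraid-1.0.0.rc9.tar.bz2
    tar -xjf dmraid-1.0.0.rc9.tar.bz2
    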
    Compile the source in the same manner as before (you will have to modify the tools/Makefile.in file once again; you can use vi this time, now that you know how):
    Code:
    vi extracted-dmraid-directory/tools/Makefile.in
    
    After editing the line in Makefile.in, type:
    Code:
    ./configure
    make
    modprobe dm-mod
    tools/dmraid -ay -i
    ls /dev/mapper
    
    Your output should resemble something like:
    Code:
    control via_ebfejiabah
    
    The important file (or, more correctly, device node) is the one that begins with via_. It will have a different prefix depending on your RAID hardware. Make note of it; for simplicity I will use via_ebfejiabah, and you should substitute yours. Now type:
    Code:
    fdisk /dev/mapper/via_ebfejiabah
    
    Set up at least 2 partitions with fdisk: one of type 82 for your swap and the other of type 83 for your main SUSE installation. Refer to the fdisk help (m for help) for info on what to do. Afterwards, and before writing the partition tables and exiting fdisk, type p to print the partition table. Your output might look something like this:
    Code:
    Command (m for help): p
    
    Disk /dev/mapper/via_ebfejiabah: 163.9 GB, 163928603648 bytes
    255 heads, 63 sectors/track, 19929 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
                         Device Boot      Start         End      Blocks   Id  System
    /dev/mapper/via_ebfejiabah1               1         125     1004031   82  Linux swap / Solaris
    /dev/mapper/via_ebfejiabah2             126       19929   159075630   83  Linux
    
    The important parts of the output are the heads, sectors and cylinders values; make note of them from your own output (i.e. heads=?, sectors=? and cylinders=?). We will need them later.
    You may now write the partition table and quit fdisk. You must now reboot and start the LiveCD again, following everything in this step again, excluding the initial RAID BIOS setup, up to the point where we begin to use fdisk. We don't need to set up the partitions again. Gain access to your-new-suse-installation-archive.tar.gz by either mounting the spare disk, mounting the CD drive, using ftp, etc. Remember, to mount a volume, type:
    Code:
    mkdir /mnt/your-mount-point
    mount -t your-volumes-filesystem /dev/your-device /mnt/your-mount-point
    
    If using ftp, like I had to, use Gnome's Connect to Server and it will mount the ftp directory on the desktop. Now we must format the new partitions and extract our new installation onto the root partition. Type the following:
    Code:
    mkswap /dev/mapper/via_ebfejiabah1
    mkreiserfs /dev/mapper/via_ebfejiabah2
    mkdir /mnt/suse10
    mount -t reiserfs /dev/mapper/via_ebfejiabah2 /mnt/suse10
    
    Of course you'll want to replace the device names above with your specific settings/info. Copy your-new-suse-installation-archive.tar.gz to /mnt/suse10 and extract it using tar at the command prompt.
    For example:
    Code:
    cd /mnt/suse10
    tar --preserve -xf your-new-suse-installation-archive.tar.gz
    
    This will take a while...then:
    Code:
    rm your-new-suse-installation-archive.tar.gz
    vi etc/fstab
    
    In vi, change your root device to /dev/mapper/your-root-partition and your swap device to /dev/mapper/your-swap-partition (i.e. mine were via_ebfejiabah2 and via_ebfejiabah1 respectively).
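    For example, using the device names above, the two affected lines in etc/fstab would end up looking roughly like this (filesystem type and mount options depend on your installation):
    Code:
    /dev/mapper/via_ebfejiabah2  /     reiserfs  acl,user_xattr  1 1
    /dev/mapper/via_ebfejiabah1  swap  swap      defaults        0 0
    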

    Step 5 – Making GRUB work with RAID

    First we need to modify some files in the /mnt/suse10/boot/grub directory using vi. Type the following:
    Code:
    cd /mnt/suse10/boot/grub
    vi device.map
    
    The structure of the device.map file is fairly simple. Just make sure that each entry corresponds to your new drive layout. For example:
    Code:
    (hd0) /dev/mapper/your-raid-device
    
    Save the changes then edit the Grub menu:
    Code:
    vi menu.lst
    
    My menu reads as follows:
    Code:
    # Modified by YaST2. Last modification on Sun Dec 11 20:40:40 UTC 2005
    
    color white/blue black/light-gray
    default 0
    timeout 5
    gfxmenu (hd0,1)/boot/message
    
    ###Don't change this comment - YaST2 identifier: Original name: linux###
    title SUSE LINUX 10.0
        root (hd0,1)
        kernel /boot/vmlinuz root=/dev/mapper/via_ebfejiabah2 vga=0x31a selinux=0 resume=/dev/mapper/via_ebfejiabah1 splash=silent showopts
        initrd /boot/initrd
    
    ###Don't change this comment - YaST2 identifier: Original name: failsafe###
    title Failsafe -- SUSE LINUX 10.0
        root (hd0,1)
        kernel /boot/vmlinuz root=/dev/mapper/via_ebfejiabah2 vga=normal showopts ide=nodma apm=off acpi=off noresume selinux=0 edd=off 3
        initrd /boot/initrd
    
    The necessary changes are the (hd0,1) root entries and the /dev/mapper device names in the kernel lines; adjust them to your configuration appropriately. Now we install the grub MBR on our disk so it finds and boots SUSE – or, more correctly, the kernel and initrd/initramfs.
    When using grub, we must know the partition layout of our disks. In the following example, my partitions were set up as displayed by the fdisk output shown above in step 4. My root partition for Linux/SUSE was the second partition, thus, when using grub, I have to refer to that partition as (hd0,1), whereas (hd0,0) would refer to the first rather than the second. Also, (hd0) refers to the first disk, assuming you installed your RAID as the first 2 or more disks. I assume you get the idea. Just make sure the numbers correspond to your particular setup when typing in the details below. Type the following in a terminal with root permissions (i.e. sudo su -):
    Code:
    grub
    
    At the grub prompt type:
    Code:
    device (hd0,1) /dev/mapper/via_ebfejiabah2
    device (hd0) /dev/mapper/via_ebfejiabah
    
    This is where we need the fdisk info recorded earlier. Replace the cylinders, heads and sectors numbers with yours:
    Code:
    geometry (hd0) 19929 255 63
    root (hd0,1)
    setup (hd0)
    
    You should now get some output, but nothing referring to errors. Thus, all is well so far.

    Step 6 – Booting the new SUSE installation

    At this point the new installation is ready to be booted. Just make sure your BIOS settings are configured for booting from your RAID disk setup, and you should probably disable boot from CD. Assuming everything worked, a familiar SUSE boot screen should appear and SUSE should begin the boot process. On first boot, SUSE will start YaST. We selected this option earlier during the installation, and it is required to properly set up the new system. Just follow the instructions and do what you normally would during SUSE installation. The only significant difference is that YaST is displayed in terminal mode rather than the GUI; otherwise, it is identical to the GUI counterpart. Once YaST has completed, the system defaults to terminal mode.
    You will need to edit the /etc/inittab file in order to boot into graphical mode by default. This is rather simple; at the command prompt type the following:
    Code:
    vi /etc/inittab
    
    And then find the line that says:
    Code:
    id:3:initdefault:
    
    Change the 3 to a 5, save the file, exit and reboot.

    DONE...Have fun!
     
    Last edited: Dec 14, 2005
  5. crushton

    crushton New Member

    Just to be on the safe side...have a look at the attached mkinitrd. Yours should be identical. You can either just use mine or follow the directions and do it yourself. I recommend that you try it yourself, however =)
    Also, just in case the question gets asked, which I am sure someone intuitive enough will do, the reason the commented-out lines near the end of the file relating to LVM are required is this...

    If you ever plan on updating your kernel (i.e. through YOU, the online updater), which of course is highly recommended considering the bug fixes, then SUSE will try to rebuild the initrd image. This is not good news without these lines commented out. Basically, SUSE will assume you have LVM-partitioned disks because it detects the use of the device-mapper and isn't aware that we are using it for our own purposes, which currently are not supported. Therefore, we are preventing SUSE from making this false assumption about our disk layout and thus retaining our forced setup, allowing mkinitrd to run none the wiser. With this being said, it may also be a good idea to back up your modified mkinitrd script in the unfortunate event that a future SUSE update replaces it. However, if this happens, chances are they added something new to the boot process that is necessary in the initrd...to be on the safe side, always read the updates YOU is providing, and don't be too hasty accepting the updates unless you're sure this critical file is not being replaced.
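    One simple way to keep such a backup (the destination path is just a suggestion):
    Code:
    cp -p /sbin/mkinitrd /root/mkinitrd.dmraid
    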

    Don't forget to change the permissions on this file after downloading it; only root should have write access!

    Regards...C.R.
     

    Attached Files:

  6. mshah

    mshah New Member

    Need help - boot from IDE, can't see RAID volumes

    1. Have 1 IDE drive that hosts SUSE 10, XP and another partition.
    2. Then have 4 x 250 GB SATA drives on an Intel motherboard with Intel software RAID.
    3. Have created 3 volumes/partitions on the SATA drives. The first one is a 250 GB RAID1 on the first 2 drives, then on the latter 2 drives I created 215 GB RAID1 and 70 GB RAID0 partitions.

    Now the problem description:
    I can use all 3 RAID volumes correctly on XP. However, when I boot SUSE, I do not see the RAID0 volume at all, and I see the RAID1 volumes as unbound (4 volumes vs. 2). This happened before I tried the attached howtos and without using dmraid.

    Tried to follow the instructions posted here for 2 days, made adjustments as suggested and considered that I'm not booting from the RAID drive so it should be simpler, but it didn't help. I must be doing something wrong.

    Any help would be appreciated. I'm a Linux newbie, so please consider that.
     
  7. till

    till Super Moderator Staff Member ISPConfig Developer

    As far as I know, the SATA RAID controllers that are available as onboard controllers are currently not supported by Linux.
     
  8. mshah

    mshah New Member

    I thought that this thread and the howtos address how to make Linux work with those SATA (fake) RAIDs. Are you sure that SATA RAIDs will not work with Linux?
     
  9. till

    till Super Moderator Staff Member ISPConfig Developer

    Yes, this thread is about how to make fake RAIDs work.
    
    You see one RAID volume in Windows because RAID drivers exist for Windows.
    On Linux you see the single hard disks; that's because there are no Linux RAID drivers available for your SATA controller.
    
    That explains why you see 4 vs. 2 volumes.

    If you explain the errors you get in a bit more detail, we can try to fix them.
     
    Last edited: Jan 4, 2006
  10. Dieda2000

    Dieda2000 New Member

    waiting to appear ...

    Hi,
    Nice guide, works almost like a charm.
    Apart from the fact that every third or fourth boot my machine hangs while displaying:
    ".. waiting for /dev/mapper/sil_afbieacedhaj2 to appear ..."
    As said, the other times it works.

    Moreover, while booting there is always the message
    "grep: command not found"
    How did you use grep at this early stage of booting?

    Specs: Suse 10.0 x86-x64, A8n-SLI Prem, pcie-Conroller Sil 3132
    kernel: 2.6.15-rc6-smp

    Another note:
    Silicon Image's RAID controllers like the 3132 or 3114 can use a certain mixed-mode RAID, like Intel's Matrix RAID on the ICH6 or ICH7. For example, I use two Maxtor 6V300F0 drives; I created a RAID0 array on the first 200GB of each disk and a RAID1 array on the remaining 100GB of each disk. I can use it with Windows, but dmraid can only discover the first RAID array.
    I think it's a nice feature. Any clues how to make dmraid discover it?
     
  11. falko

    falko Super Moderator ISPConfig Developer

    Is grep installed? Run
    Code:
    which grep
    to find out.
     
  12. mshah

    mshah New Member

    Till - thanks for the response. As I explained, I don't see any error. The only thing I see is that the RAID0 volume is not visible at all, while the other RAID1 volumes are visible as unbound, so I can't use them. Should I be attaching some file from the computer so that we can find out what's going on? Let me know where to look for the boot log file or any other file and I'll attach it here. Again, thanks for your help.
     
  13. joek9k

    joek9k New Member

    making software RAID work in Linux

    I managed to trick Fedora Core 4 into using my Silicon Image (SIL) SATA controller for RAID 1 (mirroring) by first configuring the software RAID 1 in SuSE 10 on a fresh install:
    md0 mounted /
    md1 mounted /home
    md2 mounted /swap

    The funny part is that the whole reason I then took these partitions into Fedora Core 4 was because, after doing the install with SuSE 10, formatting the drives and installing everything, SuSE 10 kernel panicked upon first reboot.

    So I put Fedora Core 4 in there, DiskDruid picked up the partition info, and then I installed it. It gave me a warning message that you'd have to see to believe, but after Fedora installed I could definitely hear the drives working as a software RAID (the sound of the configuration is a dead give-away, it's like an echo, same as a recent XP Pro install I did). So it worked, but it didn't work in my OS of choice (SuSE 10).

    Another thing is I had to go and purchase the Silicon Image controller (PCI) for 40 bucks (a software RAID controller), which makes me want to take back my SATA drives, just get a couple of IDE drives, do a software IDE RAID and save all the effort.

    Now that I see how much BS the software RAID is, I'm thinking that a 3ware hardware RAID controller with true Linux support and a server motherboard with 64-bit PCI is probably worth the money, because my time is worth a lot more than all this BS. Anyone selling a server motherboard for cheap? :)

    It'd be nice if all this BIOS software RAID worked right now and saw one drive instead of two. I've been searching for other distros, but I think it'd be cheaper to just get a real server instead of trying to turn a $50 motherboard into a real server. Maybe in kernel 2.7.
     
  14. markes

    markes New Member

    Nice howto, it works fine, but I have noticed some peculiar things:

    - after boot and login to KDE,
    in My Computer/media the floppy is always mounted, although I unmounted it.
    media:/ -Konqueror
    /Diskette (fd0)
    /DVD (dvdram)
    /Festplatte (mapper/via_bdeaacdjjh1)
    /Festplatte (mapper/via_bdeaacdjjh10)
    /Festplatte (mapper/via_bdeaacdjjh4)
    /Festplatte (mapper/via_bdeaacdjjh5)
    /Festplatte (mapper/via_bdeaacdjjh6)
    /Festplatte (mapper/via_bdeaacdjjh7)
    /Festplatte (mapper/via_bdeaacdjjh8)
    /Festplatte (mapper/via_bdeaacdjjh9)

    - If I log out and then log in as a different or the same user without rebooting, I get:
    MyComputer/media:/ -Konqueror
    /8.4G Medium
    /Diskettenlaufwerk
    after clicking 8.4G Medium I get the message:
    Could not mount device.
    The reported error was:
    mount: can't find /dev/sda1 in /etc/fstab or /etc/mtab

    - some warnings and errors in /var/log/messages and /var/log/boot.msg
    my /var/log/messages (the warning "grep not found" appears too):
    ...
    Mar 14 10:31:41 linux kernel: attempt to access beyond end of device
    Mar 14 10:31:41 linux kernel: sda: rw=0, want=312581850, limit=312581808
    Mar 14 10:31:41 linux kernel: printk: 807 messages suppressed.
    Mar 14 10:31:41 linux kernel: Buffer I/O error on device dm-0, logical block 312581804
    ...
    Mar 14 10:32:01 linux kernel: bootsplash: status on console 0 changed to on
    Mar 14 10:32:01 linux hal-subfs-mount[6327]: By hald-subfs-mount created dir /media/floppy got removed.
    Mar 14 10:32:01 linux kernel: printk: 90 messages suppressed.
    Mar 14 10:32:01 linux kernel: Buffer I/O error on device sda4, logical block 8241312
    ...
    Mar 14 10:32:02 linux hal-subfs-mount[6338]: MOUNTPOINT:: /media/floppy
    Mar 14 10:32:02 linux kernel: subfs 0.9
    Mar 14 10:32:02 linux hal-subfs-mount[6338]: Collected mount options and Called(0) /bin/mount -t subfs -o fs=floppyfss,sync,procuid,nosuid,nodev,exec /dev/fd0 "/media/floppy"
    Mar 14 10:32:02 linux kernel: end_request: I/O error, dev fd0, sector 0
    Mar 14 10:32:02 linux submountd: mount failure, No such device or address
    Mar 14 10:32:02 linux kernel: end_request: I/O error, dev fd0, sector 0
    Mar 14 10:32:02 linux kernel: subfs: unsuccessful attempt to mount media (256)

    /var/log/boot.msg
    ...
    <6>scsi0 : sata_via
    <7>ata2: dev 0 cfg 49:2f00 82:746b 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:80ff
    <6>ata2: dev 0 ATA, max UDMA7, 312581808 sectors: lba48
    <6>ata2: dev 0 configured for UDMA/133
    <6>scsi1 : sata_via
    <5> Vendor: ATA Model: SAMSUNG HD160JJ Rev: ZM10
    <5> Type: Direct-Access ANSI SCSI revision: 05
    <5>SCSI device sda: 312581808 512-byte hdwr sectors (160042 MB)
    <5>SCSI device sda: drive cache: write back
    <5>SCSI device sda: 312581808 512-byte hdwr sectors (160042 MB)
    <5>SCSI device sda: drive cache: write back
    <6> sda: sda1 sda2 < > sda3 sda4
    <5>Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
    <5> Vendor: ATA Model: SAMSUNG HD160JJ Rev: ZM10
    <5> Type: Direct-Access ANSI SCSI revision: 05
    <5>SCSI device sdb: 312581808 512-byte hdwr sectors (160042 MB)
    <5>SCSI device sdb: drive cache: write back
    <5>SCSI device sdb: 312581808 512-byte hdwr sectors (160042 MB)
    <5>SCSI device sdb: drive cache: write back
    <6> sdb:<3>Buffer I/O error on device sda3, logical block 2361344
    <3>Buffer I/O error on device sda3, logical block 2361345
    <3>Buffer I/O error on device sda3, logical block 2361346
    <3>Buffer I/O error on device sda3, logical block 2361347
    <3>Buffer I/O error on device sda3, logical block 2361348
    <3>Buffer I/O error on device sda3, logical block 2361349
    <3>Buffer I/O error on device sda3, logical block 2361350
    <3>Buffer I/O error on device sda3, logical block 2361351
    <3>Buffer I/O error on device sda4, logical block 8241312
    <3>Buffer I/O error on device sda4, logical block 8241313
    <5>Attached scsi generic sg0 at scsi0, channel 0, id 0, lun 0, type 0
    ...
    <3>Buffer I/O error on device sda3, logical block 2361344
    ..
    Loading required kernel modules
    doneRestore device permissionsdone
    Warning: ignoring extra data in partition table 5
    Warning: ignoring extra data in partition table 5
    Warning: ignoring extra data in partition table 5
    Warning: invalid flag 0xffffbf76 of partition table 5 will be corrected by w(rite)
    Disk /dev/sdb doesn't contain a valid partition table
    Activating remaining swap-devices in /etc/fstab...

    I have installed it like you described, without any errors:
    - the line in /etc/sysconfig/kernel I have changed to
    INITRD_MODULES="sata_via via82cxxx processor thermal fan reiserfs dm-mod"

    -my /boot/grub/device.map
    (fd0) /dev/fd0
    (hd0) /dev/mapper/via_bdeaacdjjh

    -my /boot/grub/menu.lst
    color white/blue black/light-gray
    default 0
    timeout 8
    gfxmenu (hd0,3)/boot/message

    ###Don't change this comment - YaST2 identifier: Original name: windows###
    title Windows
    chainloader (hd0,0)+1

    ###Don't change this comment - YaST2 identifier: Original name: linux###
    title SUSE LINUX 10.0
    root (hd0,3)
    kernel /boot/vmlinuz root=/dev/mapper/via_bdeaacdjjh4 vga=0x317 selinux=0 resume=/dev/mapper/via_bdeaacdjjh3 splash=silent showopts
    initrd /boot/initrd

    ###Don't change this comment - YaST2 identifier: Original name: floppy###
    title Diskette
    chainloader (fd0)+1

    ###Don't change this comment - YaST2 identifier: Original name: failsafe###
    title Failsafe -- SUSE LINUX 10.0
    root (hd0,3)
    kernel /boot/vmlinuz root=/dev/mapper/via_bdeaacdjjh4 vga=normal showopts ide=nodma apm=off acpi=off noresume selinux=0 nosmp noapic maxcpus=0 edd=off 3
    initrd /boot/initrd

    linux:/home/mk # fdisk -l
    Warning: ignoring extra data in partition table 5
    Warning: ignoring extra data in partition table 5
    Warning: ignoring extra data in partition table 5
    Warning: invalid flag 0xffffbf76 of partition table 5 will be corrected by w(rite)
    
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 1020 8193118+ 7 HPFS/NTFS
    /dev/sda2 1021 36715 286720087+ f W95 Ext'd (LBA)
    /dev/sda3 36716 36862 1180777+ 82 Linux Swap / Solaris
    /dev/sda4 36863 38914 16482690 83 Linux
    /dev/sda5 ? 44606 181585 1100285363 3c PartitionMagic recovery
    
    Disk /dev/sdb: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Disk /dev/sdb doesn't contain a valid partition table


    linux:/home/mk # fdisk /dev/mapper/via_bdeaacdjjh

    The number of cylinders for this disk is set to 38914.
    There is nothing wrong with that, but this is larger than 1024
    and could in certain setups cause problems with:
    1) software that runs at boot time (e.g., old versions of LILO)
    2) booting and partitioning software from other operating systems
    (e.g., DOS FDISK, OS/2 FDISK)
    
    Command (m for help): p
    
    Disk /dev/mapper/via_bdeaacdjjh: 320.0 GB, 320083770368 bytes
    255 heads, 63 sectors/track, 38914 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    
    Device Boot Start End Blocks Id System
    /dev/mapper/via_bdeaacdjjh1 * 1 1020 8193118+ 7 HPFS/NTFS
    /dev/mapper/via_bdeaacdjjh2 1021 36715 286720087+ f W95 Erw. (LBA)
    /dev/mapper/via_bdeaacdjjh3 36716 36862 1180777+ 82 Linux Swap / Solaris
    /dev/mapper/via_bdeaacdjjh4 36863 38914 16482690 83 Linux
    /dev/mapper/via_bdeaacdjjh5 1021 3570 20482843+ 7 HPFS/NTFS
    /dev/mapper/via_bdeaacdjjh6 3571 6120 20482843+ 7 HPFS/NTFS
    /dev/mapper/via_bdeaacdjjh7 6121 18868 102398278+ 7 HPFS/NTFS
    /dev/mapper/via_bdeaacdjjh8 18869 31616 102398278+ 7 HPFS/NTFS
    /dev/mapper/via_bdeaacdjjh9 31617 36575 39833136 7 HPFS/NTFS
    /dev/mapper/via_bdeaacdjjh10 36576 36715 1124518+ b W95 FAT32


    my /etc/fstab
    /dev/mapper/via_bdeaacdjjh4 / reiserfs acl,user_xattr 1 1
    /dev/mapper/via_bdeaacdjjh1 /windows/C ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
    /dev/mapper/via_bdeaacdjjh5 /windows/D ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
    /dev/mapper/via_bdeaacdjjh6 /windows/E ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
    /dev/mapper/via_bdeaacdjjh7 /windows/F ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
    /dev/mapper/via_bdeaacdjjh8 /windows/G ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
    /dev/mapper/via_bdeaacdjjh9 /windows/H ntfs noauto,ro,users,gid=users,umask=0002,nls=utf8 0 0
    /dev/mapper/via_bdeaacdjjh10 /windows/I vfat users,gid=users,umask=0002,utf8=true 0 0
    /dev/mapper/via_bdeaacdjjh3 swap swap defaults 0 0
    proc /proc proc defaults 0 0
    sysfs /sys sysfs noauto 0 0
    usbfs /proc/bus/usb usbfs noauto 0 0
    devpts /dev/pts devpts mode=0620,gid=5 0 0
    /dev/dvdram /media/dvdram subfs noauto,fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8 0 0
    /dev/fd0 /media/floppy subfs noauto,fs=floppyfss,procuid,nodev,nosuid,sync 0 0

    greets
    markes
     
  15. HBauer

    HBauer New Member

    I installed everything following the howto on my system using a VIA SATA controller. But booting the installed system results in a lot of timeouts for the underlying SATA disks (/dev/sda, /dev/sdb). Also the "missing grep" message appears. Any recommendations?

    Greetings, HB

    BTW: Isn't it possible to do the part you did with the Gentoo LiveCD from the installed SuSE using chroot?
     
  16. markes

    markes New Member

    The warnings: "grep: command not found" is produced by this Code in line 1971 from mkinitrd:
    cat_linuxrc <<-EOF
    |# Workaround: dmraid should not probe cdroms, but it does.
    |# We'll remove all cdrom device nodes till dmraid does this check by itself.
    |for y in hda hdb hdc hdd hde hdf hdg hdh sr0 sr1 sr2 sr3;
    |do
    | if (grep -q "$y" /proc/sys/dev/cdrom/info)
    | then
    | rm -f /dev/"$y"
    | fi
    |done
    |# Now we can load dmraid
    |dmraid -ay -i
    EOF

    Solution:
    Find out on which port your CD-ROM drive hangs and delete all the other entries. In my case the CD-ROM hangs on the secondary port as master, hence "hdc". So I deleted all except hdc (|for y in hdc;) and saved the file. After that you have to run mkinitrd in a console as root.
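    A minimal sketch of the reduced block, assuming the CD-ROM really is hdc: since the device is then known, the grep test can be dropped entirely, which also avoids the "grep: command not found" message (grep is not available inside the initrd). As before, the indentation in front of the | characters must be tabs:
    Code:
            cat_linuxrc <<-EOF
            |# CD-ROM is known to be hdc, remove its node so dmraid does not probe it
            |rm -f /dev/hdc
            |# Now we can load dmraid
            |dmraid -ay -i
            EOF
    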

    The warnings
    ...
    Mar 19 08:58:44 linux kernel: attempt to access beyond end of device
    Mar 19 08:58:44 linux kernel: sda: rw=0, want=312581850, limit=312581808
    Mar 19 08:58:44 linux kernel: Buffer I/O error on device dm-0, logical block 312581804
    Mar 19 08:58:44 linux kernel: attempt to access beyond end of device
    Mar 19 08:58:44 linux kernel: sda: rw=0, want=312581852, limit=312581808
    Mar 19 08:58:44 linux kernel: Buffer I/O error on device dm-0, logical block 312581805
    Mar 19 08:58:44 linux kernel: attempt to access beyond end of device
    Mar 19 08:58:44 linux kernel: sda: rw=0, want=312581854, limit=312581808
    Mar 19 08:58:44 linux kernel: Buffer I/O error on device dm-0, logical block 312581806
    etc

    are produced by your Linux kernel. You have to patch your kernel if these warnings bother you:

    Code:
    diff -Nur linux-2.6.15/fs/partitions/check.c linux-2.6.15-check/fs/partitions/check.c
    --- linux-2.6.15/fs/partitions/check.c	2006-01-03 04:21:10.000000000 +0100
    +++ linux-2.6.15-check/fs/partitions/check.c	2006-02-08 21:20:03.000000000 +0100
    @@ -175,8 +175,19 @@
     		memset(&state->parts, 0, sizeof(state->parts));
     		res = check_part[i++](state, bdev);
     	}
    -	if (res > 0)
    +	if (res > 0) {
    +		sector_t from, cap;
    +		for(i = 1; i < state->limit; i++) {
    +			from = state->parts[i].from;
    +			cap = get_capacity(hd);
    +			if(state->parts[i].size + from > cap) {
    +				printk(KERN_WARNING " %s: partition %s%d beyond device capacity\n",
    +						hd->disk_name, hd->disk_name, i);
    +				state->parts[i].size = cap - (from < cap ? from : cap);
    +			}
    +		}
     		return state;
    +	}
     	if (!res)
     		printk(" unknown partition table\n");
     	else if (warn_no_part)
    Look at http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/ and https://www.redhat.com/archives/ataraid-list/2006-February/msg00015.html for further information. From http://tienstra4.flatnet.tudelft.nl/~gerte/gen2dmraid/gen2dmraid-2.0.iso you can also download the Gentoo-based LiveCD with dmraid-1.0.0-rc9, so you can use Gentoo directly without installing dmraid.

    greets
    markes
     
    Last edited: Mar 19, 2006
  17. HBauer

    HBauer New Member

    Thanks for your answer, but that's not exactly my problem. ;)
    Booting the original SuSE kernel results in endless hanging periods (ata1/2 timeout command 0x?? stats 0x?? host_stats 0x??).
    Booting a reduced kernel I compiled myself results in a kernel panic:
    Code:
    waiting for device /dev/mapper/via_ebdfgdfgeg2 to appear: ok
    no record for 'mapper/via_ebdfgdfgeg2' in database
    rootfs: major=254 minor=2 devn=65026
    Mounting root /dev/mapper/via_ebdfgdfgeg2
    mount: no such device
    umount2: device or ressource busy
    Kernel panic - not syncing: Attempted to kill init!
    I suspect udev is responsible for that. Does anybody know the exact reason?

    Booting the same modified kernel (my own compilation) from a separate hard disk drastically reduces the timeout hanging time.

    Any suggestions about that? :)

    Greetings, HB
     
    Last edited: Mar 20, 2006
  18. markes

    markes New Member

    Hmmm, seems to be an fstab or mkinitrd problem. Have you also tried the mkinitrd from crushton?

    greets
    markes
     
  19. HBauer

    HBauer New Member

    Yes, that's exactly the one I tried. I also tried Fedora 5 and Gentoo; both of them work with the same hardware, but I don't know where to start with the analysis of mkinitrd...

    Greetings, HB
     
  20. mgosr

    mgosr New Member

    For now...

    If you aren't too picky about which distro, RedHat Fedora Core 5 [Bordeaux] set up (AND BOOTED!!) without a glitch on my VIA SATA RAID 0 and dual Opteron 242. It could be best to get it and move on, at least for now.

    Eventually maybe we'll have answers as to why support for 64-bit technology and SATA RAID has been so lagging. Our hardware will be outdated before working solutions arrive. Anyone know if there is a 64-bit Flash plug-in?
     
