Hey, I recently tried the software RAID1 tutorial for Debian Etch on a virtual machine, and it worked flawlessly. Now that I'm trying to implement it in a working environment with a real server and two 80 GB SATA hard drives, the /boot and swap partitions synchronize properly, but when I get to the root partition, I get an error saying that the device is busy or unavailable. I've tried forcing it and unmounting and remounting it; nothing works. I've searched Google for people with the same problem, and nobody seems to be able to fix it. Any ideas? One thing I should probably mention, though I doubt it makes a difference: the hard drives are a Seagate and a Western Digital. I also tried this with two 20 GB Maxtors of the same model, and it was still a no-go.
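If it helps with diagnosis, I can run the usual checks and post the output; I was going to start with something like this (all standard commands; sda3 is the root partition on my box):

Code:
# is the partition still mounted somewhere?
mount | grep sda3
# does any process have it open?
fuser -vm /dev/sda3
# current state of the arrays
cat /proc/mdstat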
Do you mean this tutorial? http://www.howtoforge.com/software-raid1-grub-boot-debian-etch At which step exactly are you facing this problem?
Yeah, that's the tutorial I'm having trouble with. The part I'm talking about is in section 7, "Preparing /dev/sda". It's these steps in particular:

Code:
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3

The first two commands work flawlessly, and I can use the watch command to view the synchronization in progress, but when I run the sda3 command I get an error saying the device is busy or unavailable. Again, I've searched around Google and found people with the same problem; it seems to be a serious one. I've tried remounting and forcing the synchronization with the --force option.
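For reference, this is how I'm watching the synchronization, the same way the tutorial shows it:

Code:
watch cat /proc/mdstat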
Hi Falko, sorry to hijack an old thread, but I'm having a similar problem with this how-to. I have two partitions per disk: a 2 GB swap and a 150 GB+ ext3 partition. Out of these I'm trying to make md0 (sda1+sdb1) and md1 (sda2+sdb2), respectively. Following the how-to, I've managed to make a RAID1 md0 swap volume. Unfortunately, I get the following error when attempting to add sda2 to md1:

Code:
root@lime:~# mdadm --add /dev/md1 /dev/sda2
mdadm: Cannot open /dev/sda2: Device or resource busy

Here is the output of /proc/mdstat:

Code:
root@lime:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6]
md1 : active raid1 sdb2[1]
      154191744 blocks [2/1] [_U]

md0 : active raid1 sda1[0] sdb1[1]
      2096384 blocks [2/2] [UU]

And fdisk -l:

Code:
root@lime:~# fdisk -l

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         261     2096451   fd  Linux raid autodetect
/dev/sda2             262       19457   154191870   fd  Linux raid autodetect

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   fd  Linux raid autodetect
/dev/sdb2             262       19457   154191870   fd  Linux raid autodetect

Disk /dev/md0: 2146 MB, 2146697216 bytes
2 heads, 4 sectors/track, 524096 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 157.8 GB, 157892345856 bytes
2 heads, 4 sectors/track, 38547936 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Any help appreciated!
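If it helps, I'm happy to run further diagnostics; I was going to start with something like this (all standard commands; sda2 comes from my fdisk output above):

Code:
# is anything mounted on, or swapping to, sda2?
mount | grep sda2
swapon -s
# has device-mapper (e.g. dmraid) claimed the disk?
dmsetup ls
# does sda2 already carry an md superblock from an old array?
mdadm --examine /dev/sda2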
Hi, I'm having the same problem. After installing CentOS there's almost nothing I can do with my arrays; I always get the same "Device or resource busy" error. Please help.
Yes, it is, but I get the same error for /dev/md1 and /dev/md2. Booting from the CentOS DVD (by entering linux rescue) doesn't help either; everything stays the same.
I can't access the system at the moment, but /boot is mounted on /dev/md0, / is mounted on /dev/md1, and /tmp is mounted on /dev/md2.
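Once I'm back at the machine I can confirm the exact layout; it should just be a matter of running:

Code:
df -h
cat /proc/mdstat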
That's what I see on my screen. I sat at it all weekend and ran some tests with VMware Server, and it turned out that SELinux was the cause: I disabled it and the problem disappeared (rough steps at the end of this post). What a waste of time over that! The server at work is also working OK now. Meanwhile, we're waiting for a response from Intel about that fake RAID problem. I've got one more question: please advise what to do with swap. Should it be part of the RAID, or should it be on a swap partition of its own? Wouldn't the system crash if a disk failed, leaving part of the swap space unreachable? Right now swap is inside the RAID, which might increase I/O traffic a little.
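For anyone who hits the same thing, this is roughly what I did to disable SELinux on CentOS; the config path is the stock one, so adjust if yours differs:

Code:
# stop enforcing immediately (lasts until reboot)
setenforce 0
# make it permanent, assuming the default SELINUX=enforcing line
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# reboot so the system comes up with SELinux fully disabled
reboot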
Sorry, that system is gone now. It turned out the solution was disabling SELinux. Now everything is working fine and the machine will soon go online.