Hi,

Please help, I have the following output. Does this show two failed drives in a 2-drive mdadm RAID?

Code:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sdb1[1] sda1[2](F)
      20478912 blocks [2/1] [_U]

md2 : active raid1 sdb2[1] sda2[2](F)
      955754432 blocks [2/1] [_U]

unused devices: <none>
Thanks for the insight. As a quick fix, can I just remove sda from the array and disable RAID, is that correct?
No need to disable RAID. I assume sda is the one which failed; please make really sure that's the case in your setup before doing anything, else you will need to adjust the commands.

1. Check your RAID:

Code:
mdadm --detail /dev/md1
mdadm --detail /dev/md2

2. Remove the failed HDD from the array:

Code:
mdadm /dev/md1 -r /dev/sda1
mdadm /dev/md2 -r /dev/sda2

Just in case it won't let you remove the faulty drive, you can set it faulty first using:

Code:
mdadm --manage /dev/md1 --fail /dev/sda1
mdadm --manage /dev/md2 --fail /dev/sda2

3. Wait for the replacement of the faulty disk.

4. With the new HDD in place, begin the repair. If you are using GPT, use this to prepare the new drive:

Code:
# Copy GPT from sdb to sda
sgdisk -R /dev/sda /dev/sdb
# Create new UUIDs
sgdisk -G /dev/sda

If you are using MBR, use this to prepare the new drive:

Code:
# copy MBR from sdb to sda
sfdisk -d /dev/sdb | sfdisk /dev/sda
# reread partitions of sda
sfdisk -R /dev/sda

5. Let the new disk's partitions rejoin your RAID:

Code:
mdadm /dev/md1 -a /dev/sda1
mdadm /dev/md2 -a /dev/sda2

6. Don't forget to reconfigure your boot manager:

Code:
# if you booted into the system
grub-install /dev/sda

Don't hold me responsible if I made a mistake here or it doesn't work for you.
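In case it helps, here is a rough sketch of how I would double-check which drive actually failed before step 2, and how to watch the rebuild after step 5. It assumes the same device names (sda/sdb) and arrays (md1/md2) as in your output, and that smartmontools is installed for the serial-number check; adjust as needed.

Code:
# the (F) flag in /proc/mdstat and "faulty" in mdadm --detail mark the failed member
grep -A2 md1 /proc/mdstat
mdadm --detail /dev/md1 | grep -A5 'Number'

# note the serial number of the failed disk so you pull the right one physically
# (needs the smartmontools package; the by-id symlinks show the serial as well)
smartctl -i /dev/sda | grep -i serial
ls -l /dev/disk/by-id/ | grep sda

# after re-adding the partitions in step 5, watch the resync until it finishes
watch -n 5 cat /proc/mdstat
mdadm --detail /dev/md1 | grep -i 'rebuild\|state'

The arrays stay usable while they resync; just don't pull the remaining good drive before both md1 and md2 show [UU] again.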