Ciao HowToForge Staff, I am a RAID1 newbie. I have read all the guides I could find, but I still cannot figure out the first steps to take in my situation with 3 disks at Hetzner. The first 2 disks are in RAID1 (I think). Before setting up the whole server I really need to run a test: for example, degrade a disk and simulate replacing it and rebuilding the array. Is there any problem with running such experiments? The server is still completely unconfigured.

Code:
[root@server ~]# fdisk -l

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008f32d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         262     2102562   fd  Linux raid autodetect
/dev/sda2             263         295      265072+  fd  Linux raid autodetect
/dev/sda3             296      182401  1462766445   fd  Linux raid autodetect

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00064a51

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      182401  1465136001   83  Linux

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00058c00

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         262     2102562   fd  Linux raid autodetect
/dev/sdb2             263         295      265072+  fd  Linux raid autodetect
/dev/sdb3             296      182401  1462766445   fd  Linux raid autodetect

Disk /dev/md0: 2152 MB, 2152923136 bytes
2 heads, 4 sectors/track, 525616 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 271 MB, 271319040 bytes
2 heads, 4 sectors/track, 66240 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 1497.9 GB, 1497872728064 bytes
2 heads, 4 sectors/track, 365691584 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table
[root@server ~]#

Code:
[root@server ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        1.4T  809M  1.3T   1% /
/dev/md1        251M   40M  198M  17% /boot

Code:
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
      1462766336 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      264960 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      2102464 blocks [2/2] [UU]

unused devices: <none>

thx
GioMBG
This page tells you how to replace a disk in a RAID1 array: http://www.howtoforge.com/how-to-se...em-incl-grub2-configuration-debian-squeeze-p4
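For reference, the degrade-and-replace test cycle that the linked guide walks through can be sketched roughly as follows. This is a hedged sketch only: it assumes /dev/sdb1 is a member of /dev/md0 (as in the fdisk output above), and these commands modify live arrays, so run them only on a test system and adapt the device names first.

```shell
# Sketch of a RAID1 failure simulation, assuming /dev/sdb1 belongs to /dev/md0.
# WARNING: these commands change live arrays -- test systems only.

# 1. Mark the partition as failed and remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# 2. (Here you would physically swap the disk and copy the partition
#    table to the new disk, as the linked guide describes.)

# 3. Re-add the partition; the kernel then resyncs it from the good disk
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the rebuild progress
cat /proc/mdstat
```

The same fail/remove/add sequence applies per array (md0, md1, md2), one partition at a time.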
Recovering... SO: You are the number one! Hi Falko! BIG respect to you and HowToForge! I had been searching for exactly what you just showed me for 10 days! Recovering now:

Code:
[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[0]
      1462766336 blocks [2/1] [U_]
      [>....................]  recovery =  0.3% (5676736/1462766336) finish=477.4min speed=50865K/sec

md1 : active raid1 sdb2[1] sda2[0]
      264960 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      2102464 blocks [2/2] [UU]

unused devices: <none>

The only thing I still don't understand is this: exactly where do we read the degraded status?

really thanks, really respect
GioMBG
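For anyone else wondering where the degraded state shows up: it is in the /proc/mdstat output itself. `[2/1]` means 2 devices are expected but only 1 is active, and `[U_]` shows the per-slot status (each `U` is an up member, each `_` a failed or missing one); a healthy RAID1 reads `[2/2] [UU]`. Purely as an illustration, a minimal shell check on such a status line (the sample string is copied from the output above):

```shell
# Illustrative only: detect a degraded array from an mdstat status line.
# A healthy RAID1 shows "[2/2] [UU]"; a degraded one shows e.g. "[2/1] [U_]".
line='1462766336 blocks [2/1] [U_]'

case "$line" in
  *'_'*) echo "degraded" ;;   # prints "degraded" for the sample line
  *)     echo "healthy"  ;;
esac
```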
Failed RAID-1

My system was originally set up with two identical Seagate 1TB drives, partitioned as follows:

/dev/md0   /boot       RAID-1
/dev/md2   /           RAID-1
/dev/md3   /var/data   RAID-1

Here is the mdstat output that I got from a bootable BT4 CD, as I was not able to boot the actual system that was configured as RAID-1:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md3 : active raid1 sdb2[1]
      870040128 blocks [2/1] [_U]

md2 : active raid1 sdb3[1]
      102398208 blocks [2/1] [_U]

md0 : active raid1 sda1[0]
      104320 blocks [2/1] [U_]

unused devices: <none>

Does this mean that both drives have failed? At this point I do not care whether I rebuild or fix the RAID-1, but I would at least like to recover the data stored on md3. How do I proceed? Any help will be greatly appreciated. Thank you.
Kris
/dev/sda2, /dev/sda3, and /dev/sdb1 have failed. Can you try to rebuild the arrays as follows?

Code:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md2 --fail /dev/sda3
mdadm --manage /dev/md3 --fail /dev/sda2
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md2 --remove /dev/sda3
mdadm --manage /dev/md3 --remove /dev/sda2
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sda3
mdadm --zero-superblock /dev/sda2
mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md2 /dev/sda3
mdadm -a /dev/md3 /dev/sda2
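After re-adding the partitions, the resync progress appears in /proc/mdstat as a `recovery = …%` line, as shown earlier in this thread. Purely as an illustration of how to read it, here is one way to pull the percentage out of such a line with standard tools (the sample string mimics the format from the earlier output):

```shell
# Illustrative: extract the rebuild percentage from an mdstat recovery line.
line='[>....................]  recovery =  0.3% (5676736/1462766336) finish=477.4min speed=50865K/sec'

# sed captures the number between "recovery =" and "%"
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "$pct"    # prints "0.3"
```

On a live system you would feed it the real file, e.g. `grep recovery /proc/mdstat`, or simply watch the rebuild with `watch cat /proc/mdstat`.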
delete /dev/md0 on /dev/sdc

I see these 3 invalid partition table warnings along with /dev/sdc:

Code:
[root@server mail]# fdisk -l | grep /dev/sdc
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/md2 doesn't contain a valid partition table
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
/dev/sdc1               1      182401  1465136001   83  Linux

Can I delete it? (This is the third hard disk, meant for backups, and it is not currently used for RAID.)

Code:
mdadm --fail /dev/md0 /dev/sdc
mdadm --remove /dev/md0 /dev/sdc

thx
GioMBG
If I'm not mistaken: don't forget to install GRUB on the MBR of the new disk if you replaced sda!

Code:
grub-install --no-floppy --recheck /dev/sda

If your BIOS is set up to boot from the first disk only and you replace it with a new one, you must install GRUB on its MBR, since the MBR is outside the scope of the md devices.
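Building on that point, one common precaution (a sketch only; the device names assume the two-disk layout discussed in this thread, so adjust them before running) is to install GRUB on the MBR of every RAID1 member, so the machine can still boot from whichever disk survives a failure:

```shell
# Hedged sketch: put GRUB on the MBR of both RAID1 members so the BIOS
# can boot from either disk. Adjust device names to match your system.
for disk in /dev/sda /dev/sdb; do
    grub-install --no-floppy --recheck "$disk"
done
```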
Thanks Falko. I will try to rebuild my array today in accordance with your recommendation. I will let you know how I made out.