Resize RAID 1 Partitions (Software RAID) on CentOS

Discussion in 'Installation/Configuration' started by IPXVII, Nov 18, 2014.

  1. IPXVII

    IPXVII New Member

    Hey Guys,

    I have made a huge mistake with my server configuration and am now really in trouble. I'm running a WHM/cPanel server on CentOS; it's a production server with some websites on it.
    My root partition "/" is running out of space and I need to do something immediately.

    Resizing RAID partitions is a task above my abilities, so I have started to read everything about it. Unfortunately this is very complicated, so maybe you guys can help.

    At this point in my research & learning process I have found out what I need to do, but the how is very complicated.

    My server has two RAID 1 arrays:

    /dev/md2: 21,0GB
    /dev/md3: 1979GB

    Unfortunately md2 is the root partition, and to resize it the server would need to go into rescue mode, which in the end means longer downtime.

    So I decided to shrink md3 and create two new partitions for /var and /usr.
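    To pick sensible sizes for the new partitions, I guess I should first check how much /var and /usr currently use. Something like this, I think (du may take a while):

    Code:
    # check how much space /var and /usr currently use, to size the new arrays
    du -sh /var /usr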

    At this point, here are a few outputs from the server:

    Code:
    root@ns506372 [~]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    rootfs                 20G   17G  1,9G  91% /
    /dev/root              20G   17G  1,9G  91% /
    devtmpfs               32G  276K   32G   1% /dev
    /dev/md3              1,8T   76G  1,7T   5% /home
    tmpfs                  32G     0   32G   0% /dev/shm
    /dev/root              20G   17G  1,9G  91% /var/tmp
    /dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named
    /dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named.rfc1912.zones
    /dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/rndc.key
    /dev/root              20G   17G  1,9G  91% /var/named/chroot/usr/lib64/bind
    /dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named.iscdlv.key
    /dev/root              20G   17G  1,9G  91% /var/named/chroot/etc/named.root.key
    ftpback-XXXXXX.net:/export/ftpbackup/XXXXXXXX.net              500G  196G  305G  40% /backup
    
    Code:
    cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
    md2 : active raid1 sdb2[1] sda2[0]
          20478912 blocks [2/2] [UU]
          
    md3 : active raid1 sdb3[1] sda3[0]
          1932506048 blocks [2/2] [UU]
    
    Code:
    root@ns506372 [~]# mdadm --misc --detail /dev/md2
    /dev/md2:
            Version : 0.90
      Creation Time : Tue Sep 16 13:52:02 2014
         Raid Level : raid1
         Array Size : 20478912 (19.53 GiB 20.97 GB)
      Used Dev Size : 20478912 (19.53 GiB 20.97 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 2
        Persistence : Superblock is persistent
    
        Update Time : Tue Nov 18 13:58:11 2014
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               UUID : 94c54da7:d37eee63:a4d2adc2:26fd5302
             Events : 0.152
    
        Number   Major   Minor   RaidDevice State
           0       8        2        0      active sync   /dev/sda2
           1       8       18        1      active sync   /dev/sdb2
    

    Code:
    root@ns506372 [~]# mdadm --misc --detail /dev/md3
    /dev/md3:
            Version : 0.90
      Creation Time : Tue Sep 16 13:52:03 2014
         Raid Level : raid1
         Array Size : 1932506048 (1842.98 GiB 1978.89 GB)
      Used Dev Size : 1932506048 (1842.98 GiB 1978.89 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 3
        Persistence : Superblock is persistent
    
        Update Time : Tue Nov 18 13:58:19 2014
              State : clean 
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0
    
               UUID : 52bfd18b:4f959e4d:a4d2adc2:26fd5302
             Events : 0.830
    
        Number   Major   Minor   RaidDevice State
           0       8        3        0      active sync   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
    

    Code:
    root@ns506372 [~]# parted -l
    Model: ATA HGST HUS724020AL (scsi)
    Disk /dev/sda: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    
    Number  Start   End     Size    File system     Name     Flags
     1      20.5kB  1049kB  1029kB                  primary  bios_grub
     2      2097kB  21.0GB  21.0GB  ext3            primary  raid
     3      21.0GB  2000GB  1979GB  ext3            primary  raid
     4      2000GB  2000GB  536MB   linux-swap(v1)  primary
    
    
    Model: ATA HGST HUS724020AL (scsi)
    Disk /dev/sdb: 2000GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    
    Number  Start   End     Size    File system     Name     Flags
     1      20.5kB  1049kB  1029kB                  primary  bios_grub
     2      2097kB  21.0GB  21.0GB  ext3            primary  raid
     3      21.0GB  2000GB  1979GB  ext3            primary  raid
     4      2000GB  2000GB  536MB   linux-swap(v1)  primary
    
    
    Model: Unknown (unknown)
    Disk /dev/md2: 21.0GB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    
    Number  Start  End     Size    File system  Flags
     1      0.00B  21.0GB  21.0GB  ext3
    
    
    Model: Unknown (unknown)
    Disk /dev/md3: 1979GB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    
    Number  Start  End     Size    File system  Flags
     1      0.00B  1979GB  1979GB  ext3
    
    
    It's a GPT (GUID Partition Table) system.

    I have found this tutorial on how to shrink & grow a software RAID, but I'm not sure if it is going to work for me:
    http://www.howtoforge.com/how-to-resize-raid-partitions-shrink-and-grow-software-raid
    Besides, the tutorial is from 2008.

    Falko describes the usage of mdadm and resize2fs, but I don't get how mdadm and resize2fs work together.
    The other question is: if I use "mdadm --grow /dev/md3 --size=XXX", does it affect both disks? My rough understanding of the order is sketched below; please correct me if I got it wrong.
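    This is my rough understanding of the shrink procedure from the tutorial. NOT tested, the sizes are only examples, and the --size value is my own guess for roughly 1 TiB in KiB:

    Code:
    # 1. take the filesystem offline first (the sites using /home are down meanwhile)
    umount /home
    
    # 2. force a filesystem check before any resize
    e2fsck -f /dev/md3
    
    # 3. shrink the ext3 filesystem to something clearly SMALLER than the
    #    target array size, to be on the safe side
    resize2fs /dev/md3 900G
    
    # 4. shrink the RAID array itself; --size is in KiB and, as far as I understand,
    #    it sets how much of EACH member partition (sda3 and sdb3) is used,
    #    so it should apply to both disks
    mdadm --grow /dev/md3 --size=1073741824
    
    # 5. grow the filesystem again so it exactly fills the now smaller array
    resize2fs /dev/md3
    
    # 6. check once more and remount
    e2fsck -f /dev/md3
    mount /dev/md3 /home
    
    Is that roughly the right order, and is that --size value sane?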

    I'm trying to get this result (a very rough sketch of the second part follows after this list):
    md2 = 20GB = mounted on /
    md3 = 1TB = mounted on /home
    md4 = 500GB = mounted on /var
    md5 = 250GB = mounted on /usr
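    For the new /var and /usr arrays, this is only my guess at what comes after md3 has been shrunk. The partition numbers sda5/sda6 and sdb5/sdb6 are assumptions (partition 4 is already taken by swap), and the repartitioning step itself is the part I'm least sure about:

    Code:
    # 1. shrink partition 3 on BOTH disks so there is free space behind it, then
    #    create two new partitions with the raid flag in that space
    #    (least sure about this step: the 0.90 superblock sits near the end of
    #    the partition, doesn't it?)
    
    # 2. build the new RAID 1 arrays from the new partitions
    mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
    mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
    
    # 3. create filesystems and copy the data over while the services are stopped
    mkfs.ext3 /dev/md4
    mkfs.ext3 /dev/md5
    mkdir -p /mnt/newvar /mnt/newusr
    mount /dev/md4 /mnt/newvar
    mount /dev/md5 /mnt/newusr
    rsync -aHAX /var/ /mnt/newvar/
    rsync -aHAX /usr/ /mnt/newusr/
    
    # 4. update /etc/fstab and /etc/mdadm.conf, then reboot and check everything
    
    Does that look workable at all?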

    I'd be happy about any advice.
     
    Last edited: Nov 18, 2014