Question on raid 1 performance...

Discussion in 'Technical' started by itwillcome, Jun 30, 2009.

  1. itwillcome

    itwillcome New Member

    Dear all,

    Thanks to the crew for creating HowtoForge and ISPConfig. Great resources and software! :D

    Now to my question. :p

    I have an ISPConfig 2 server running on Fedora 6.
    As the RPM repositories for Fedora 6 are no longer updated, I'm planning to migrate the data to a new server.

    I first set up a CentOS 5.3 server on a single 320 GB SATA hard disk. After a successful installation, I plugged in a second 320 GB hard disk and converted the system to a software RAID 1 configuration following this guide:
    http://www.howtoforge.net/software-raid1-grub-boot-fedora-8

    The synchronization of the two hard disks took more than 30 hours to finish ('cat /proc/mdstat' showed a sync speed of around 3 MB/s).
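
    I gather the md resync speed is capped by two kernel tunables (values in KB/s); I haven't tried raising them yet, but on an otherwise idle box something like this should speed up the rebuild:

    Code:
    # current limits in KB/s
    cat /proc/sys/dev/raid/speed_limit_min
    cat /proc/sys/dev/raid/speed_limit_max
    # raise the floor to roughly 50 MB/s while the system is idle
    echo 50000 > /proc/sys/dev/raid/speed_limit_min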

    Then I connected the new server and the old server through a gigabit switch.
    The data on the old server was packed into several tar.gz files, each around 1 GB in size, and I put them in a web-accessible folder.
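
    For the record, one way to cut data into pieces of roughly that size looks like this (the path here is just an example, not my exact commands):

    Code:
    # pack the data and split it into ~1 GB chunks
    tar czf - /var/www | split -b 1000m - backup.tar.gz.
    # produces backup.tar.gz.aa, backup.tar.gz.ab, ...
    # restore later with: cat backup.tar.gz.* | tar xzf -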

    On the new server, I use wget to fetch the tar.gz files from the old server over HTTP.

    When I wget a 1 GB file, the download starts very fast for the first few hundred megabytes: the first 40% of the file transfers at about 300 Mb/s, but then the rate drops to around 300 Kb/s for the remaining 60% (as shown by the wget progress bar).

    300 Kb/s on an internal gigabit network does not seem acceptable. I don't know if it is
    the WRITE limit of my RAID 1 system. This makes me hesitate now to switch from the old server to the new one. :(
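
    To rule out the network, I suppose I could fetch the same file to /dev/null so that nothing is written to the array (the URL below is just a placeholder):

    Code:
    # if this stays fast for the whole file, the network is fine
    # and the slowdown is on the write side of the RAID 1 array
    wget -O /dev/null http://oldserver/backup/file1.tar.gz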

    From what I have read on the Internet, software RAID 1 costs about 10-20% in read/write performance.

    Is there any method to optimize RAID 1 performance? Thanks in advance.
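
    As a starting point, I suppose I could benchmark the array itself and check its read-ahead setting (/dev/md0 is an assumption; substitute the real device name):

    Code:
    # cached vs. buffered sequential read speed of the array
    hdparm -tT /dev/md0
    # read-ahead in 512-byte sectors; larger values often help
    # sequential reads on md devices
    blockdev --getra /dev/md0
    blockdev --setra 4096 /dev/md0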
     
  2. id10t

    id10t Member

    I put my home directory on raid 1 (w/ sata drives) a month or so ago...

    Here's write speed -

    Code:
    $ time dd if=/dev/zero of=afile bs=1024 count=1000000
    1000000+0 records in
    1000000+0 records out
    1024000000 bytes (1.0 GB) copied, 22.0697 s, 46.4 MB/s
    
    real	0m22.115s
    user	0m0.240s
    sys	0m6.316s
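
    Caveat: with no sync, some of that 1 GB may still be sitting in the page cache when dd exits. A variant that flushes to disk before reporting, if you want a cache-free number, would be something like (untested here):

    Code:
    $ dd if=/dev/zero of=afile bs=1M count=1000 conv=fdatasync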
    
    Here's reading from raid1 and writing to a plain ol' SATA drive -
    Code:
    
    $ time dd if=afile of=/oldhome/me/afile
    2000000+0 records in
    2000000+0 records out
    1024000000 bytes (1.0 GB) copied, 36.9429 s, 27.7 MB/s
    
    real	0m37.283s
    user	0m0.820s
    sys	0m11.637s
    
    
    And here's reading from raid1 and writing to ram (/dev/shm)

    Code:
    
    $ time dd if=afile of=/dev/shm/afile
    2000000+0 records in
    2000000+0 records out
    1024000000 bytes (1.0 GB) copied, 21.5813 s, 47.4 MB/s
    
    real	0m21.584s
    user	0m0.796s
    sys	0m6.576s
    
    
    So it looks like the slow spot in my system is the plain ol' SATA drive - reading from and writing to my raid1 is faster, by roughly 1.7x on these numbers (46-47 MB/s vs. 27.7 MB/s).
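
    One caveat on the read numbers: afile had just been written, so part of it may have come back from the page cache rather than the disks. Dropping caches first (as root) should give a colder figure:

    Code:
    # flush the page cache so the next read really hits the disks
    sync
    echo 3 > /proc/sys/vm/drop_caches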

    HTH
     
    Last edited: Jun 30, 2009
