> We have tested operation of MIDAS using a 10GigE network connection. Using a dummy frontend
> generating fake data, we can record MIDAS data to disk at rates of at least 700 Mbytes/sec as reported by
> the MIDAS status page.
>
> Details of the hardware:
>
> 1) the disk server machine CPU is 3.4GHz Intel i7-4770, mobo is ASUS Z87 WS (10 SATA, 2xGigE),
> RAM is 32GB DDR3-1600.
> 2) disk array is 8x4TB Seagate ST4000VN000-1H4168 NAS disks in a RAID0 (striped) configuration, raw
> data read/write rate is around 1 GByte/sec, disks are directly attached to mobo (no raid card), linux
> software raid.
>
These tests were done using a RAID0 array (striped), which is not suitable for production use.
For production use, RAID5 or RAID6 is recommended, but their default configuration has severely reduced performance (about 50% of
RAID0). This is because internally the raid driver issues disk read operations that compete against and severely slow down the disk write
requests. This is easy to see with "iostat -x 1": when writing to the raid array, there should be no reads from the disks. The following
changes are required to achieve maximum performance (a sketch of this check follows the two commands below):
echo 32000 > /sys/block/md6/md/stripe_cache_size   # increase internal memory buffers - because a "raid write" is always a
"read-modify-write", bigger buffers ensure that the reads are done from cache, not from the physical disk
mdadm --grow --bitmap=/md6bitmap /dev/md6   # use an external bitmap - if the bitmap is internal, there is a large number of disk
reads competing against the writes; an external bitmap seems to help quite a bit
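A minimal sketch of the iostat check mentioned above, assuming the array is /dev/md6 and is mounted at /data (the mount point and
the test file name are assumptions, not part of the original setup):

# confirm the external bitmap is in use (the grep pattern is an assumption about the mdadm --detail output)
mdadm --detail /dev/md6 | grep -i bitmap
# start a large streaming write to the array
dd if=/dev/zero of=/data/iostat-test.dat bs=1M count=10000 oflag=direct &
# in another terminal, watch per-disk activity once per second; with the tuning above,
# the read columns of the member disks (r/s, rkB/s in recent sysstat versions) should
# stay near zero while the write columns show the streaming load
iostat -x 1

Note that the stripe_cache_size setting in /sys does not survive a reboot, so it has to be re-applied from a boot script to make it
permanent.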
With these settings, my RAID6 array can read and write at about 700-900 Mbytes/sec - this is comparable to RAID0 (minus the 2 parity disks).
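For reference, a rough way to reproduce such raw read/write numbers with dd, again assuming the array is mounted at /data (paths and
sizes are assumptions; direct I/O bypasses the page cache, so the disks themselves are measured):

# raw write rate; dd prints the average transfer rate when it finishes
dd if=/dev/zero of=/data/speed-test.dat bs=1M count=20000 oflag=direct
# raw read rate of the same file
dd if=/data/speed-test.dat of=/dev/null bs=1M iflag=direct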
With this, I repeated the MIDAS performance tests (but without 10GigE): MIDAS can write 700 Mbytes/sec of fake data to a local
RAID6 data array. (The hardware configuration is listed above.)
K.O.