Disk benchmarks

From DaqWiki
Revision as of 21:14, 28 November 2015

Individual disks

  • WDC WD30EZRX-00MMMB0 - read rate: 130->??? M/s, ave ??? M/s, write rate: 130->50 M/s, ave 100 M/s
  • WDC WD40EZRX-00SPEB0 - read rate: 150->70 M/s, ave 120 M/s, write rate: 150->70 M/s
  • WDC WD4001FAEX-00MJRA0 - read rate: 150->80 M/s, ave 130 M/s
  • WDC WD4003FZEX-00Z4SA0 - read rate: 180->100 M/s, ave 150 M/s
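The per-disk rates above come from long sequential transfers; a minimal dd sketch of how such a number is taken (TARGET and ./bench.dat are placeholders, not part of the original measurements):

```shell
# sequential read benchmark sketch; point TARGET at a raw device
# (e.g. /dev/sdb) or a large file on the filesystem under test
TARGET=${TARGET:-./bench.dat}
# create a 100 MB scratch file when no real target is supplied
[ -e "$TARGET" ] || dd if=/dev/zero of="$TARGET" bs=1M count=100 2>/dev/null
# as root, drop the page cache first so reads come from the disk, not RAM
[ -w /proc/sys/vm/drop_caches ] && sync && echo 3 > /proc/sys/vm/drop_caches
# dd reports the average transfer rate when it completes
dd if="$TARGET" of=/dev/null bs=1M
```

The initial rate reflects the outer (fastest) tracks; the "150->70"-style figures come from letting the transfer run across the whole disk.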

RAID interfaces

Marvell 88SE9485 (PCIe x8 to 8 SAS/SATA 6Gb/s Ports I/O Controller)

lspci: LnkSta: Speed 5GT/s, Width x8

  • per-port speed: full disk speed on each port
  • total read speed vs number of active disks, using 4TB disks:
    • 1 - 150 M/s - 1x
    • 2 - 273 M/s - 1.8x
    • 3 - 430 M/s - 2.8x
    • 4 - 544 M/s - 3.62x
    • 5 - 573 M/s - 3.82x
    • 6 - 593 M/s - 3.95x
    • 7 - 623 M/s - 4.15x
    • 8 - 640 M/s - 4.2x
  • conclusion: 88SE9485 saturates at 4 disks (correct PCIe config confirmed: x8 at 5 GT/s). Adding 4 more disks does not increase the data rate.
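A back-of-envelope check (assuming the standard 8b/10b figure of ~500 MB/s per lane at 5 GT/s) confirms the PCIe link is not the bottleneck here:

```shell
# PCIe gen2 (5 GT/s) with 8b/10b encoding carries ~500 MB/s per lane,
# so an x8 link is good for ~4000 MB/s -- far above the observed
# ~640 M/s plateau, pointing at the controller itself as the limit
lanes=8; per_lane_mb=500
echo "x8 link capacity: $(( lanes * per_lane_mb )) MB/s"
```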

Marvell 88SE9230 (4 SATA)

lspci:

  • LnkSta: Speed 5GT/s, Width x2 (on Z87 mobo)
  • LnkSta: Speed 5GT/s, Width x2 (plugin board)
  • per-port speed: full disk speed on each port
  • total read speed vs number of active disks, using 4TB disks:
    • 1 - 150 - 1x
    • 2 - 300 - 2x
    • 3 - 460 - 3x
    • 4 - 620 - 4.1x
    • 5 - 770 - 5.1x
    • 6 - 913 - 6.1x
    • 7 - 1100 - 7.3x (WD4003FZEX)
    • 8 - 1240 - 8.3x (WD4001FAEX)
  • conclusion: the dual-88SE9230 configuration does not saturate with up to 8 disks.
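The same link arithmetic (again assuming ~500 MB/s per lane at 5 GT/s with 8b/10b) shows why: each x2 controller has headroom, and the load is split across two of them:

```shell
# each 88SE9230 sits on an x2 link at 5 GT/s (~500 MB/s per lane),
# and there are two controllers in this configuration
per_ctrl_mb=$(( 2 * 500 ))
total_mb=$(( 2 * per_ctrl_mb ))
echo "per controller: $per_ctrl_mb MB/s, total: $total_mb MB/s"
```

The measured 8-disk aggregate of 1240 M/s works out to ~620 M/s per controller, comfortably under the ~1000 MB/s per-controller link capacity.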

RAID arrays

  • 4xWD40EZRX RAID5 Z87 88SE9230, 88SE9485: raw: read: 420 M/s, write: ~400 M/s. xfs: read/write: 400->300 M/s (10 min avg)
  • 6xWD40EZRX RAID6 Z87 1x88SE9485: raw: .... xfs: write /dev/zero 600->300 M/s, 443 M/s ave across 17 TB, xfs read: 550->300 M/s, 440 M/s ave across 17 TB, stripe_cache_size 32k.
  • 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 1x88SE9485: raw: read: 673 M/s ave first 30 sec, write: ???, xfs: write /dev/zero 800->??? M/s, ??? M/s ave across ??? TB, xfs read: 530-620->??? M/s, ??? M/s ave across ??? TB, stripe_cache_size 32k.
  • 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 2x88SE9230: raw: read: 430 M/s ave first 30 sec, write: ???, xfs: write /dev/zero 860->400 M/s, 660 M/s ave across 24 TB, xfs read: ~500->~500 M/s ave across 20 TB, stripe_cache_size 32k.
  • 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID0 Z87 2x88SE9230: raw: read: 1150 M/s ave first 30 sec (80% busy), write: 1150 M/s ave first 30 sec (100% busy), xfs: write /dev/zero ???->??? M/s, ??? M/s ave across ?? TB, xfs read: ???->??? M/s ave across ?? TB
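The stripe cache size quoted above is the md-layer stripe cache, set per array via sysfs; a sketch, with md0 as a placeholder array name (the value counts 4 KiB entries, so memory use is value × 4 KiB × number of member disks):

```shell
# raise the md stripe cache to 32768 entries for array md0 (needs root);
# each entry is one 4 KiB page per member disk
echo 32768 > /sys/block/md0/md/stripe_cache_size
cat /sys/block/md0/md/stripe_cache_size
```

The setting does not persist across reboots, so it is usually reapplied from a boot script.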

Note: assuming 150 M/s disks, max theoretical write speeds are:

  • 4 disk raid5 -> 3*150 -> 450 M/s
  • 6 disk raid6 -> 4*150 -> 600 M/s
  • 8 disk raid6 -> 6*150 -> 900 M/s
  • 8 disk raid0 -> 8*150 -> 1200 M/s
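The note above as arithmetic: the usable write rate is (total disks − parity disks) × per-disk rate, with 150 M/s assumed per disk:

```shell
# max theoretical sequential write per array type:
# fields are: total disks, parity disks, level
for spec in "4 1 raid5" "6 2 raid6" "8 2 raid6" "8 0 raid0"; do
  set -- $spec
  echo "$1 disk $3 -> $(( ($1 - $2) * 150 )) M/s"
done
```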