Disk benchmarks
Latest revision as of 12:17, 7 January 2022

= Individual disks =

* WDC WD30EZRX-00MMMB0 - read rate: 130->50 M/s, ave 100 M/s; write rate: 130->50 M/s, ave 100 M/s
* WDC WD40EZRX-00SPEB0 - read rate: 150->70 M/s, ave 120 M/s; write rate: 150->70 M/s
* WDC WD4001FAEX-00MJRA0 - read rate: 150->80 M/s, ave 130 M/s
* WDC WD4003FZEX-00Z4SA0 - read rate: 180->100 M/s, ave 150 M/s
* WDC WD80PUZX - read: 180->90 M/s, write: 180->100 M/s
* ST8000VN0002 - read: 240->110 M/s, write: 220->110 M/s

= Disk interfaces =

== Marvell 88SE9485 (PCIe x8 to 8 SAS/SATA 6Gb/s Ports I/O Controller) ==

lspci: LnkSta: Speed 5GT/s, Width x8

* per-port speed: full disk speed on each port
* total read speed vs number of active disks, using 4TB disks:
** 1 - 150 M/s - 1x
** 2 - 273 M/s - 1.8x
** 3 - 430 M/s - 2.8x
** 4 - 544 M/s - 3.62x
** 5 - 573 M/s - 3.82x
** 6 - 593 M/s - 3.95x
** 7 - 623 M/s - 4.15x
** 8 - 640 M/s - 4.2x
* conclusion: the 88SE9485 saturates at 4 disks (correct PCIe config confirmed: x8 at 5GT/s); adding 4 more disks does not increase the data rate.

== Marvell 88SE9230 (4 SATA) ==

lspci:
* LnkSta: Speed 5GT/s, Width x2 (on Z87 mobo)
* LnkSta: Speed 5GT/s, Width x2 (plugin board)

* per-port speed: full disk speed on each port
* total read speed vs number of active disks, using 4TB disks:
** 1 - 150 M/s - 1x
** 2 - 300 M/s - 2x
** 3 - 460 M/s - 3x
** 4 - 620 M/s - 4.1x
** 5 - 770 M/s - 5.1x
** 6 - 913 M/s - 6.1x
** 7 - 1100 M/s - 7.3x (WD4003FZEX)
** 8 - 1240 M/s - 8.3x (WD4001FAEX)
* conclusion: the dual-88SE9230 configuration does not saturate with up to 8 disks.
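The two saturation conclusions can be cross-checked against the PCIe link budget. This is a sketch under the usual assumption that a PCIe gen2 lane (5 GT/s with 8b/10b encoding) carries roughly 500 Mbytes/s of payload:

```shell
# PCIe gen2: 5 GT/s with 8b/10b encoding ~= 500 Mbytes/s of payload per lane.
per_lane=500
echo "88SE9485 x8 link:    $(( 8 * per_lane )) M/s"      # well above the ~640 M/s measured,
                                                         # so the controller, not the link, is the limit
echo "88SE9230 x2 link:    $(( 2 * per_lane )) M/s"      # per controller
echo "dual 88SE9230 links: $(( 2 * 2 * per_lane )) M/s"  # above the 1240 M/s measured with 8 disks
```

This is consistent with the measurements: the x8 link is nowhere near full when the 88SE9485 flattens out at ~640 M/s, while the dual x2 links still have headroom at 1240 M/s.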

= mdadm raid + xfs =

<pre>
dd if=/data/file of=/dev/null bs=1024000k
dd if=/dev/md6 of=/dev/null bs=1024000k
</pre>
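A per-run rate like the numbers below can be obtained by timing one dd transfer; a minimal sketch, where the source, destination, and size are placeholders rather than the actual arrays:

```shell
# Minimal dd timing sketch. src/dst/mb are placeholders; on a real array,
# dst would be a file on the mounted filesystem (or /dev/null for reads).
src=/dev/zero
dst=/tmp/ddbench.bin
mb=64
t0=$(date +%s%N)                        # GNU date, nanoseconds
dd if="$src" of="$dst" bs=1M count="$mb" conv=fdatasync status=none
t1=$(date +%s%N)
ms=$(( (t1 - t0) / 1000000 ))
[ "$ms" -lt 1 ] && ms=1                 # guard against sub-millisecond runs
echo "wrote $mb MB in $ms ms: $(( mb * 1000 / ms )) M/s"
rm -f "$dst"
```

`conv=fdatasync` makes dd flush before reporting, so the write number is not just the page cache filling up.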
* 4xWD40EZRX RAID5 Z87 88SE9230, 88SE9485: raw: read: 420 M/s, write: ~400 M/s. xfs: read/write: 400->300 M/s (10min avg)
* 6xWD40EZRX RAID6 Z87 1x88SE9485: raw: .... xfs: write /dev/zero 600->300 M/s, 443 M/s ave across 17TB, xfs read: 550->300 M/s, 440 M/s ave across 17TB, stripe_cache_size 32k.
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 1x88SE9485: raw: read: 673 M/s ave first 30 sec, write: ???, xfs: write /dev/zero 800->??? M/s, ??? M/s ave across ??? TB, xfs read: 530-620->??? M/s, ??? M/s ave across ??? TB, stripe_cache_size 32k
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 2x88SE9230: raw: read: 430 M/s ave first 30 sec, write: ???, xfs: write /dev/zero 860->400 M/s, 660 M/s ave across 24 TB, xfs read: ~500->~500 M/s ave across 20TB, stripe_cache_size 32k, chunk 512
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 2x88SE9230: raw: read: 740 M/s ave first 30 sec (50% busy), write: 515 M/s, xfs: write /dev/zero 600->??? M/s, ??? M/s ave across ?? TB, xfs read: 830(40% busy)->??? M/s ave across ??TB, stripe_cache_size 32k, chunk 256, internal bitmap, during resync - all wrong
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 2x88SE9230: raw: read: 820 M/s ave first 30 sec (60% busy), write: 860 M/s, xfs: write /dev/zero 860(100% busy)->425 M/s, 660 M/s ave across 24 TB, xfs read: 830(50% busy)->420 M/s, 660 M/s ave across 24TB, stripe_cache_size 32k, chunk 128kiB, mkfs.xfs -d su=131072,sw=6
* w=830;r=840,rq256/384 - sw=12
* w=870,rq256/1300;r=840,rq256/384 - sw=24
* w=870,rq256/1300;r=840,rq256/384 - su=262144,sw=24
* w=870,rq256/1300,r=660,rq256/384 - ext4
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 1x88SE9485: raw: read: 840 M/s ave first 30 sec (60% busy), write: ??? M/s, xfs: write /dev/zero ???(???% busy)->??? M/s, ??? M/s ave across 24 TB, xfs read: 840(60% busy)->??? M/s, ??? M/s ave across 24TB, stripe_cache_size 32k, chunk 128kiB, mkfs.xfs -d su=131072,sw=6
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID0 Z87 2x88SE9230: raw: read: 1130 M/s ave first 30 sec (80% busy), write: 1200 M/s ave first 30 sec (100% busy), xfs: write /dev/zero 1150->??? M/s, ??? M/s ave across ?? TB, xfs read: 1150(70% busy)->??? M/s ave across ??TB, chunk 256kiB
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID0 Z87 2x88SE9230: raw: read: 1150 M/s ave first 30 sec (80% busy), write: 1150 M/s ave first 30 sec (100% busy), xfs: write /dev/zero 1150->??? M/s, ??? M/s ave across ?? TB, xfs read: 1150->??? M/s ave across ??TB, chunk 512kiB
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID0 Z87 2x88SE9230: raw: read: 750 M/s ave first 30 sec (40% busy), write: 1200 M/s ave first 30 sec (100% busy), xfs: write /dev/zero 1200->??? M/s, ??? M/s ave across ?? TB, xfs read: 750(50% busy)->??? M/s ave across ??TB, chunk 1024kiB
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID0 Z87 2x88SE9230: raw: read: 750 M/s ave first 30 sec (20% busy), write: 1200 M/s ave first 30 sec (100% busy), xfs: write /dev/zero 1200->??? M/s, ??? M/s ave across ?? TB, xfs read: 800(25% busy)->??? M/s ave across ??TB, chunk 2048kiB
* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID0 Z87 2x88SE9230: raw: read: 150 M/s ave first 30 sec (10% busy), write: 1200 M/s ave first 30 sec (100% busy), xfs: write /dev/zero 1200->??? M/s, ??? M/s ave across ?? TB, xfs read: 160(10% busy)->??? M/s ave across ??TB, chunk 8192kiB
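The su/sw values in the mkfs.xfs lines above follow directly from the md geometry: su is the chunk size in bytes, sw the number of data disks (total minus parity). A quick check for the 8-disk RAID6 with 128 KiB chunks:

```shell
# su = chunk size in bytes, sw = data disks = total disks - parity disks.
chunk_kib=128
disks=8
parity=2          # RAID6
su=$(( chunk_kib * 1024 ))
sw=$(( disks - parity ))
echo "mkfs.xfs -d su=$su,sw=$sw"   # -> mkfs.xfs -d su=131072,sw=6
```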

= btrfs =

<pre>
dd if=/dev/zero of=/data/file bs=1024000k
dd if=/data/file of=/dev/null bs=1024000k
</pre>

* 8 (6xWD40EZRX+WD4001FAEX+WDC WD4003FZEX) RAID6 Z87 1x88SE9485: btrfs -m=raid1 -d=raid6: write vmstat 1000 M/s, dd 820 M/s (100% busy), read: vmstat 840 M/s, dd 820 M/s (60% busy)
* same array, 2x88SE9230: read: vmstat 800 M/s, dd 800 M/s (60% busy)
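The "-m=raid1 -d=raid6" profile above means mirrored metadata with raid6-striped data. A filesystem like this would be created with something along these lines (a sketch only; the device list and mount point are assumptions, not the machine's actual layout):

```shell
# Sketch: raid1 (mirrored) metadata, raid6 data across 8 whole disks.
# /dev/sd[b-i] and /data are placeholders for the real devices/mount point.
mkfs.btrfs -m raid1 -d raid6 /dev/sd[b-i]
mount /dev/sdb /data
```

Unlike the mdadm setups above, the redundancy here is handled by btrfs itself, so there is no /dev/mdX device to benchmark raw.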

= ZFS =

* mirror of 2x8TB (WD80PUZX and ST8000VN0002): write: 175->100 M/s, read: ..., scrub: ...

= max speeds =

Assuming 150 M/s disks, max theoretical write speeds are:

* 4 disk raid5 -> 3*150 -> 450 M/s
* 6 disk raid6 -> 4*150 -> 600 M/s
* 8 disk raid6 -> 6*150 -> 900 M/s
* 8 disk raid0 -> 8*150 -> 1200 M/s
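The same arithmetic as a shell check, using the ceiling of (data disks) x (per-disk rate):

```shell
# Theoretical sequential ceiling: data disks * per-disk rate.
# RAID5 gives 1 disk to parity, RAID6 gives 2, RAID0 gives none.
rate=150
echo "4-disk raid5: $(( (4 - 1) * rate )) M/s"
echo "6-disk raid6: $(( (6 - 2) * rate )) M/s"
echo "8-disk raid6: $(( (8 - 2) * rate )) M/s"
echo "8-disk raid0: $(( 8 * rate )) M/s"
```

The measured 8-disk RAID6 writes (~860 M/s peak) come close to the 900 M/s ceiling, while the RAID0 runs hit the 1200 M/s ceiling exactly.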