Midas DAQ System
Message ID: 1014     Entry time: 11 Jul 2014     Reply to this: 1019
Author: Konstantin Olchanski 
Topic: Info 
Subject: MIDAS high speed test 
We have tested operation of MIDAS using a 10GigE network connection. Using a dummy frontend 
generating fake data, we can record MIDAS data to disk at a rate of at least 700 Mbytes/sec, as 
reported by the MIDAS status page.
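
For reference, the heart of such a dummy frontend is just an event readout function that fills a 
bank with fake data. A minimal sketch, assuming the standard MIDAS mfe frontend framework 
(equipment definition and init code omitted; the bank name "FAKE" and the event size are 
illustrative, not the values used in this test):

#include "midas.h"

/* Dummy event readout: fill one 32-bit bank with fake data.
   Called by the mfe framework for a periodic/polled equipment. */
INT read_fake_event(char *pevent, INT off)
{
   DWORD *pdata;
   int i;

   bk_init32(pevent);                        /* 32-bit bank format */
   bk_create(pevent, "FAKE", TID_DWORD, (void **)&pdata);
   for (i = 0; i < 100000; i++)              /* ~400 kbytes of fake data */
      *pdata++ = i;
   bk_close(pevent, pdata);
   return bk_size(pevent);                   /* event data size in bytes */
}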

Two configurations were tested; both sustained at least 700 Mbytes/sec:

1) MIDAS mhttpd, mserver, mlogger running on the disk server machine (mlogger writes to local 
disk), frontend running on remote machine (10GigE mserver connection).
2) MIDAS mhttpd, mserver, mlogger, frontend running on remote machine (mlogger writes data to 
an NFS-mounted disk over a 10GigE connection).

In addition, for configuration (2), I simulated online analysis reading fresh midas files at the same 
time as MIDAS writes new data. The observation is that Linux seems to give priority to disk write 
traffic (700 Mbytes/sec), with the remaining disk bandwidth going to read traffic 
(50-100 Mbytes/sec). In other words, when running online data analysis on fresh data files, mlogger 
continues to run at full speed (analysis does not slow down data taking).
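
The simulated analysis was essentially a reader following the growing file. A minimal sketch of 
such a reader, assuming sequential access (the file name is illustrative and event decoding is 
omitted):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
   FILE *f = fopen("run00001.mid", "rb");    /* illustrative file name */
   char *buf = malloc(1024 * 1024);
   size_t n;

   if (!f || !buf)
      return 1;

   for (;;) {
      n = fread(buf, 1, 1024 * 1024, f);
      if (n == 0) {
         clearerr(f);    /* clear EOF: the writer may append more data */
         sleep(1);
         continue;
      }
      /* ... decode and analyze midas events from buf here ... */
   }
}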

A few problems with MIDAS were observed during this test:

a) mlogger data compression using gzip-1 has to be turned off (it limits the data rate to about 
200 Mbytes/sec). We plan to implement high speed LZO/LZ4 data compression that we expect to 
keep up with a 10GigE network interface (a rough benchmark sketch follows this list).
b) CPU use by mserver and mlogger is rather high (about 40% CPU).
c) when writing to the NFS disk, mlogger pauses for 1-2 seconds when closing and reopening 
subrun data files. To avoid an interruption in data taking, the SYSTEM event buffer has to be big 
enough to ride through this pause, but stock MIDAS limits the maximum event buffer size to 1GB 
(too small). This can easily be increased to 2GB (almost big enough), and with some more work to 
4GB, but no further, because the buffer length is a 32-bit integer (see the arithmetic sketch after 
this list).
d) when writing to the NFS disk, we also see periodic 3-5 second interruptions ("write operation took 
5123 ms"), and we had one death of mlogger from a 60 sec timeout (a write-timing sketch follows 
this list).

Details of the hardware:

1) the disk server machine CPU is 3.4GHz Intel i7-4770, mobo is ASUS Z87 WS (10 SATA, 2xGigE), 
RAM is 32GB DDR3-1600.
2) disk array is 8x4TB Seagate ST4000VN000-1H4168 NAS disks in a RAID0 (striped) configuration; 
raw read/write rate is around 1 GByte/sec; disks are attached directly to the mobo (no RAID card), 
using Linux software RAID.
3) the frontend machine CPU is 3.7GHz Intel i7-4820, mobo is ASUS P9X79 WS, RAM is 32GB DDR3-
1600.
4) 10GigE network is Solarflare Communications SFC9120 (both machines) with a cross-over fiber 
cable (direct connection, no switches)
5) OS is up-to-date SL6.5 (both machines)

K.O.