Message ID: 817     Entry time: 26 Jun 2012     In reply to: 816
Author: Konstantin Olchanski 
Topic: Info 
Subject: midas vme benchmarks 
> > > > I am recording here the results from a test VME system using four VF48 
waveform digitizers

This is the last message in this series. After all the tuning, I reduced the 
trigger rate from 120 Hz to 100 Hz to see what happens when the backend computer 
is not overloaded and has some spare capacity.

event rate: 100 Hz (down from 120 Hz)
data rate: 37 Mbytes/sec (down from 50 Mbytes/sec)
mlogger cpu use: 65% (down from 99%)
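
As a quick cross-check (simple arithmetic on the numbers above, not an 
independent measurement): the data rate divided by the event rate gives the 
average event size, and both operating points come out at roughly 400 kbytes 
per event:

   37 Mbytes/sec / 100 Hz ~= 370 kbytes/event
   50 Mbytes/sec / 120 Hz ~= 420 kbytes/event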

Attached:

1) trigger rate event plot: the rate is now a solid 100 Hz without dropouts
2) CPU and network plots from ganglia: the spikes are lazylogger saving mid.gz 
files to HDFS storage
3) time structure plots (a cross-check on these numbers follows this list):
a) trigger latency: mean 5 us, mostly below 10 us; 59 events (0.046%) were 
longer than 100 us; the longest latency observed is 7000 us (bottom left graph)
b) readout time: 7000-8000 us (same as before - the VME data rate is independent 
of the trigger rate)
c) busy time: mean 7.2 ms, 12 events (0.0094%) longer than 10 ms; the longest 
busy time observed is 17 ms (bottom middle graph)
d) time between events: 10 ms (100 Hz pulser trigger); a single event was missed 
about 10 times (spike at 20 ms, 0.0085%); more than one consecutive event was 
never missed (no spikes at 30 ms, 40 ms, etc.)
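
Cross-checking the quoted fractions (my arithmetic, not an independent count): 
59 events at 0.046% and 12 events at 0.0094% both imply a total of roughly 
128000 events, about 21 minutes of running at 100 Hz; 10 misses at 0.0085% 
gives a similar total (~118000 events), so all three numbers appear to come 
from the same run.

For item (d), the counting amounts to binning the gaps between consecutive 
trigger timestamps in units of the 10 ms pulser period: a gap of ~10 ms is 
normal, ~20 ms means one missed event, and so on. A minimal sketch of this 
(my illustration, not the actual analysis code behind these plots; timestamps 
in seconds and the example data are assumptions):

#include <cstdio>
#include <cmath>
#include <vector>

int main()
{
   const double period = 0.010; // 10 ms pulser period (100 Hz)
   // hypothetical example data; real timestamps would come from the run
   std::vector<double> t = { 0.000, 0.010, 0.020, 0.040, 0.050 };
   int count[5] = { 0, 0, 0, 0, 0 }; // count[n-1] = number of gaps of n*10 ms
   for (size_t i = 1; i < t.size(); i++) {
      // round each gap to the nearest multiple of the pulser period
      int n = (int)std::lround((t[i] - t[i-1]) / period);
      if (n >= 1 && n <= 5)
         count[n-1]++; // n == 2 is a 20 ms gap, i.e. one missed event
   }
   for (int n = 1; n <= 5; n++)
      printf("gaps of %2d ms: %d\n", n*10, count[n-1]);
   return 0;
}

With the example timestamps this prints one 20 ms gap, which is exactly what 
the spike at 20 ms in the plot counts.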


CPU use on the backend computer:

top - 16:30:59 up 75 days, 35 min,  6 users,  load average: 0.98, 0.99, 1.01
Tasks: 206 total,   3 running, 203 sleeping,   0 stopped,   0 zombie
Cpu(s): 39.3%us,  8.2%sy,  0.0%ni, 39.4%id,  5.7%wa,  0.3%hi,  7.2%si,  0.0%st
Mem:   3925556k total,  3404192k used,   521364k free,     8792k buffers
Swap: 32766900k total,   296304k used, 32470596k free,  2477268k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 5826 trinat    20   0  441m 292m 287m R 65.8  7.6   2215:16 mlogger            
26756 trinat    20   0  310m 288m 288m S 16.8  7.5  34:32.03 mserver            
29005 olchansk  20   0  206m  39m  17m R 14.7  1.0  26:19.42 ana_vf48.exe       
 7878 olchansk  20   0   99m 3988  740 S  7.7  0.1  27:06.34 sshd               
29012 trinat    20   0  314m 288m 288m S  2.8  7.5   4:22.14 mserver            
23317 root      20   0     0    0    0 S  1.4  0.0  24:21.52 flush-9:3     


K.O.
Attachment 1: Scalers.gif (5 kB)
Attachment 2: ladd02-cpu.png (12 kB)
Attachment 3: ladd02-net.png (12 kB)
Attachment 4: canvas-1000-100Hz.pdf (22 kB)