ID | Date | Author | Topic | Subject
1113 | 16 Sep 2015 | Konstantin Olchanski | Info | midas wiki upgraded
The midas wiki at https://midas.triumf.ca has been upgraded to mediawiki version 1.25.2 (the current
production version). If you see any problems, please report them on this forum. K.O.
1544 | 07 Jun 2019 | Konstantin Olchanski | Forum | midas wiki updated to mediawiki 1.27.7
The midas wiki was updated to 1.27.7, the latest (and possibly last) security point release of the 1.27 LTS series.
MediaWiki series 1.27 is now officially EOL, see
https://lists.wikimedia.org/pipermail/mediawiki-announce/2019-June/000231.html
They recommend that all users upgrade to the current LTS series 1.31.
For us this means moving the wiki from the present el6 (SL6) computer to
a more up-to-date platform (el8 or Ubuntu LTS 18.04).
K.O.
1538 | 03 Jun 2019 | Konstantin Olchanski | Forum | midas wiki updated to mediawiki 1.27.5
The midas wiki was updated to the latest LTS point release, 1.27.5.
Also, an installation error was fixed that had prevented confirmation of new accounts (git checkout of
REL1_28 instead of REL1_27, resulting in a version mismatch).
Support for the MediaWiki LTS release series 1.27 ends this summer.
The next LTS release series is 1.31, see https://en.wikipedia.org/wiki/MediaWiki_version_history
That version requires PHP 7 or newer, which comes standard with Ubuntu LTS 18.04
and el8 (RHEL8), but not with el6 (SL6) or el7 (CentOS-7).
I guess we shall start planning this upgrade and the move of the wiki to a new host machine.
K.O.
1222 | 01 Dec 2016 | Konstantin Olchanski | Info | midas wiki updated to mediawiki 1.27.1
The midas wiki at https://midas.triumf.ca/MidasWiki/index.php/Main_Page
was updated to MediaWiki version 1.27.1, the current MediaWiki LTS release.
Everything should work as before, but if you see any problems or anomalies, please report
them on this forum.
K.O.
982 | 14 Mar 2014 | Konstantin Olchanski | Info | midas wiki updated to mediawiki 1.22.4
The midas wiki at https://midas.triumf.ca was updated to mediawiki 1.22.4 - the latest production version.
If you see any problems, please report them to this elog. K.O.
1142 | 20 Nov 2015 | Konstantin Olchanski | Info | midas wiki doxygen documentation links
I updated the links on the midas wiki to the doxygen-generated documentation for MIDAS that you
get after running "git clone midas; cd midas; make dox; firefox html/index.html".
The correct link is:
https://daq.triumf.ca/~daqweb/doc/midas-devel/html/
It takes you to a daily/nightly generated snapshot of the midas develop branch and the
generated documentation with full call graphs.
The previous links were deficient in different ways:
- they referred to http://ladd00 instead of https://daq
- they referred to the wrong path ~daqweb/doc/midas instead of ~daqweb/doc/midas-devel
- they referred to the obsolete doxygen generator output in midas/doc/html instead of midas/html
If wrong links are still present on the midas wiki, please let us know and we will fix them.
K.O.
2583 | 16 Aug 2023 | Konstantin Olchanski | Bug Report | midas wants to show notification?
I started to get web browser popups about "midas wants to show notifications,
block/allow/x". Is this a glitch or a new unannounced/undocumented feature?
Google Chrome on macOS. K.O.
2584 | 16 Aug 2023 | Stefan Ritt | Bug Report | midas wants to show notification?
> I started to get web browser popups about "midas wants to show notifications,
> block/allow/x". Is this a glitch or a new unannounced/undocumented feature?
> Google Chrome on macOS. K.O.
https://bitbucket.org/tmidas/midas/commits/e101dea764c647211c560a68db7ecda1834198db
I did not consider this a significant enough feature to announce here. It is just a few lines
of code. You can turn it on/off via the "Config" web page.
Stefan
2585 | 16 Aug 2023 | Stefan Ritt | Bug Report | midas wants to show notification?
> I did not consider this a significant enough feature to announce here. It is just a few lines
> of code. You can turn it on/off via the "Config" web page.
Now as I look at it again, I realize that the config checkboxes had a bug. I fixed that,
and disabling should now work correctly.
This feature was requested by some people who monitor an experiment with the browser window
in the background and the sound off (large office). For them, desktop notifications are a good
thing.
Stefan
2586 | 16 Aug 2023 | Konstantin Olchanski | Bug Report | midas wants to show notification?
> This feature was requested by some people ...
"Show notifications" popups are strongly associated with disreputable web sites (presumably
to push spam), so it was surprising to see one from midas.
K.O.
2589 | 17 Aug 2023 | Stefan Ritt | Bug Report | midas wants to show notification?
> "Show notifications" popups are strongly associated with disreputable web sites (presumably
> to push spam), so it was surprising to see one from midas.
I agree. But unlike email (where you get lots of spam as well), you can nicely blacklist/whitelist
desktop notifications. I suppress all of them except the ones for MIDAS. This lets me watch our
experiment without staring at the web page all the time.
The main question here is maybe whether desktop notifications should be on or off by default (for a
fresh browser). While you can always change that via the mhttpd "Config" page, the default value is
chosen by the system. I thought I would set it to "on" so people can experience it and then turn it
off if they don't like it. With notifications off by default, most people would never notice that the
possibility exists. But I'm open to a discussion here.
Stefan
805 | 20 Jun 2012 | Konstantin Olchanski | Info | midas vme benchmarks
I am recording here the results from a test VME system using two VF48 waveform digitizers and a 64-bit
dual-core VME processor (V7865). VF48 data suppression is off, VF48 modules set to read 48 channels,
1000 ADC samples each. mlogger data compression is enabled (gzip -1).
Event rate is about 200/sec
VME Data rate is about 40 Mbytes/sec
System is 100% busy (estimate)
System utilization of host computer (dual-core 2.2GHz, dual-channel DDR333 RAM):
(note high CPU use by mlogger for gzip compression of midas files)
top - 12:23:45 up 68 days, 20:28, 3 users, load average: 1.39, 1.22, 1.04
Tasks: 193 total, 3 running, 190 sleeping, 0 stopped, 0 zombie
Cpu(s): 32.1%us, 6.2%sy, 0.0%ni, 54.4%id, 2.7%wa, 0.1%hi, 4.5%si, 0.0%st
Mem: 3925556k total, 3797440k used, 128116k free, 1780k buffers
Swap: 32766900k total, 8k used, 32766892k free, 2970224k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5169 trinat 20 0 246m 108m 97m R 64.3 2.8 29:36.86 mlogger
5771 trinat 20 0 119m 98m 97m R 14.9 2.6 139:34.03 mserver
6083 root 20 0 0 0 0 S 2.0 0.0 0:35.85 flush-9:3
1097 root 20 0 0 0 0 S 0.9 0.0 86:06.38 md3_raid1
System utilization of VME processor (dual-core 2.16 GHz, single-channel DDR2 RAM):
(note the more than 100% CPU use of multithreaded fevme)
top - 12:24:49 up 70 days, 19:14, 2 users, load average: 1.19, 1.05, 1.01
Tasks: 103 total, 1 running, 101 sleeping, 1 stopped, 0 zombie
Cpu(s): 6.3%us, 45.1%sy, 0.0%ni, 47.7%id, 0.0%wa, 0.2%hi, 0.6%si, 0.0%st
Mem: 1019436k total, 866672k used, 152764k free, 3576k buffers
Swap: 0k total, 0k used, 0k free, 20976k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19740 trinat 20 0 177m 108m 984 S 104.5 10.9 1229:00 fevme_gef.exe
1172 ganglia 20 0 416m 99m 1652 S 0.7 10.0 1101:59 gmond
32353 olchansk 20 0 19240 1416 1096 R 0.2 0.1 0:00.05 top
146 root 15 -5 0 0 0 S 0.1 0.0 42:52.98 kslowd001
Attached are the CPU and network ganglia plots from lxdaq09 (VME) and ladd02 (host).
The regular bursts of "network out" on ladd02 are lazylogger writing .mid.gz files to HADOOP HDFS.
K.O.
Attachment 1: lxdaq09cpu.gif
Attachment 2: lxdaq09net.gif
Attachment 3: ladd02cpu.gif
Attachment 4: ladd02net.gif
806 | 20 Jun 2012 | Konstantin Olchanski | Info | midas vme benchmarks
> I am recording here the results from a test VME system using two VF48 waveform digitizers
Note 1: data compression is about 89%, hence the "data to disk" rate is much smaller than the
"data from VME" rate: at 40 Mbytes/sec from VME, roughly 40 x (1 - 0.89) = 4.4 Mbytes/sec goes to disk.
Note 2: switching from VME MBLT64 block transfer to 2eVME block transfer:
- raises the VME data rate from 40 to 48 M/s
- raises the event rate from 220/sec to 260/sec
- raises mlogger CPU use from 64% to about 80%
This is consistent with the measured VME block transfer rates for the VF48 module: MBLT64 is about 40 M/s, 2eVME is about 50 M/s (could be
80 M/s if no clock cycles were lost syncing the VME signals with the VF48 clocks). 2eSST is implemented but impossible to use - the VF48
cannot drive the VME BERR and RETRY signals. (Evil standards, grumble, grumble, grumble.)
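For reference, here is roughly what "gzip -1" means in-process: a minimal sketch using the zlib API
(the file name and the event loop are made up, this is not the actual mlogger code).

/* Minimal sketch: write an event stream through zlib at compression
   level 1 ("gzip -1"). File name and event loop are made up. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
   /* "wb1" = write, binary, compression level 1: the fastest setting,
      trading compression ratio for CPU time - the right trade-off
      when the logger is CPU-bound, as in this benchmark. */
   gzFile f = gzopen("run00001.mid.gz", "wb1");
   if (f == NULL) {
      perror("gzopen");
      return 1;
   }

   char event[64*1024];
   memset(event, 0, sizeof(event));       /* stand-in for real VME data */

   for (int i = 0; i < 1000; i++) {       /* pretend event loop */
      if (gzwrite(f, event, sizeof(event)) != (int)sizeof(event)) {
         fprintf(stderr, "gzwrite failed\n");
         break;
      }
   }

   gzclose(f);
   return 0;
}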
K.O.
807 | 21 Jun 2012 | Stefan Ritt | Info | midas vme benchmarks
Just for completeness: attached is the VME transfer speed I get with the SIS3100/SIS1100 interface using
2eVME transfer. This curve can be explained exactly by an overhead of 125 us per DMA transfer plus a
continuous link speed of 83 MB/sec.
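Spelled out, for a DMA transfer of n bytes the model is:
   time(n) = 125 us + n / (83 MB/sec)
   rate(n) = n / time(n)
For example, a 10 kbyte transfer spends about as much time in the fixed overhead as on the link
(125 us vs 120 us) and so runs at only about 41 MB/sec; the rate approaches the full 83 MB/sec
only for transfers much larger than 125 us x 83 MB/sec ~ 10 kbytes.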
Attachment 1: Screen_Shot_2012-06-21_at_10.14.09_.png
809 | 21 Jun 2012 | Konstantin Olchanski | Info | midas vme benchmarks
> Just for completeness: attached is the VME transfer speed I get with the SIS3100/SIS1100 interface using
> 2eVME transfer. This curve can be explained exactly by an overhead of 125 us per DMA transfer plus a
> continuous link speed of 83 MB/sec.
What VME module is on the other end?
K.O.
810 | 22 Jun 2012 | Stefan Ritt | Info | midas vme benchmarks
> What VME module is on the other end?
The PSI-built DRS4 board, where we implemented the 2eVME protocol in the Virtex II FPGA. The same speed can be obtained with the commercial
VME memory module CI-VME64 from Chrislin Industries (see http://www.controlled.com/vme/chinp1.html).
Stefan
812 | 24 Jun 2012 | Konstantin Olchanski | Info | midas vme benchmarks
> [with ...] the PSI-built DRS4 board, where we implemented the 2eVME protocol in the Virtex II FPGA.
This is an interesting hardware benchmark. Do you also have benchmarks of the MIDAS system using the DRS4 (measurements
of end-to-end data rates, maximum event rate, maximum trigger rate, any tuning of the frontend program
and of the MIDAS experiment needed to achieve those rates, etc.)?
K.O.
813 | 24 Jun 2012 | Konstantin Olchanski | Info | midas vme benchmarks
> > I am recording here the results from a test VME system using two VF48 waveform digitizers
(I now have 4 VF48 waveform digitizers, so the event rates are half of those reported before. The data rate
is up to 51 M/s: the event size has doubled while the per-event overhead stayed the same, so the effective
data rate goes up.)
This message demonstrates the effects of tuning the MIDAS system for high rate data taking.
Attached is the history plot of the event rate counters, which shows the real-time performance of the MIDAS
system in better detail than the average event rate reported on the MIDAS status page. For an
ideal real-time system, the event rate should be constant, without any drop-outs.
Seen on the plot:
run 75: the periodic dropouts in the event rate correspond to the lazylogger writing data into HADOOP
HDFS. Clearly the host computer cannot keep up with both data taking and data archiving at the same
time. (see the output of "top" "with HDFS" and "without HDFS" below)
run 76: SYSTEM buffer size increased from 100Mbytes to 300Mbytes. Maybe there is an improvement.
run 77-78: "event_buffer_size" inside the multithreaded (EQ_MULTITHREAD) VME frontend increased from
100Mbytes to 300Mbytes. (6 seconds of data at 50M/s). Much better, yes?
Conclusion: for improved real-time performance, there should be sufficient buffering between the VME
frontend readout thread and the mlogger data compression thread.
For the benchmark hardware, at 50M/s, 4 seconds of buffer space (100M in the SYSTEM buffer and 100M in
the frontend) is not enough. 12 seconds of buffer space (300+300) is much better. (Or buy a faster
backend computer.)
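To make the conclusion concrete, here is a minimal sketch of such buffering (hypothetical code with
made-up names and sizes, not the actual MIDAS SYSTEM buffer or frontend code): the readout thread
fills a ring buffer holding several seconds of data, and the logging thread drains it at its own
pace, so short stalls on the logging side eat into the headroom instead of stopping data taking.

#include <pthread.h>
#include <stddef.h>

#define RATE         ((size_t)50*1024*1024) /* bytes/sec, as in this benchmark */
#define HEADROOM_SEC 6                      /* 300 Mbytes at 50 M/s */
#define RING_SIZE    (RATE * HEADROOM_SEC)

static char ring[RING_SIZE];           /* real code would use shared memory */
static size_t wr, rd, used;            /* all protected by mu */
static pthread_mutex_t mu        = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Readout thread: blocks only after the logger has fallen a full
   HEADROOM_SEC behind, so short logger stalls (a lazylogger burst,
   a slow compression stretch) do not interrupt data taking. */
void ring_put(const char *ev, size_t n)
{
   pthread_mutex_lock(&mu);
   while (RING_SIZE - used < n)
      pthread_cond_wait(&not_full, &mu);
   for (size_t i = 0; i < n; i++)      /* byte-wise copy: simple, not fast */
      ring[(wr + i) % RING_SIZE] = ev[i];
   wr = (wr + n) % RING_SIZE;
   used += n;
   pthread_cond_signal(&not_empty);
   pthread_mutex_unlock(&mu);
}

/* Logging thread: drains the ring and compresses/writes at its own pace. */
size_t ring_get(char *out, size_t nmax)
{
   pthread_mutex_lock(&mu);
   while (used == 0)
      pthread_cond_wait(&not_empty, &mu);
   size_t n = (nmax < used) ? nmax : used;
   for (size_t i = 0; i < n; i++)
      out[i] = ring[(rd + i) % RING_SIZE];
   rd = (rd + n) % RING_SIZE;
   used -= n;
   pthread_cond_signal(&not_full);
   pthread_mutex_unlock(&mu);
   return n;
}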
P.S. HDFS data rate as measured by lazylogger is around 20M/s for CDH3 HADOOP and around 30M/s for
CDH4 HADOOP.
P.S. Observe the ever-present unexplained event rate fluctuations between 130 and 140 events/sec.
K.O.
---- "top" output during normal data taking, notice mlogger data compression consumes 99% CPU at 51
M/s data rate.
top - 08:55:22 up 72 days, 17:00, 5 users, load average: 2.47, 2.32, 2.27
Tasks: 206 total, 2 running, 204 sleeping, 0 stopped, 0 zombie
Cpu(s): 52.2%us, 6.1%sy, 0.0%ni, 34.4%id, 0.8%wa, 0.1%hi, 6.2%si, 0.0%st
Mem: 3925556k total, 3064928k used, 860628k free, 3788k buffers
Swap: 32766900k total, 200704k used, 32566196k free, 2061048k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5826 trinat 20 0 437m 291m 287m R 97.6 7.6 636:39.63 mlogger
27617 trinat 20 0 310m 288m 288m S 24.6 7.5 6:59.28 mserver
1806 ganglia 20 0 415m 62m 1488 S 0.9 1.6 668:43.55 gmond
--- "top" output during lazylogger/HDFS activity. Observe high CPU use by lazylogger and fuse_dfs (the
HADOOP HDFS client). Observe that CPU use adds up to 167% out of 200% available.
top - 08:57:16 up 72 days, 17:01, 5 users, load average: 2.65, 2.35, 2.29
Tasks: 206 total, 2 running, 204 sleeping, 0 stopped, 0 zombie
Cpu(s): 57.6%us, 23.1%sy, 0.0%ni, 8.1%id, 0.0%wa, 0.4%hi, 10.7%si, 0.0%st
Mem: 3925556k total, 3642136k used, 283420k free, 4316k buffers
Swap: 32766900k total, 200692k used, 32566208k free, 2597752k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5826 trinat 20 0 437m 291m 287m R 68.7 7.6 638:24.07 mlogger
23450 root 20 0 1849m 200m 4472 S 64.4 5.2 75:35.64 fuse_dfs
27617 trinat 20 0 310m 288m 288m S 18.5 7.5 7:22.06 mserver
26723 trinat 20 0 38720 11m 1172 S 17.9 0.3 22:37.38 lazylogger
7268 trinat 20 0 1007m 35m 4004 D 1.3 0.9 187:14.52 nautilus
1097 root 20 0 0 0 0 S 0.8 0.0 101:45.55 md3_raid1
Attachment 1: Scalers_(1).gif
814 | 25 Jun 2012 | Stefan Ritt | Info | midas vme benchmarks
> P.S. Observe the ever-present unexplained event rate fluctuations between 130 and 140 events/sec.
An important aspect of optimizing your system is to keep the network traffic under control. I use GBit Ethernet between FE and BE, and make sure the switch
can accommodate all of the accumulated network traffic through its backplane. This way I do not get any TCP retransmits, which kill you: if a single low-level
ethernet packet is lost due to a collision, the TCP stack retransmits it, and depending on the local settings this can happen after a timeout of one (!) second,
which already punches a hole in your data rate. On the MSCB system I actually use UDP packets, where I schedule the retransmits myself. On a LAN, a 10-100 ms
timeout is enough. The one-second timeout is optimized for a WAN (like between two continents), where it is fine, but it is not what you want on a LAN. Also,
make sure that the outgoing traffic (lazylogger) uses a different network card than the incoming traffic; I found that this also helps a lot.
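For illustration, a minimal sketch of such a self-scheduled retransmit (hypothetical code, not the
actual MSCB implementation): send the request over UDP, wait up to 100 ms for the reply, and
retransmit on timeout.

#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <stddef.h>

int udp_transact(int sock, const struct sockaddr_in *dst,
                 const void *req, size_t reqlen,
                 void *reply, size_t replen)
{
   for (int attempt = 0; attempt < 5; attempt++) {
      /* (re)send the request; UDP gives no delivery guarantee */
      sendto(sock, req, reqlen, 0,
             (const struct sockaddr *)dst, sizeof(*dst));

      fd_set fds;
      FD_ZERO(&fds);
      FD_SET(sock, &fds);
      struct timeval tv = { 0, 100000 };   /* 100 ms: LAN-scale timeout */

      if (select(sock + 1, &fds, NULL, NULL, &tv) > 0)
         return (int)recv(sock, reply, replen, 0); /* reply arrived */

      /* timeout: loop around and retransmit ourselves, instead of
         waiting for a TCP-style one-second retransmit timer */
   }
   return -1;  /* peer did not answer after 5 attempts */
}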
- Stefan
815 | 25 Jun 2012 | Konstantin Olchanski | Info | midas vme benchmarks
> An important aspect of optimizing your system is to keep the network traffic under control. [...] Also,
> make sure that the outgoing traffic (lazylogger) uses a different network card than the incoming traffic; I found that this also helps a lot.
In typical applications at TRIUMF we do not set up a private network for the data traffic - data from the VME processor to the backend computer
and data from the backend computer to DCACHE all go through the TRIUMF network.
This is justified by the required data rates - the highest data rate experiment running right now is PIENU,
at about 10 M/s sustained, nominally April through December. (This is 20% of the data rate of the present benchmark.)
The next highest data rate experiment is T2K/ND280 in Japan running at about 20 M/s (neutrino beam, data rate
is dominated by calibration events).
All other experiments at TRIUMF run at lower data rates (low intensity light ion beams), but we are planning for an experiment
that will run at 300 M/s sustained over 1 week of scheduled beam time.
But we do have the technical capability to separate data traffic from the TRIUMF network - the VME processors and
the backend computers all have dual GigE NICs.
(I did not say so, but obviously the present benchmark, at 50 M/s VME to backend and 20-30 M/s from backend to HDFS, runs on a GigE network.)
(I am not monitoring the TCP loss and retransmit rates at the present time.)
(The network switch between VME and backend is a "cheapest available" rackmountable 8-port GigE switch. The network between
the backend and the HDFS nodes is mostly Nortel 48-port GigE edge switches with single-GigE uplinks to the core router.)
K.O.