ID | Date | Author | Topic | Subject |
958 | 11 Feb 2014 | Stefan Ritt | Bug Report | mhttpd, etc. |
Andreas Suter wrote:
> I found a couple of bugs in the current mhttpd, midas version: "93fa5ed"
See my reply on the issue tracker:
https://bitbucket.org/tmidas/midas/issue/18/mhttpd-bugs |
957 | 11 Feb 2014 | Stefan Ritt | Forum | Huge events (>10MB) every second or so |
> I'm looking into using MIDAS for an experiment that creates one large event
> (20MB or more) every second.
>
> Q1: It looks like I should use EQ_FRAGMENTED. Has this feature been in use
> recently? Is it known to work/not work?
>
> More specifically, the computer should initiate a 1 second data taking, start to
> suck the data out of the electronics (which may take a while), change some
> experimental parameters, and start over.
>
> Q2: What's the best way to do this? EQ_PERIODIC?
> I cannot guarantee that the time required to read the hardware has an upper bound.
> In a standalone-prog I would simply use a big loop and let the machine execute
> it as fast as it can: 1.1s, 1.5s, 1.1s, 1.3s, 2.5s, ..... depending on the HW
> deadtimes.
> Will this work with EQ_PERIODIC?
>
> (Sorry for these maybe stupid questions, but I have so far only used MIDAS for
> externally generated events, with <32kB event size).
>
>
> Thanks a lot,
>
> Randolf
Hi Randolf,
EQ_FRAGMENTED is kind of historical, dating from when computers had only a few MB of memory and one had to play special tricks to get large data buffers through. Today I
would just use EQ_PERIODIC and increase the MIDAS maximal event size to your needs. For details look here:
https://midas.triumf.ca/MidasWiki/index.php/Event_Buffer
The front-end scheduler is asynchronous, which means that your readout is called when the given period (1 second) has elapsed. If the readout takes longer
than 1 s, the scheduler will (hopefully) call your readout immediately after the event has been sent, so you automatically get your maximal data rate. At MEG, we
use 2 MB events at 10 Hz, so a 20 MB/s data rate should not be a problem on decent computers.
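A minimal sketch of what such a frontend setup could look like, modeled on the standard MIDAS example frontend (the equipment name, sizes and readout routine are illustrative assumptions, not values from this thread):

/* frontend.c globals read by mfe.c - sizes are illustrative */
INT max_event_size = 30 * 1024 * 1024;        /* must exceed the largest event (~20 MB here) */
INT max_event_size_frag = 5 * 1024 * 1024;    /* only relevant for EQ_FRAGMENTED */
INT event_buffer_size = 2 * 30 * 1024 * 1024; /* frontend ring buffer, at least 2 events */

INT read_big_event(char *pevent, INT off);    /* readout routine, to be implemented */

EQUIPMENT equipment[] = {
   {"BigEvents",              /* equipment name (made up) */
    {1, 0,                    /* event ID, trigger mask */
     "SYSTEM",                /* write events to the SYSTEM buffer */
     EQ_PERIODIC,             /* scheduler-driven readout */
     0,                       /* no interrupt source */
     "MIDAS",                 /* format */
     TRUE,                    /* enabled */
     RO_RUNNING,              /* read only when running */
     1000,                    /* period in ms; a readout that takes longer is
                                 simply called again as soon as it returns */
     0,                       /* stop run after this event limit */
     0,                       /* number of sub-events */
     0,                       /* log history */
     "", "", "",},
    read_big_event,           /* readout routine */
   },
   {""}
};

Note that the shared-memory event buffer must also be enlarged to hold such events, as described on the Event_Buffer wiki page linked above.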
Best,
Stefan |
956 | 11 Feb 2014 | Randolf Pohl | Forum | Huge events (>10MB) every second or so |
I'm looking into using MIDAS for an experiment that creates one large event
(20MB or more) every second.
Q1: It looks like I should use EQ_FRAGMENTED. Has this feature been in use
recently? Is it known to work/not work?
More specifically, the computer should initiate a 1 second data taking, start to
suck the data out of the electronics (which may take a while), change some
experimental parameters, and start over.
Q2: What's the best way to do this? EQ_PERIODIC?
I cannot guarantee that the time required to read the hardware has an upper bound.
In a standalone-prog I would simply use a big loop and let the machine execute
it as fast as it can: 1.1s, 1.5s, 1.1s, 1.3s, 2.5s, ..... depending on the HW
deadtimes.
Will this work with EQ_PERIODIC?
(Sorry for these maybe stupid questions, but I have so far only used MIDAS for
externally generated events, with <32kB event size).
Thanks a lot,
Randolf |
955 | 11 Feb 2014 | Andreas Suter | Bug Report | mhttpd, etc. |
I found a couple of bugs in the current mhttpd, midas version: "93fa5ed"
This concerns all browsers I checked (Firefox, Chrome, Internet Explorer, Opera).
1) When trying to change a value of a frontend using a multi class driver (we
have a lot of them), the field for changing the value appears, but I cannot get it set!
Neither via the two Set buttons (why two?) nor via return.

It also would be nice if the CSS could be changed such that input/output for the
multi driver would be better separated; something along the lines of what is suggested in

2) When changing a value (generic/hv class driver), the index of the array
remains shown until the next update of the page.

3) We are using a web password. In the current version the password is shown in plain text while being entered.
4) I just copied the header as described here: https://midas.triumf.ca/elog/Midas/908, but I get a different result:

It looks like a wrong cookie is being filtered? |
954 | 05 Feb 2014 | Stefan Ritt | Bug Report | MIDAS password protection is broken |
> If you follow the MIDAS documentation for setting up password protection, you will get strange messages:
This is interesting. When I used it last time (some years ago...) it worked fine. I did not touch this, and now it's broken. Must be related to some modifications of the semaphore system.
Well, anyhow, the problem seems to me to be the db_close_all_databases() call and the re-opening of the ODB. Apparently the db_close_database() call does not clean up the semaphores properly.
Actually there is absolutely no need to close and re-open the ODB upon a wrong password, so I just removed that code and now it works again.
/Stefan |
953 | 05 Feb 2014 | Stefan Ritt | Bug Report | MIDAS Web password broken |
> The MIDAS Web password function is broken - with the web password enabled, I am not prompted for a
> password when editing ODB. The password still partially works - I am prompted for the web password
> when starting a run. K.O.
>
> P.S. https://midas.triumf.ca/MidasWiki/index.php/Security says "web password" needed for "write access",
> but does not specify if this includes editing odb. (I would think so, and I think I remember that it used to).
Didn't we agree to put those issues into the bitbucket issue tracker?
This functionality got broken when implementing the new inline edit functionality. Actually one has to check for the password "manually". The old way
was that there was a web page asking for the web password, but if we do ODBSet via Ajax there is nobody who could fill out that form. So I added a
"manual" check in ODBCheckWebPassword(). This will not work for custom pages, but they have their own way to define passwords.
/Stefan |
952 | 31 Jan 2014 | Stefan Ritt | Info | Separation of MSCB subtree |
Since several projects at PSI need MSCB but not MIDAS, I decided to separate the two repositories. So if you
need MIDAS with MSCB support inside mhttpd, you have to clone MIDAS, MXML and MSCB from bitbucket
(or the local clone at TRIUMF) as described in
https://midas.triumf.ca/MidasWiki/index.php/Main_Page#Download
I tried to fix all Makefiles to link to the new locations, but I'm not sure if I got them all. So if something does not
compile, please let me know.
-Stefan |
951 | 30 Jan 2014 | Stefan Ritt | Bug Fix | make dox |
> The capability to generate doxygen documentation of MIDAS was restored.
>
> Use "make dox" and "make cleandox",
> find generated documentation in ./html,
> look at it via "firefox html/index.html".
>
> The documentation is not generated by default - it takes quite a long time to build all the call graphs.
>
> And the call graphs are what make this documentation useful - without some visual graphical
> representation it is quite difficult to understand some parts of MIDAS. Both caller and callee graphs are
> generated.
>
> Note that doxygen documentation for the javascript functions in mhttpd.js is also generated, making a
> handy reference in addition to the full documentation on the MIDAS Wiki.
>
> K.O.
To generate the files, you need doxygen installed, which not everybody has. Is there a web site where one can see the generated graphs?
/Stefan |
950 | 29 Jan 2014 | Konstantin Olchanski | Bug Fix | make dox |
The capability to generate doxygen documentation of MIDAS was restored.
Use "make dox" and "make cleandox",
find generated documentation in ./html,
look at it via "firefox html/index.html".
The documentation is not generated by default - it takes quite a long time to build all the call graphs.
And the call graphs are what make this documentation useful - without some visual graphical
representation it is quite difficult to understand some parts of MIDAS. Both caller and callee graphs are
generated.
Note that doxygen documentation for the javascript functions in mhttpd.js is also generated, making a
handy reference in addition to the full documentation on the MIDAS Wiki.
K.O. |
949 | 16 Jan 2014 | Konstantin Olchanski | Info | MIDAS and "international characters", UTF-8 and Unicode. |
I made some tests of MIDAS support for "international characters" and we seem to be in a reasonable
shape.
The accepted standard is UTF-8 encoding of Unicode, and the MIDAS core is believed to be UTF-8 clean -
one can use "international characters" in ODB names, in ODB values, in filenames, etc.
The web interface had some problems with percent-encoding of ODB URLs, but as of the current git version,
everything seems to work okay, as long as the web browser is in the UTF-8 encoding mode. The default
mode is "Western ISO-8859-1", and javascript encodeURIComponent() is mangling some stuff, making the
ODB editor not work. Switching to UTF-8 mode seems to fix that.
Perhaps we should make UTF-8 encoding the default for mhttpd-generated web pages. This should be
okay for TRIUMF - we use the English language almost exclusively - but we need to check with other labs before
making such a change. I especially worry about PSI because I am not sure if and how they use any of the special
German-language characters.
On the minus side, odbedit does not seem to accept non-English characters at all. Maybe it is easy to fix.
K.O. |
948 | 15 Jan 2014 | Konstantin Olchanski | Bug Fix | Fixed spurious symlinks to midas.log |
In some experiments (e.g. DEAP), we see spurious symlinks to midas.log scattered just about everywhere. I
now traced this to an uninitialized variable in cm_msg_log() and it should be fixed now. K.O. |
947 | 15 Jan 2014 | Konstantin Olchanski | Bug Report | MIDAS password protection is broken |
> I thought to improve this by fixing a bug in cm_msg_log() (where the messages are coming from)
The periodic messages about broken semaphore actually come from al_check(). I put some whining there, too.
K.O. |
946 | 15 Jan 2014 | Konstantin Olchanski | Bug Report | MIDAS Web password broken |
The MIDAS Web password function is broken - with the web password enabled, I am not prompted for a
password when editing ODB. The password still partially works - I am prompted for the web password
when starting a run. K.O.
P.S. https://midas.triumf.ca/MidasWiki/index.php/Security says "web password" needed for "write access",
but does not specify if this includes editing odb. (I would think so, and I think I remember that it used to). |
945 | 15 Jan 2014 | Konstantin Olchanski | Bug Report | MIDAS password protection is broken |
If you follow the MIDAS documentation for setting up password protection, you will get strange messages:
ladd00:midas$ ./linux/bin/odbedit
[local:testexpt:S]/>passwd <---- set up a password
Password:
Retype password:
[local:testexpt:S]/> exit
ladd00:midas$ odbedit
Password: <---- enter correct password here
ss_semaphore_wait_for: semop/semtimedop(21135376) returned -1, errno 22 (Invalid argument)
ss_semaphore_release: semop/semtimedop(21135376) returned -1, errno 22 (Invalid argument)
[local:testexpt:S]/>ss_semaphore_wait_for: semop/semtimedop(21037069) returned -1, errno 43 (Identifier removed)
The same messages will appear from all other programs - mhttpd, etc. They will be printed about every 1 second.
So what do they mean? They mean what they say - the semaphore is not there, it is easy to check using "ipcs" that semaphores with
those ids do not exist. In fact all the semaphores are missing (the ODB semaphore is eventually recreated, so at least ODB works
correctly).
In this situation, MIDAS will not work correctly.
What is happening?
- cm_connect_experiment1() creates all the semaphores and remembers them in cm_set_experiment_semaphore()
- calls cm_set_client_info()
- cm_set_client_info() finds ODB /expt/sec/password, and returns CM_WRONG_PASSWORD
- before returning, it calls db_close_all_databases() and bm_close_all_buffers(), which delete all semaphores (put a print statement in
ss_semaphore_delete() to see this).
- (values saved by cm_set_experiment_semaphore() are stale now).
- (if by luck you have other midas programs still running, the semaphores will not be deleted)
- we are back to cm_connect_experiment1() which will ask for the password, call cm_set_client_info() again and continue as usual
- it will reopen ODB, recreating the ODB semaphore
- (but all the other semaphores are still deleted and values saved by cm_set_experiment_semaphore() are stale)
I thought to improve this by fixing a bug in cm_msg_log() (where the messages are coming from) - it tries to lock the "MSG"
semaphore, but even if it could not lock it, it continues as usual and even calls an unlock at the end (very bad). For catastrophic
locking failures like this (semaphore is deleted), we usually abort. But if I abort here, I get completely locked out from odb - odbedit
crashes right away and there is no way to do any corrective action other than to delete odb and reload it from an xml file.
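For illustration, the pattern described above looks roughly like this (a paraphrase, not the actual MIDAS source; write_log_file() is a hypothetical stand-in for the file-writing code):

/* paraphrased sketch of the buggy pattern in cm_msg_log() */
INT status = ss_semaphore_wait_for(semaphore_msg, 1000);
/* bug: status is ignored - on failure we still write and still unlock */
write_log_file(message);
ss_semaphore_release(semaphore_msg);

/* safer: treat a failed lock as an error and skip the critical section */
status = ss_semaphore_wait_for(semaphore_msg, 1000);
if (status != SS_SUCCESS)
   return status;   /* do not touch the shared file, do not unlock */
write_log_file(message);
ss_semaphore_release(semaphore_msg);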
I know that some experiments use this password protection - why/how does it work there?
I think they are okay because they put critical programs like odbedit, mserver, mlogger and mhttpd into "/expt/sec/allowed
programs". In this case they pass the password check in cm_set_client_info() and the semaphores are not deleted. If any subsequent
program asks for the password, the semaphores survive because mlogger or mhttpd is already running and keeps the semaphores from
being deleted.
What a mess.
K.O. |
944 | 17 Dec 2013 | Stefan Ritt | Info | IEEE Real Time 2014 Call for Abstracts |
Hello,
I'm co-organizing the upcoming Real Time Conference, which also covers the field of data acquisition, so it might be interesting for people working
with MIDAS. If you have something to report, you could also consider sending an abstract to this conference. It will be held in Nara, Japan. The conference
site is now open at http://rt2014.rcnp.osaka-u.ac.jp/
Best regards,
Stefan Ritt |
943 | 16 Dec 2013 | Konstantin Olchanski | Bug Fix | Abolished SYNC and ASYNC defines |
A few months ago, the definitions of SYNC and ASYNC in midas.h were changed away from "0" and "1",
and this caused problems with some event buffer management functions bm_xxx().
For example, when event buffers are getting full, bm_send_event(SYNC) unexpectedly started returning
BM_ASYNC_RETURN instead of waiting for free space, causing unexpected crashes of frontend programs.
Part of the problem was confusion between SYNC/ASYNC used by buffer management (bm_xxx) and by run
transition (cm_transition()) functions. Adding to confusion, documentation of bm_send_event() & co used
FALSE/TRUE while most actual calls used SYNC/ASYNC.
To sort this out, an executive decision was made to abolish the SYNC/ASYNC defines:
For buffer management calls bm_send_event(), bm_receive_event(), etc, please use:
SYNC -> BM_WAIT
ASYNC -> BM_NO_WAIT
For run transitions, please use:
SYNC -> TR_SYNC
ASYNC -> TR_ASYNC
MTHREAD -> TR_MTHREAD
DETACH -> TR_DETACH
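For example, existing calls would be updated like this (a sketch; the buffer handle and arguments are illustrative):

/* before (SYNC/ASYNC defines, now abolished): */
bm_send_event(buffer_handle, pevent, size, SYNC);
cm_transition(TR_START, 0, NULL, 0, ASYNC, FALSE);

/* after: */
bm_send_event(buffer_handle, pevent, size, BM_WAIT);
cm_transition(TR_START, 0, NULL, 0, TR_ASYNC, FALSE);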
K.O. |
942 | 16 Dec 2013 | Konstantin Olchanski | Info | MIDAS on ARM |
I added MIDAS Makefile rules for building ARM binaries: "make linuxarm" and "make cleanarm" will create
(and clean) object files, libraries and executables under "linux-arm" using the TI Sitara ARM SDK or the
Yocto SDK ARM cross-compilers (GCC 4.7.x and 4.8.x respectively). (Makefile rules for building PPC
binaries have existed for years).
The hardware we have at TRIUMF consists of "ARMv7" machines - TI Sitara 335x CPUs (google "mityarm") and Altera
Cyclone 5 FPGA ARM (google "sockit") - as opposed to the ARMv5 CPU on the Raspberry Pi. The software
binary API standard settled on by Fedora Linux is "hard float" (as opposed to the "soft float" used by older SDKs).
So "ARMv7 hard float" is what we intend to use at TRIUMF, but ARMv5 and soft-float should also work ok,
so please report successes and/or problems to this forum.
K.O. |
941 | 28 Nov 2013 | Konstantin Olchanski | Info | Audit of fixed size arrays |
In one of the experiments, we hit a long-standing bug in mdump - there was an array of 32 equipments and if
there were more than 32 entries under /equipment, it would overrun and corrupt memory. Somehow this
only showed up after mdump was switched to c++. The solution was to use std::vector instead of fixed
size array.
Just in case, I checked other midas programs for fixed size arrays (other than fixed size strings) and found
none. (in midas.c, there is a fixed size array of TR_FIFO[10], but code inspection shows that it cannot
overrun).
I used this script. It can be modified to also identify any strangely sized string arrays.
K.O.
#!/usr/bin/perl -w
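# reads C source from stdin and reports fixed-size (non-char) array declarations
# of size > 3, e.g. (hypothetical usage): cat src/*.c | perl this_script.pl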
while (1) {
my $in = <STDIN>;
last unless $in;
#print $in;
$in =~ s/^\s+//;
next if $in =~ /^char/;
next if $in =~ /^static char/;
my $a = $in =~ /(.*)\[(\d+)\]/;
next unless $a;
my $a1 = $1;
my $a2 = $2;
next if $a2 == 0;
next if $a2 == 1;
next if $a2 == 2;
next if $a2 == 3;
#print "[$a] [$a1] [$a2]\n";
print "-> $a1[$a2]\n";
}
# end |
940 | 21 Nov 2013 | Stefan Ritt | Bug Report | Too many bm_flush_cache() in mfe.c |
> And I think that works just fine for frontends directly connected to the shared memory, one call to
> bm_flush_cache() should be sufficient.
That's correct. What you want is one flush per second or so for polled events, and one per periodic event (which anyhow will typically come only every 10 seconds or so). If there are three calls
per event, this is certainly too much.
> But for remote frontends connected through the mserver, it turns out there is a race condition between
> sending the event data on one tcp connection and sending the bm_flush_cache() rpc request on another
> tcp connection.
>
> ...
>
> One solution to this would be to implement periodic bm_flush_cache() in the mserver, making all calls to
> bm_flush_cache() in mfe.c unnecessary (unless it's a direct connection to shared memory).
>
> Another solution could be to send events with a special flag telling the mserver to "flush the buffer right
> away".
That's a very good and useful observation. I never really thought about that.
Looking at your proposed solutions, I prefer the second one. mserver is just an interface for RPC calls; it should not do anything "by itself". This was a strategic decision at the beginning.
So sending a flag to punch through the cache on the mserver side seems to me to have fewer side effects. It will just break binary compatibility :-)
/Stefan |
939 | 20 Nov 2013 | Konstantin Olchanski | Bug Report | Too many bm_flush_cache() in mfe.c |
I was looking at something in the mserver and noticed that for remote frontends, for every periodic event,
there are about 3 RPC calls to bm_flush_cache().
Sure enough, in mfe.c::send_event(), for every event sent, there are 2 calls to bm_flush_cache() (once for
the buffer we used, a second time for all buffers). Then, for good measure, the mfe idle loop calls
bm_flush_cache() for all buffers about once per second (even if no events were generated).
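The pattern looks roughly like this (a paraphrase of the behavior described above, not the verbatim mfe.c source; SYNC was later renamed BM_WAIT, see entry 943 above):

/* paraphrased sketch of the flush pattern in mfe.c::send_event() */
bm_send_event(eq->buffer_handle, pevent, size, SYNC);
bm_flush_cache(eq->buffer_handle, SYNC);                /* flush #1: this buffer */
for (i = 0; equipment[i].name[0]; i++)
   if (equipment[i].buffer_handle)
      bm_flush_cache(equipment[i].buffer_handle, SYNC); /* flush #2: all buffers */
/* flush #3: the idle loop also flushes all buffers about once per second */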
So what is going on here? To allow good performance when processing many small events,
the MIDAS event buffer code (bm_send_event()) buffers small events internally, and only after this internal
buffer is full, the accumulated events are flushed into the shared memory event buffer,
where they become visible to the mlogger, mdump and other consumers.
Because of this internal buffering, infrequent small size periodic events can become
stuck for quite a long time, confusing the user: "my frontend is sending events, how come I do not
see them in mdump?"
To avoid this, mfe.c manually flushes these internal event buffers by calling bm_flush_cache().
And I think that works just fine for frontends directly connected to the shared memory, one call to
bm_flush_cache() should be sufficient.
But for remote frontends connected through the mserver, it turns out there is a race condition between
sending the event data on one tcp connection and sending the bm_flush_cache() rpc request on another
tcp connection.
I see that the mserver always reads the rpc connection before the event connection, so bm_flush_cache()
is done *before* the event is written into the buffer by bm_send_event(). So the newly
sent event is stuck in the buffer until bm_flush_cache() for the *next* event shows up:
mfe.c: send_event1 -> flush -> ... wait until next event ... -> send_event2 -> flush
mserver: flush -> receive_event1 -> ... wait ... -> flush -> receive_event2 -> ... wait ...
mdump -> ... nothing ... -> ... nothing ... -> event1 -> ... nothing ...
Enter the 2nd call to bm_flush_cache() in mfe.c (flush all buffers) - now, because the mserver seems to be
alternating between reading the rpc connection and the event connection, the race condition looks like
this:
mfe.c: send_event -> flush -> flush
mserver: flush -> receive_event -> flush
mdump: ... -> event -> ...
So in this configuration, everything works correctly, the data is not stuck anywhere - but by accident, and
at the price of an extra rpc call.
But what about the periodic 1/second bm_flush_cache() on all buffers? I think it does not quite work
either, because the race condition is still there: we send an event, and the first flush may race past it so that only
the 2nd flush gets the job done, making the delay between sending the event and seeing it in mdump
around 1-2 seconds (no more than 2 seconds, I think). Since users expect their events to show up "right
away", a 2 second delay is probably not very good.
Because periodic events are usually not high rate, the current situation (4 network transactions to send 1
event - 1x send event, 3x flush buffer) is probably acceptable. But this definitely limits the
maximum event rate to 3x (2x?) the mserver rpc latency - without the rpc calls to bm_flush_cache() there
would be no limit - the events themselves are sent through a pipelined tcp connection without
handshaking.
One solution to this would be to implement periodic bm_flush_cache() in the mserver, making all calls to
bm_flush_cache() in mfe.c unnecessary (unless it's a direct connection to shared memory).
Another solution could be to send events with a special flag telling the mserver to "flush the buffer right
away".
P.S. Look ma!!! A race condition with no threads!!!
K.O. |