Entry  14 Nov 2013, Konstantin Olchanski, Bug Report, MacOS10.9 strlcpy() problem 
On MacOS 10.9, MIDAS crashes in strlcpy() somewhere inside odb.c. We think this is because strlcpy() 
in MacOS 10.9 was changed to abort() if the input and output strings overlap. For overlapping memory one is 
supposed to use memmove(). This is fixed in current midas; for older versions, you can try this patch:

konstantin-olchanskis-macbook:midas olchansk$ git diff
diff --git a/src/odb.c b/src/odb.c
index 1589dfa..762e2ed 100755
--- a/src/odb.c
+++ b/src/odb.c
@@ -6122,7 +6122,10 @@ INT db_paste(HNDLE hDB, HNDLE hKeyRoot, const char *buffer)
                   pc++;
                while ((*pc == ' ' || *pc == ':') && *pc)
                   pc++;
-               strlcpy(data_str, pc, sizeof(data_str));
+
+               //strlcpy(data_str, pc, sizeof(data_str)); // MacOS 10.9 does not permit strlcpy() of overlapping strings
+               assert(strlen(pc) < sizeof(data_str)); // "pc" points at a substring inside "data_str"
+               memmove(data_str, pc, strlen(pc)+1);
 
                if (n_data > 1) {
                   data_str[0] = 0;
konstantin-olchanskis-macbook:midas olchansk$ 
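
For reference, the in-place shift pattern used by the patch boils down to this (a minimal standalone
sketch, not MIDAS code):

/* shifting a substring to the start of its own buffer: strlcpy() with
   overlapping src/dst is undefined (and aborts on MacOS 10.9), while
   memmove() is specified to handle overlap */
#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
   char data_str[32] = "  : value";
   char *pc = data_str;

   while (*pc == ' ' || *pc == ':')       /* skip leading separators */
      pc++;

   /* "pc" points inside "data_str", so the two regions overlap */
   assert(strlen(pc) < sizeof(data_str));
   memmove(data_str, pc, strlen(pc) + 1); /* +1 copies the '\0' */

   printf("'%s'\n", data_str);            /* prints 'value' */
   return 0;
}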


As historical reference:

a) MacOS documentation says "behavior is undefined", which is no longer true, the behaviour is KABOOM!
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man3/strlcpy.3.html

b) the original strlcpy paper from OpenBSD does not contain the word "overlap" 
http://www.courtesan.com/todd/papers/strlcpy.html

c) the OpenBSD man page says the same as the Apple man page (behaviour undefined)
http://www.openbsd.org/cgi-bin/man.cgi?query=strlcpy

d) the linux kernel strlcpy() uses memcpy() and is probably unsafe for overlapping strings
http://lxr.free-electrons.com/source/lib/string.c#L149

e) midas strlcpy() looks to be safe for overlapping strings.

K.O.
Entry  15 Nov 2013, Konstantin Olchanski, Bug Report, stuck data buffers 
We have seen a problem with stuck data buffers several times. The symptoms are very confusing - 
frontends cannot start and instead hang forever in a state that is very hard to kill. Also "mdump -s -d -z 
BUF03" for the affected data buffers is stuck.

We have identified the source of this problem - the semaphore for the buffer is locked and nobody 
will ever unlock it. MIDAS relies on a feature of SYSV semaphores whereby they are automatically 
unlocked by the OS and can never be stuck (see man semop, the SEM_UNDO function).

I think this SEM_UNDO function is broken in recent Linux kernels and sometimes the semaphore 
remains locked after the process that locked it has died. MIDAS is not programmed to deal with this 
situation and the stuck semaphore has to be cleared manually.
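
For reference, the SYSV locking pattern in question looks roughly like this (a sketch, not the actual
MIDAS code):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int lock_buffer(int semid)
{
   struct sembuf op;
   op.sem_num = 0;
   op.sem_op = -1;        /* acquire */
   op.sem_flg = SEM_UNDO; /* kernel should release the lock if we die */
   return semop(semid, &op, 1);
}

int unlock_buffer(int semid)
{
   struct sembuf op;
   op.sem_num = 0;
   op.sem_op = 1;         /* release */
   op.sem_flg = SEM_UNDO;
   return semop(semid, &op, 1);
}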

Here, "BUF3" is used as example, but we have seen "SYSTEM" and ODB with stuck semaphores, too.

Steps:
a) confirm that we are using SYSV semaphores: "ipcs" should show many semaphores
b) identify the stuck semaphore: "strace mdump -s -d -z BUF03".
c) there will be a large printout, but ultimately you will see repeated entries of 
"semtimedop(9633800, {{0, -1, SEM_UNDO}}, 1, {1, 0}^C <unfinished ...>"
d) erase the stuck semaphore "ipcrm -s 9633800", where the number comes from semtimedop() in 
the strace output.
e) try again: "mdump -s -d -z BUF03" should work now.

Ultimately, I think we should switch to POSIX semaphores - they are easier to manage (the strace 
and ipcrm dance becomes "rm /dev/shm/deap_BUF03.sem") - but they do not have the SEM_UNDO 
function, so detection of locked and stuck semaphores will have to be done by MIDAS. (Unless we 
can find some library of semaphore functions that already provides such advanced functionality).
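
A POSIX version would look roughly like this sketch (not existing MIDAS code; without SEM_UNDO, a
lock held by a dead process has to be detected by hand, for example with a timeout as below):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <time.h>

int lock_posix(const char *name)
{
   sem_t *sem = sem_open(name, O_CREAT, 0666, 1);
   if (sem == SEM_FAILED)
      return -1;

   struct timespec ts;
   clock_gettime(CLOCK_REALTIME, &ts);
   ts.tv_sec += 10;                     /* give up after 10 seconds */

   if (sem_timedwait(sem, &ts) != 0) {
      /* probable stuck semaphore: the holder died without sem_post() */
      /* recovery would be "rm /dev/shm/sem.deap_BUF03" or similar    */
      fprintf(stderr, "semaphore %s looks stuck\n", name);
      return -1;
   }
   return 0;                            /* unlock later with sem_post(sem) */
}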

K.O.
Entry  20 Nov 2013, Konstantin Olchanski, Bug Report, Too many bm_flush_cache() in mfe.c 
I was looking at something in the mserver and noticed that for remote frontends, for every periodic event, 
there are about 3 RPC calls to bm_flush_cache().

Sure enough, in mfe.c::send_event(), for every event sent, there are 2 calls to bm_flush_cache() (once for 
the buffer we used, then again for all buffers). Then, for good measure, the mfe idle loop calls 
bm_flush_cache() for all buffers about once per second (even if no events were generated).

So what is going on here? To allow good performance when processing many small events,
the MIDAS event buffer code (bm_send_event()) buffers small events internally, and only after this internal
buffer is full, the accumulated events are flushed into the shared memory event buffer,
where they become visible to the mlogger, mdump and other consumers.

Because of this internal buffering, infrequent small size periodic events can become
stuck for quite a long time, confusing the user: "my frontend is sending events, how come I do not
see them in mdump?"

To avoid this, mfe.c manually flushes these internal event buffers by calling bm_flush_cache().

And I think that works just fine for frontends directly connected to the shared memory; one call to 
bm_flush_cache() should be sufficient.
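
The pattern for a directly-connected frontend then looks something like this (a minimal sketch; the
exact bm_xxx() argument lists may differ between MIDAS versions):

#include "midas.h"

/* sketch only: send one periodic event and make it immediately
   visible to mlogger/mdump by flushing the internal write cache */
void send_and_flush(INT hbuf, EVENT_HEADER *pevent, INT size)
{
   bm_send_event(hbuf, pevent, size, SYNC);  /* may stay in the write cache */
   bm_flush_cache(hbuf, SYNC);               /* push it into shared memory  */
}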

But for remote frontends connected through the mserver, it turns out there is a race condition between 
sending the event data on one tcp connection and sending the bm_flush_cache() rpc request on another 
tcp connection.

I see that the mserver always reads the rpc connection before the event connection, so bm_flush_cache() 
is done *before* the event is written into the buffer by bm_send_event(). So the newly
sent event is stuck in the buffer until the bm_flush_cache() for the *next* event shows up:

mfe.c: send_event1 -> flush -> ... wait until next event ... -> send_event2 -> flush
mserver: flush -> receive_event1 -> ... wait ... -> flush -> receive_event2 -> ... wait ...
mdump -> ... nothing ... -> ... nothing ... -> event1 -> ... nothing ...

Enter the 2nd call to bm_flush_cache in mfe.c (flush all buffers) - now because mserver seems to be 
alternating between reading the rpc connection and the event connection, the race condition looks like 
this:

mfe.c: send_event -> flush -> flush
mserver: flush -> receive_event -> flush
mdump: ... -> event -> ...

So in this configuration, everything works correctly, the data is not stuck anywhere - but by accident, and 
at the price of an extra rpc call.

But what about the periodic 1/second bm_flush_cache() on all buffers? I think it does not quite work
either because the race condition is still there: we send an event, and the first flush may race it and only 
the 2nd flush gets the job done, so the delay between sending the event and seeing it in mdump would be 
around 1-2 seconds. (no more than 2 seconds, I think). Since users expect their events to show up "right
away", a 2 second delay is probably not very good.

Because periodic events are usually not high rate, the current situation (4 network transactions to send 1 
event - 1x send event, 3x flush buffer) is probably acceptable. But this definitely sets a limit on the 
maximum rate to 3x (2x?) the mserver rpc latency - without the rpc calls to bm_flush_cache() there
would be no limit - the events themselves are sent through a pipelined tcp connection without 
handshaking.

One solution to this would be to implement a periodic bm_flush_cache() in the mserver, making all calls to 
bm_flush_cache() in mfe.c unnecessary (unless it's a direct connection to shared memory).

Another solution could be to send events with a special flag telling the mserver to "flush the buffer right 
away".

P.S. Look ma!!! A race condition with no threads!!!

K.O.
    Reply  21 Nov 2013, Stefan Ritt, Bug Report, Too many bm_flush_cache() in mfe.c 
> And I think that works just fine for frontends directly connected to the shared memory, one call to 
> bm_flush_cache() should be sufficient.

That's correct. What you want is once per second or so for polled events, and once per periodic event (which anyhow will typically come only every 10 seconds or so). If there are 3 calls 
per event, this is certainly too much.


> But for remote fronends connected through the mserver, it turns out there is a race condition between 
> sending the event data on one tcp connection and sending the bm_flush_cache() rpc request on another 
> tcp connection.
> 
> ...
> 
> One solution to this would be to implement a periodic bm_flush_cache() in the mserver, making all calls to 
> bm_flush_cache() in mfe.c unnecessary (unless it's a direct connection to shared memory).
> 
> Another solution could be to send events with a special flag telling the mserver to "flush the buffer right 
> away".

That's a very good and useful observation. I never really thought about that. 

Looking at your proposed solutions, I prefer the second one. mserver is just an interface for RPC calls, it should not do anything "by itself". This was a strategic decision at the beginning. 
So sending a flag to punch through the cache on the mserver seems to me to have fewer side effects. It will just break binary compatibility :-)

/Stefan
Entry  15 Jan 2014, Konstantin Olchanski, Bug Report, MIDAS password protection is broken 
If you follow the MIDAS documentation for setting up password protection, you will get strange messages:

ladd00:midas$ ./linux/bin/odbedit
[local:testexpt:S]/>passwd                <---- setup a password
Password: 
Retype password: 
[local:testexpt:S]/> exit

ladd00:midas$ odbedit
Password:    <---- enter correct password here
ss_semaphore_wait_for: semop/semtimedop(21135376) returned -1, errno 22 (Invalid argument)
ss_semaphore_release: semop/semtimedop(21135376) returned -1, errno 22 (Invalid argument)
[local:testexpt:S]/>ss_semaphore_wait_for: semop/semtimedop(21037069) returned -1, errno 43 (Identifier removed)

The same messages will appear from all other programs - mhttpd, etc. They will be printed about once per second.

So what do they mean? They mean what they say - the semaphore is not there, it is easy to check using "ipcs" that semaphores with 
those ids do not exist. In fact all the semaphores are missing (the ODB semaphore is eventually recreated, so at least ODB works 
correctly).

In this situation, MIDAS will not work correctly.

What is happening?

- cm_connect_experiment1() creates all the semaphores and remembers them in cm_set_experiment_semaphore()
- calls cm_set_client_info()
- cm_set_client_info() finds ODB /expt/sec/password, and returns CM_WRONG_PASSWORD
- before returning, it calls db_close_all_databases() and bm_close_all_buffers(), which delete all semaphores (put a print statement in 
ss_semaphore_delete() to see this).
- (values saved by cm_set_experiment_semaphore() are stale now).
- (if by luck you have other midas programs still running, the semaphores will not be deleted)
- we are back to cm_connect_experiment1() which will ask for the password, call cm_set_client_info() again and continue as usual
- it will reopen ODB, recreating the ODB semaphore
- (but all the other semaphores are still deleted and values saved by cm_set_experiment_semaphore() are stale)

I tried to improve this by fixing a bug in cm_msg_log() (where the messages are coming from) - it tries to lock the "MSG" 
semaphore, but even if it could not lock it, it continues as usual and even calls an unlock at the end (very bad). For catastrophic 
locking failures like this (semaphore is deleted), we usually abort. But if I abort here, I get completely locked out from odb - odbedit 
crashes right away and there is no way to do any corrective action other than delete odb and reload it from an xml file.

I know that some experiments use this password protection - why/how does it work there?

I think they are okay because they put critical programs like odbedit, mserver, mlogger and mhttpd into "/expt/sec/allowed 
programs". In this case they pass the password check in cm_set_client_info() and the semaphores are not deleted. If any subsequent 
program asks for the password, the semaphores survive because mlogger or mhttpd is already running and keeps semaphores from 
being deleted.

What a mess.

K.O.
Entry  15 Jan 2014, Konstantin Olchanski, Bug Report, MIDAS Web password broken 
The MIDAS Web password function is broken - with the web password enabled, I am not prompted for a 
password when editing ODB. The password still partially works - I am prompted for the web password 
when starting a run. K.O.

P.S. https://midas.triumf.ca/MidasWiki/index.php/Security says "web password" needed for "write access", 
but does not specify if this includes editing odb. (I would think so, and I think I remember that it used to).
    Reply  15 Jan 2014, Konstantin Olchanski, Bug Report, MIDAS password protection is broken 
> I tried to improve this by fixing a bug in cm_msg_log() (where the messages are coming from)

The periodic messages about the broken semaphore actually come from al_check(). I put some whining there, too.

K.O.
    Reply  05 Feb 2014, Stefan Ritt, Bug Report, MIDAS Web password broken 
> The MIDAS Web password function is broken - with the web password enabled, I am not prompted for a 
> password when editing ODB. The password still partially works - I am prompted for the web password 
> when starting a run. K.O.
> 
> P.S. https://midas.triumf.ca/MidasWiki/index.php/Security says "web password" needed for "write access", 
> but does not specify if this includes editing odb. (I would think so, and I think I remember that it used to).

Didn't we agree to put those issues into the bitbucket issue tracker?

This functionality got broken when implementing the new inline edit functionality. Actually one has to "manually" check for the password. The old way 
was that there was a web page asking for the web password, but if we do ODBSet via Ajax there is nobody who could fill out that form. So I added a 
"manual" check in ODBCheckWebPassword(). This will not work for custom pages, but they have their own way to define passwords.

/Stefan
    Reply  05 Feb 2014, Stefan Ritt, Bug Report, MIDAS password protection is broken 
> If you follow the MIDAS documentation for setting up password protection, you will get strange messages:

This is interesting. When I used it last time (some years ago...) it worked fine. I did not touch this, and now it's broken. It must be related to some modifications of the semaphore system. 
Well, anyhow, the problem seems to me to be the db_close_all_databases() and the re-opening of the ODB. Apparently the db_close_database() call does not clean up the semaphores properly. 
Actually there is absolutely no need to close and re-open the ODB upon a wrong password, so I just removed that code and now it works again.

/Stefan
Entry  11 Feb 2014, Andreas Suter, Bug Report, mhttpd, etc. (attachments: arrayIndex_b.jpeg, multiDriverSet.jpeg, multiDriverSet_labels.jpeg, header1.jpeg)
I found a couple of bugs in the current mhttpd, midas version: "93fa5ed"

This concerns all browser I checked (firefox, chrome, internet explorer, opera)

1) When trying to change a value of a frontend using a multi class driver (we
have a lot of them), the field for changing the value appears, but I cannot get it set!
Neither via the two set buttons (why 2?) nor via return.


It also would be nice if the css could be changed such that input/output for
the multi driver would be better separated; something along the lines suggested in
the attached screenshot.

2) When changing a value (generic/hv class driver), the index of the array
remains displayed until the next update of the page.

3) We are using a web-password. In the current version the password is plainly visible while being entered.

4) I just copied the header as described here: https://midas.triumf.ca/elog/Midas/908, but I get a different result:

It looks as if a wrong cookie is filtered?
    Reply  11 Feb 2014, Stefan Ritt, Bug Report, mhttpd, etc. 

> I found a couple of bugs in the current mhttpd, midas version: "93fa5ed"


See my reply on the issue tracker:

https://bitbucket.org/tmidas/midas/issue/18/mhttpd-bugs
Entry  23 Feb 2014, Andre Frankenthal, Bug Report, Installation failing on Mac OS X 10.9 -- related to strlcat and strlcpy 
Hi,

I don't know if this actually fits the Bug Report category. I've been trying to install Midas on my Mac OS 
Mavericks and I keep getting errors like "conflicting types for '___builtin____strlcpy_chk' ..." and similarly for 
strlcat. I googled a bit and I think the problem might be that in Mavericks strlcat and strlcpy are already 
defined in string.h, and so there might be a redundant definition somewhere. I'm not sure what the best 
way to fix this would be though. Any help would be appreciated.

Thanks,

Andre
    Reply  27 Feb 2014, Konstantin Olchanski, Bug Report, Installation failing on Mac OS X 10.9 -- related to strlcat and strlcpy 
> 
> I don't know if this actually fits the Bug Report category. I've been trying to install Midas on my Mac OS 
> Mavericks and I keep getting errors like "conflicting types for '___builtin____strlcpy_chk' ..." and similarly for 
> strlcat. I googled a bit and I think the problem might be that in Mavericks strlcat and strlcpy are already 
> defined in string.h, and so there might be a redundant definition somewhere. I'm not sure what the best 
> way to fix this would be though. Any help would be appreciated.
> 

We have run into this problem - MacOS 10.9 plays funny games with definitions of strlcpy() & co - and it has been fixed since last Summer.

For the record, current MIDAS builds just fine on MacOS 10.9.2.

For a pure test, try the instructions posted at midas.triumf.ca:

cd $HOME
mkdir packages
cd packages
git clone https://bitbucket.org/tmidas/midas
git clone https://bitbucket.org/tmidas/mscb
git clone https://bitbucket.org/tmidas/mxml
cd midas
make

K.O.
    Reply  27 Feb 2014, Andre Frankenthal, Bug Report, Installation failing on Mac OS X 10.9 -- related to strlcat and strlcpy 
> > 
> > I don't know if this actually fits the Bug Report category. I've been trying to install Midas on my Mac OS 
> > Mavericks and I keep getting errors like "conflicting types for '___builtin____strlcpy_chk' ..." and similarly for 
> > strlcat. I googled a bit and I think the problem might be that in Mavericks strlcat and strlcpy are already 
> > defined in string.h, and so there might be a redundant definition somewhere. I'm not sure what the best 
> > way to fix this would be though. Any help would be appreciated.
> > 
> 
> We have run into this problem - MacOS 10.9 plays funny games with definitions of strlcpy() & co - and it has been fixed since last Summer.
> 
> For the record, current MIDAS builds just fine on MacOS 10.9.2.
> 
> For a pure test, try the instructions posted at midas.triumf.ca:
> 
> cd $HOME
> mkdir packages
> cd packages
> git clone https://bitbucket.org/tmidas/midas
> git clone https://bitbucket.org/tmidas/mscb
> git clone https://bitbucket.org/tmidas/mxml
> cd midas
> make
> 
> K.O.

Thanks, it works like a charm now! I must have obtained an outdated version of Midas.

Andre
Entry  07 Jul 2014, Ryu Sawada, Bug Report, mhist does not show history when -s option is used 
When I use the -s option of mhist, it does not show the history. For example:
mhist -s 140705 -p 140707 -e "HV"

And if I remove a line like this:
diff --git a/utils/mhist.cxx b/utils/mhist.cxx
index 930de3b..10cc6ad 100755
--- a/utils/mhist.cxx
+++ b/utils/mhist.cxx
@@ -652,7 +652,6 @@ int main(int argc, char *argv[])
             else if (strncmp(argv[i], "-s", 2) == 0) {
                strcpy(start_name, argv[++i]);
                start_time = convert_time(argv[i]);
-               do_hst_file = true;
             } else if (strncmp(argv[i], "-p", 2) == 0)
                end_time = convert_time(argv[++i]);
             else if (strncmp(argv[i], "-t", 2) == 0)

It works.

Ryu Sawada
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Problem in mfe multithread equipments 
In the ALPHA experiment at CERN I found a problem in the mfe.c handling of multithreaded equipments. This problem was in 
some form introduced around May 2013 and around Aug 2013 (commit 
https://bitbucket.org/tmidas/midas/src/45984c35b4f7/src/mfe.c) (I hope I got it right).

The effect was very odd - if the event rate of the multithreaded equipment was more than 100 Hz, the event counters on the midas 
status page would not increment and the frontend would crash at the end of the run. Other than that, all the events from the 
multithreaded equipment seemed to appear in the SYSTEM buffer and in the data file normally.

This happened: in mfe.c::receive_trigger_event() a loop was introduced (previously,
there was no loop there - there was and still is a loop outside of receive_trigger_event()):

while (1) {
   wait 10 ms for an event
   if there is an event, process it and loop back
   if there is no event, exit the loop
}

Obviously, if the event rate is more than 100 Hz (repetition rate less than 10 ms),
the 10 ms wait will always return an event and we will never exit this loop.

So the mfe.c main loop is now stuck here and will not process any periodic activity
such as updating the equipment statistics (event counters on the midas status page)
or running periodic equipments in the same front end program.

The crash at the end of the run is caused by a timeout in responding to the "end of run" RPC call.

I have a patch in testing that solves this problem by restoring receive_trigger_event() to the original configuration, i.e. 
https://bitbucket.org/tmidas/midas/src/6899b96a4f8177d4af92035cd84aadf5a7cbc875/src/mfe.c?at=develop
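
The restored structure reads at most one event per call and then returns to the main loop, roughly like
this (a sketch, not the literal mfe.c code - eq->rb_handle and process_event() are stand-in names):

#include "midas.h"

int receive_trigger_event(EQUIPMENT *eq)
{
   void *p;

   /* wait up to 10 ms for one event from the readout thread's ring buffer */
   if (rb_get_rp(eq->rb_handle, &p, 10) != DB_SUCCESS)
      return 0;   /* no event: return so periodic activity can run */

   process_event(eq, (EVENT_HEADER *)p);   /* stand-in for the real processing */
   rb_increment_rp(eq->rb_handle, ((EVENT_HEADER *)p)->data_size + sizeof(EVENT_HEADER));
   return 1;      /* handled one event; the main loop decides what to do next */
}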

K.O.
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Hostile network scans against MIDAS RPC ports 
At CERN I see a large number of hostile network scans that seem to be injecting HTTP requests into the 
MIDAS RPC ports. So far, all these requests seem to be successfully rejected without crashing anything, but 
they do clog up midas.log.

The main problem here is that all MIDAS programs have at least one TCP socket open where they listen for 
RPC commands, such as "start of run", "please shutdown", etc. The port numbers of these sockets are 
randomized and that makes them difficult to protect with firewall rules (firewall rules want fixed port 
numbers).

Note that this is different from the hostile network scans that I first saw maybe 5 years ago, which 
affected the mserver main listener socket. Then, as a solution, I hardened the RPC receiver code against 
bad data (and happy to see that this hardening is still holding up) and implemented the mserver "-A" 
command switch to specify a list of permitted peers. Also mserver uses a fixed port number ("-p" switch) 
and is easy to protect with firewall rules.

Since these ports cannot be protected by OS means (firewall, etc), we have to protect them in MIDAS.

One solution is to reject all connections from unauthorized peers.

One way is to implement the "-A" switch to explicitly list all permitted peers; this switch will 
have to be added to all long-running midas programs (mhttpd, mlogger, mfe.c, etc). Not very practical, IMO.

Another way is to read the list of permitted peers from ODB, at startup time, or each time a new connection 
is made.

In the latter case, care needs to be taken to avoid deadlocks. For example remote programs that read ODB 
through the mserver may deadlock if the same mserver is the one trying to establish the RPC connection. 
Or if ODB is somehow locked.

NB - we already keep a list of permitted peers in ODB /Experiment/Security.
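
For illustration, the check could happen right at accept() time, something like this sketch (not the
MIDAS implementation; reading the allowed list from ODB /Experiment/Security is left out):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int accept_if_permitted(int listen_sock, const char **allowed, int n_allowed)
{
   struct sockaddr_in addr;
   socklen_t len = sizeof(addr);
   int sock = accept(listen_sock, (struct sockaddr *)&addr, &len);
   if (sock < 0)
      return -1;

   const char *peer = inet_ntoa(addr.sin_addr);
   for (int i = 0; i < n_allowed; i++)
      if (strcmp(peer, allowed[i]) == 0)
         return sock;            /* permitted peer */

   close(sock);                  /* reject everyone else */
   return -1;
}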

K.O.
    Reply  14 Oct 2014, Konstantin Olchanski, Bug Report, Problem in mfe multithread equipments 
For my reference:
good version: https://bitbucket.org/tmidas/midas/src/6899b96a4f8177d4af92035cd84aadf5a7cbc875/src/mfe.c?at=develop
first breakage: https://bitbucket.org/tmidas/midas/src/c60259d9a244bdcd296a8c5c6ab0b91de27f9905/src/mfe.c?at=develop
second breakage: https://bitbucket.org/tmidas/midas/src/45984c35b4f7257f90515f29116dec6fb46f2ebc/src/mfe.c?at=develop

The "first breakage" may actually be okey, because there the badnik loop loops over ring buffers, not infinite. But I cannot test it anymore.
K.O.
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Problem with EQ_USER 
If you use EQ_USER in mfe.c and have multiple threads writing into the ring buffer, you will have a big 
problem - the thread locking in the ring buffer code only works for a single writer thread and a single 
reader thread.

Presently, it is not clear how to have multiple multithreaded equipments inside one frontend.

During the Summer of 2013 code briefly existed in mfe.c to have an array of ring buffers so that each 
multithreaded equipment could write into its own buffer.

But this code has now been removed; mfe.c can only read from a single ring buffer, and as I noted above, ring 
buffer locking requires that only a single thread writes into it.
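
One workaround (a sketch only, not code that exists in mfe.c) would be to serialize the writers with a
mutex, so that the single-writer assumption of the ring buffer code still holds:

#include <pthread.h>
#include <string.h>
#include "midas.h"

static pthread_mutex_t rb_write_mutex = PTHREAD_MUTEX_INITIALIZER;

/* called from multiple readout threads; only one at a time touches the
   write pointer of the shared ring buffer */
int rb_put_event_locked(int rb_handle, const void *event, int size)
{
   void *wp;
   int status;

   pthread_mutex_lock(&rb_write_mutex);
   status = rb_get_wp(rb_handle, &wp, 1000);   /* wait up to 1 s for space */
   if (status == DB_SUCCESS) {
      memcpy(wp, event, size);
      rb_increment_wp(rb_handle, size);
   }
   pthread_mutex_unlock(&rb_write_mutex);
   return status;
}
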
K.O.
    Reply  14 Oct 2014, Stefan Ritt, Bug Report, Hostile network scans against MIDAS RPC ports 
Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.

At PSI we fortunately do not have these network scans because PSI uses an institute-wide firewall. So you can connect from outside PSI to inside PSI only 
on certain well-defined ports (like SSH to certain machines). You can do the same in ALPHA. Use one computer as a router with two network cards, where 
the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be 
accessed from outside, like port 8080 for mhttpd. This way you block everything except the things which are needed.

/Stefan