I confirm: there is no problem in single-threaded programs, and
there is no problem if all bm_send_event() and bm_flush_cache() calls are made
from the same thread.
> ... instead of struggling with all your locks.
it is better to have midas fully thread safe. ODB has been so for a long time;
the event buffer was partially thread safe (except for this bug), and is now fully thread safe.
without that, the problem still exists, because in many frontends
bm_flush_buffer() is called from the main thread, and will race
against the "bm_send_event() thread", as sketched below. Of course you can do
everything on the main thread, but this opens you up to RPC timeouts
during run transitions (if you sleep in bm_wait_for_free_space()).
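for illustration, the racing pattern looks like this. a minimal sketch assuming the classic midas.h buffer API (bm_compose_event(), bm_send_event(), bm_flush_cache(); exact signatures differ between MIDAS versions, error handling omitted):

   #include "midas.h"
   #include <thread>
   #include <atomic>

   std::atomic<bool> running{true};

   // the "bm_send_event() thread": composes and sends events
   void sender_thread(INT hbuf)
   {
      char buf[sizeof(EVENT_HEADER) + 256];
      EVENT_HEADER *pevent = (EVENT_HEADER *) buf;
      while (running) {
         bm_compose_event(pevent, 1, 0, 256, 0);            // event id 1, trigger mask 0
         bm_send_event(hbuf, pevent, sizeof(buf), BM_WAIT); // goes through the write cache
      }
   }

   // the main thread: periodic flush of the write cache; without a
   // thread-safe event buffer this races against the sender thread above
   void main_loop(INT hbuf)
   {
      std::thread t(sender_thread, hbuf);
      while (running) {
         bm_flush_cache(hbuf, BM_WAIT);
         ss_sleep(100);
      }
      t.join();
   }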
also, the SYSMSG buffer is subject to the same bug. cm_msg() itself is of course
safe to call from anywhere, but cm_msg_flush_buffer() and cm_periodic_tasks()
can be called from any thread, and they issue bm_send_event(SYSMSG),
so there will be mysterious crashes and SYSMSG corruptions, probably
only during message storms, but still!
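the SYSMSG pattern, as a minimal sketch (standard cm_msg()/cm_periodic_tasks() calls; setup and error handling omitted):

   #include "midas.h"
   #include <thread>

   void message_storm_example()
   {
      // worker thread: cm_msg() is safe to call from anywhere, it only queues the message
      std::thread worker([]() {
         cm_msg(MINFO, "worker", "message from the worker thread");
      });

      // main thread: cm_periodic_tasks() flushes queued messages via
      // bm_send_event(SYSMSG) - this is where the race lived
      for (int i = 0; i < 10; i++) {
         cm_periodic_tasks();
         ss_sleep(100);
      }
      worker.join();
   }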
the mhttpd bug should be fixed now (branch feature/buffer_mutex).
simplest way to reproduce:
- run wget against a slow mhttpd page,
- quickly ctrl-C it,
- run a second wget,
- inside mhttpd (by hook or crook) observe that the second wget got the data meant for the first wget.
if you cannot ctrl-C the first wget quickly enough, put a sleep somewhere in the worker thread (in
mongoose_write(), I think).
this is what happens.
1st wget stops (by ctrl-C), its socket is closed, mongoose frees its mg_connection object
(the corresponding worker is still labouring, hmm... actually sleeping, and now has a stale nc pointer).
2nd wget starts, a new socket is opened, mongoose allocates a new mg_connection object,
but malloc() gives it back the same memory we just free()ed, and the 1st wget's worker thread
nc pointer is no longer stale, but points to the 2nd wget's connection.
so we think we are clever and we check the socket file descriptors. but the same thing
happens there, too. if the 1st wget was file descriptor 7, it is closed (the 1st wget worker now has
a stale file handle), then reopened for the 2nd wget, and per POSIX we get back the same
file descriptor 7. the 1st wget worker now has the file handle of the 2nd wget's tcp socket, and
the famous test/crash for "sending data to wrong socket" is defeated (a toy illustration is below).
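(illustrative only: POSIX requires open() to return the lowest unused descriptor, so the closed number comes straight back)

   #include <fcntl.h>
   #include <unistd.h>
   #include <cstdio>

   int main()
   {
      int fd1 = open("/dev/null", O_RDONLY);  // stands in for the 1st wget socket
      printf("fd1=%d\n", fd1);
      close(fd1);                             // 1st wget is ctrl-C'ed
      int fd2 = open("/dev/null", O_RDONLY);  // stands in for the 2nd wget socket
      printf("fd2=%d (same number as fd1)\n", fd2);
      close(fd2);
      return 0;
   }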
now the worker thread for the 1st wget wants to send a reply. it has a valid nc pointer (pointing to the 2nd wget's
mg_connection object) and a valid file descriptor (pointing to the 2nd wget's tcp socket), so the
reply meant for the 1st wget is successfully sent to the 2nd wget. the 2nd wget finishes, its socket
is closed, its mg_connection object is freed. now the worker thread for the 2nd wget has stale
connection info, but this is okay: mongoose does not find a matching connection, the 2nd wget
worker thread's reply goes nowhere, and the thread finishes silently (no memory leaks here, I checked).
so, the connection for the 2nd wget completely impersonates the closed connection of the 1st wget (I guess I could
check the full socket address info, remote ip address, remote port number, etc, but...)
in practice, this bug does not happen often, because modern browsers tend to keep tcp sockets open
for a very long time. (not sure about sundry web proxies, etc).
the solution of course is very simple: match worker thread data to mongoose mg_connection objects
using our own connection sequential numbers, which are unique and very easy to keep track
of through the mongoose event handler; see the sketch below. all this mess runs in the main thread,
so no locking trouble here, small blessing.
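a minimal sketch of this scheme (names are illustrative, not the actual mhttpd code):

   #include <cstdint>
   #include <map>

   struct mg_connection;   // from mongoose.h

   static uint32_t gConnectionSeqNo = 0;   // our own connection counter, never reused
   static std::map<uint32_t, mg_connection *> gConnections;

   // mongoose "accept" event: runs in the main thread, so no locking needed
   uint32_t on_accept(mg_connection *nc)
   {
      uint32_t seqno = ++gConnectionSeqNo;
      gConnections[seqno] = nc;   // remember who owns this seqno
      return seqno;               // workers carry the seqno, not the nc pointer
   }

   // mongoose "close" event: forget the connection
   void on_close(uint32_t seqno)
   {
      gConnections.erase(seqno);  // stale worker replies will no longer match
   }

   // when a worker reply comes back, match it by seqno, not by address
   mg_connection *match_reply(uint32_t seqno)
   {
      auto it = gConnections.find(seqno);
      return (it == gConnections.end()) ? nullptr : it->second;  // nullptr: drop the reply
   }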
Trying to play with a midas file, but I get this error:
[Analyzer,ERROR] [odb.cxx:845:db_validate_name,ERROR] Invalid name "/Analyzer/Tests/low_sum/Rate [Hz]" passed to db_create_key_wlocked: should not contain "["
I'm not sure what sets the name, so I'm not sure how to fix this.
> 1st wget stops (by ctrl-C), socket is closed, mongoose frees it's mg_connection object
> (corresponding worker is still labouring, hmm... actually sleeping, and now has a stale nc pointer)
> 2nd wget starts, new socket is opened, mongoose allocates a new mg_connection object,
> but malloc() gives it back the same memory we just freed(), and the 1st wget's worker thread
> nc pointer is no longer stale, but points to 2nd wget's connection.
Why don't we CLEAR the memory (memset(object,0,sizeof(object))) before the free()? This way it cannot be
mistakenly re-used by the next thread.
> > ... instead of struggling with all your locks.
> it is better to have midas fully thread safe. ODB has been so for a long time,
> event buffer partially (except for this bug), now fully.
> without that the problem still exists, because in many frontends,
> bm_flush_buffer() is called from the main thread, and will race
> against the "bm_send_event() thread". Of course you can do
> everything on the main thread, but this opens you to RPC timeouts
> during run transitions (if you sleep in bm_wait_for_free_space()).
Just for the record: in the mfe.cxx framework, both bm_send_event() and
bm_flush_buffer() are called from the main thread, as can be seen in the mfe.cxx code.
But I agree that having all buffer operations thread safe is a clear benefit.
> > 1st wget stops (by ctrl-C), socket is closed, mongoose frees it's mg_connection object
> > (corresponding worker is still labouring, hmm... actually sleeping, and now has a stale nc pointer)
> > 2nd wget starts, new socket is opened, mongoose allocates a new mg_connection object,
> > but malloc() gives it back the same memory we just freed(), and the 1st wget's worker thread
> > nc pointer is no longer stale, but points to 2nd wget's connection.
> Why don't we CLEAR the memory (memset(object,0,sizeof(object)) before the free(), this way it cannot be
> mistakenly re-used by the next thread.
My description was unclear. I will try to do better now.
When http replies are generated by worker threads, matching of a reply to its mg_connection is done
by checking the address of the mg_connection object. (mongoose itself unhelpfully offers
to send the reply to every mg_connection, see the responder to mg_broadcast() messages).
This works for open/active connections, where the addresses of all mg_connections are unique.
But if a connection is closed and a new connection is opened, the address is reused (whether by malloc()/free()
reusing memory blocks or by mongoose using a pool of mg_connection objects does not matter).
So matching an http reply to its mg_connection using only the address of the mg_connection can match the wrong connection.
(the contents of the mg_connection object do not matter, only the address is used for matching, so zeroing the
mg_connection object with memset() does not help; see the illustration below).
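to illustrate why (a standalone toy, not mhttpd code):

   #include <cstdio>
   #include <cstdlib>
   #include <cstring>

   int main()
   {
      void *a = malloc(64);   // stands in for the 1st mg_connection
      memset(a, 0, 64);       // clearing the contents changes nothing below
      free(a);
      void *b = malloc(64);   // 2nd mg_connection: may get the same address back
      printf("a=%p b=%p%s\n", a, b, (a == b) ? " (address reused!)" : "");
      free(b);
      return 0;
   }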
I saw this during my testing - wrong data was sent to the wrong browser often enough - but I did
not understand that the above problem was happening.
Because I was unable to reliably reproduce the problem, I could not debug it. I tried to add
a check for the tcp socket file descriptor number, in case there was a straight bug, a multithread race
or simple memory corruption. This replaced "we sent wrong data to the wrong browser, poisoned the browser
cache, confused the user" with a crash. This "fix" seemed effective at the time.
Maybe I should mention browser cache poisoning again. What happened is that html pages and rpc replies
were returned as responses to requests for things like CSS files. These bad responses are cached by the browser
pretty much forever, so all subsequent midas pages will look wrong (bad css!) forever, until the
user manually clears the browser cache. Reloading the page did not help, restarting the browser did not help (I think).
So, a very bad bug.
Unfortunately, the check for the file descriptor was not effective, because file descriptors are also
reused. And I did still see wrong data returned by mhttpd, but even more rarely. And everybody (myself
included) complained about mhttpd crashes.
Now, matching of responses to connections is done by a connection sequential/serial number,
which is a unique 32-bit counter. Mismatch of reply to connection should not happen again.
P.S. The latest version of the mongoose web server library does not help with this problem;
the example code for matching reply to connection in their multithreaded example looks bogus.
> Thanks for the investigation. Back in 2020, we had some issues
> of losing data between the system buffer and the logger writing them
> to disk (https://daq00.triumf.ca/elog-midas/Midas/1966). This was polled equipment
> but we had a multithreaded FE running at the same time. Could this be related to the same problem?
I think we will have to follow up on your problem 1966 separately.
I think this bug cannot lose events. writing events to the write cache has correct
locking, no loss here. writing the write cache to shared memory has correct locking,
no loss there. the bug will cause the *next* event in the event buffer to be overwritten;
this will be detected by most programs as shared memory corruption and everybody
will quit. (mhttpd, mserver and odbedit will probably survive).
I guess there could be unlucky corruption that looks like nothing was corrupted,
but this will affect only a few events right at the shared memory read/write
pointer. it so happens that they are the oldest events in the buffer, and likely
mlogger already wrote them to disk. the mlogger read pointer will likely follow
the shared memory write pointer closely, well ahead of the shared memory
read pointer, which always points to the oldest event and is where this bug's corruption would hit.
So no, I do not think this bug can cause event loss between frontend and mlogger.
> > It would be good to pin point where the data is lost. This is the sequence:
> > frontend user code -> mfe.c code -> SYSTEM buffer -> mlogger -> disk
> > To see if correct data arrives to the SYSTEM buffer, run:
> > mdump -z SYSTEM
> > To see if mlogger is receiving events from the SYSTEM buffer, run:
> > mlogger -v ### mlogger should report all events, history and data
> > To see if mlogger writes events to disk, examine the disk file (in this case, you already did, data is not there).
> > I would guess that your data does not make it out of the frontend (mdump shows "nothing");
> > if data were to arrive into the SYSTEM buffer, it would make it to disk, unless
> > mlogger is misconfigured (but you already checked that).
> > If you have trouble with the frontend framework code, you can try to switch from the mfe.c frontend
> > to the newer c++ tmfe frontend (see progs/fetest_tmfe.cxx and progs/fetest_tmfe_thread.cxx).
> > K.O.
> Good evening
> I tried to reproduce the behavior in a very simple FE, but it did not work out.
> The next thing for me would be to take the FE that is producing this behavior
> and replace all the device communication and data with dummies. If the problem is still
> there, I would start to simplify as much as possible.
> Following the inputs of KO, I pin-pointed the data loss. The system buffer still
> gets the data, but the mlogger does not write the data events. Then, of course, the data
> is also no longer present in the data file. Therefore, I checked the logger
> settings again: Event ID and Trigger Mask are still -1. Nothing else, at least from my point of view,
> is misconfigured. Nevertheless, if it helps I can send my ODB settings.
> While doing these tests I found something else that can probably
> give a hint to the problem. The data is only lost if the time between
> two runs is long (a few seconds). As an example: if I run a sequence with a loop,
> and after the FE stops the run the loop ends and the next run is started automatically,
> then only the first run has no data, i.e. the one started after a longer time of
> no data taking. When I add a "WAIT Seconds 5" after the run before starting
> the next one, no data is written to disk for any run. I also found this
> once when adding a sleep(1) at the end of the FE readout function,
> but back then I did not think about it any further.
Looks like this problem fell into the covid crack.
As far as I know, MIDAS does not lose any events between bm_send_event() and the shared memory
buffer. It does not lose any events in the mlogger (unless the "event request" is misconfigured).
(there are lots of opportunities to lose events in complicated frontends).
If you have some evidence otherwise, I would very much like to hear about
it and I want to fix all problems that cause it.
In your previous report I was under the impression that you lose random events here and there,
but your latest report is about mlogger not writing anything at all.
Which case is it?
If you can definitely say that all your events make it to the SYSTEM buffer
but mlogger sometimes does not see some of them and sometimes does not see all of them,
we should look very closely at bm_receive_event() and mlogger itself.
In the case where mlogger is not seeing any events at all (the output file is empty), while this is
happening, I would like to see the output of mdump (to confirm events are written to the SYSTEM
buffer with correct event_id and trigger_mask) and the output of (say)
"manalyzer_test.exe --dump run01161.mid.lz4" on your output file.
If the output is very long, you can email it to me directly instead of posting it here.
I see, now I understand.
As for the browser cache problem: This Chrome extension is your friend:
I use it all the time when I change the CSS or a JS file. Having the "Developer Tools" open in Chrome helps as well
(the cache is then turned off). Firefox has similar extensions.
One idea: we should have a look at mlogger::close_channels(). There the SYSTEM buffer is emptied through the cm_yield() call. Instrumenting this with some debugging code will enlighten us.
Another possible problem: If the frontend requested to be notified of a run stop AFTER the logger, then the problem might happen: the logger closes the file, and THEN the frontend flushes its events, which end up in the SYSTEM buffer and get logged at the beginning of the next run. The mfe.cxx framework takes care of this by calling
cm_register_transition(TR_STOP, 500);
while the mlogger does
cm_register_transition(TR_STOP, tr_stop, 800);
and since 800 > 500, the logger will be called AFTER the frontend. If one uses a framework different from mfe.cxx, this could however be different.
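For reference, the full registration as declared in midas.h also takes the handler function; a sketch with an illustrative handler name:

   #include "midas.h"

   INT tr_stop_frontend(INT run_number, char *error)
   {
      // flush remaining events here, before the logger closes the file
      return CM_SUCCESS;
   }

   void register_stop_handler()
   {
      // frontend: sequence 500, runs before the logger's 800 on TR_STOP
      cm_register_transition(TR_STOP, tr_stop_frontend, 500);
   }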
> One idea: we should have a look at mlogger::close_channels().
> There the SYSTEM buffer is emptied through the cm_yield() call.
> Instrumenting this with some debugging code will enlighten us.
right. this would look like "the last few events are lost at the end of the run".
but that code in the mlogger has not been touched in years; if there were a problem there,
we would have seen it by now. most experiments check that the number
of events in the data file is the same as the number of triggers generated; both
numbers are shown on the midas status page.
> Another possible problem: If the frontend requested to be notified for a run stop AFTER the logger, then the problem might happen: Logger closes file, and THEN the frontend flushes events ending up in the SYSTEM buffer and being logged at the beginning of the next run. The mfe.cxx framework takes care of this by calling
> cm_register_transition(TR_STOP, 500);
default sequence, both mfe.c frontend and c++ tmfe frontend:
start of run:
- mlogger first (configure history, open data file)
- frontends last
- (if any frontend fails, TR_STARTABORT is sent to mlogger to close the output file and "undo" the run start)
end of run:
- frontends first (they must not send any events after processing the TR_STOP RPC call; inside the TR_STOP handler, bm_flush_cache() takes care of the write cache)
- mlogger last
- (if any frontend fails, failure is ignored, run stops regardless)
the order will be wrong only if somebody manually changes it, and whatever order
is set, you can see it on the midas transition page (and with "mtransition -v", "odbedit stop now -v", etc.).
> As for the browser cache problem: This Chrome extension is your friend ...
the reload button becomes a left-click menu, and one left-click option is "clear cache and reload".
(there is no button for "clear cookies and reload", re the recent elog cookie problem).
but this does not help me personally. if midas web pages get confused, I will get confused, too,
and I will spend hours debugging mhttpd before thinking "hmm... maybe I should clear the browser cache!"
not sure about firefox, safari, microsoft edge and opera. if I ever need it, I will google it.
I finally found the problem of why the readout stops after a run transition.
In my dummy frontend, the serial number was not reset to zero at run start.
This leads to a mismatch of the serial number in the function receive_trigger_event() of mfe.cxx:1247,
which results in the function never finding a new event in any of the ring buffers, so nothing gets read out of the buffers (a sketch of the fix is below).
Nevertheless, it would be nice if the system told the user that there is a mismatch in the serial number (by printing a warning / error, etc.).
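A minimal sketch of the fix described above, in an mfe.cxx-style frontend (the counter name is illustrative, not from mfe.cxx):

   #include "midas.h"

   static DWORD event_serial = 0;   // this frontend's own per-run event counter

   INT begin_of_run(INT run_number, char *error)
   {
      event_serial = 0;   // forgetting this reset caused the serial-number mismatch
      return SUCCESS;
   }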
I have a question for anyone experienced with simple CAMAC systems.
My understanding is that for a single-ADC system you can use a gate to generate a
LAM signal for triggering on the ADC.
The driver that I have, "mcstd_libgpmc_camac", has LAM "not implemented", though,
so I'm not sure how I should trigger the DAQ. The frontend code that I have seems to use a TDC
as the trigger for the ADC via an "EQ_POLLED" type equipment setting (a generic sketch of such a poll function appears below). Should I simply plug a TDC into my
system and use it as the trigger? Is it as simple as the TDC generating a signal via the gate and the ADC doing its job?
Sorry if the question is super basic, I'm just confused about how to trigger without a LAM signal.
Thank you :)
UNBC Grad Physics
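For context, a generic sketch of what an EQ_POLLED poll function looks like in mfe.cxx-style frontends; tdc_data_ready() is a hypothetical stand-in for the CAMAC LAM check or a TDC data-ready test:

   #include "midas.h"

   BOOL tdc_data_ready();   // hypothetical: implemented against your hardware

   INT poll_event(INT source, INT count, BOOL test)
   {
      for (INT i = 0; i < count; i++) {
         BOOL ready = tdc_data_ready();   // instead of a CAMAC LAM check
         if (ready && !test)
            return TRUE;                  // mfe.cxx then calls the readout routine
      }
      return FALSE;
   }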
I converted mdump file i/o from the older mdsupport library to the newer midasio library,
and it can now read .mid, .mid.gz, .mid.lz4 and .mid.bz2 files. Output should be
identical to what it printed before; if you see any differences, please report
them here or on bitbucket. K.O.
A commit made on 2022-02-21 fixed a serious bug in ODB.
a multithread race condition against an incorrectly updated shared variable caused removal of
random clients from ODB with the error message:
"My client index %d in ODB is invalid: out of range 0..%d. Maybe this client was removed by a
timeout, see midas.log. Cannot continue, aborting..."
the race is between db_open_database() in one program (executed when any midas program starts) and
db_get_my_client_locked() in all running midas programs.
as long as no midas programs are started (db_open_database() is not executed), this bug does not happen.
if odbedit is executed very often, e.g. from a script, the probability of hitting this bug becomes significant.
while debugging something else, I ran into a bit of trouble in mlogger.
I set the mlogger event limit to 100, and after reaching 100 events, mlogger
said "stopping run", but nothing happened, the run kept going.
it turns out mlogger tried stopping the run too soon: the run-start
transition had not finished yet, and the error message about trying
to stop a run while another transition is in progress was missing.
(fixed - if another transition is in progress, we try again later)
it also turns out that cm_transition() checks if another transition
is in progress way too late, all the way in the transition thread,
where it cannot return an error to mlogger.
(fixed - this check is now the first thing done in cm_transition()).
while debugging this, I tested the ODB flags "/Logger/Async transitions"
and "/Logger/Multithread transitions". It turns out only two transition
types still work from inside mlogger - the multithread transition
and the detached transition (via the mtransition helper).
the issue is the dead lock between mlogger and the frontend. while mlogger
is inside cm_transition(), it is not reading the SYSTEM buffer,
while at the same time the frontends are writing into it. If the SYSTEM
buffer happens to be pretty full, we dead lock - frontends waiting
for free space in the SYSTEM buffer do not respond to RPCs, and mlogger is not
reading from the SYSTEM buffer and is stuck trying to issue the "run stop" RPC
to the frontend. (this dead lock is not forever: eventually the frontend
is killed by an RPC timeout, mlogger survives and stops the run).
this is a well known problem, and as a solution, mlogger has been using
multithreaded transitions for years.
now I removed the ODB /Logger/Async transitions and /Logger/Multithread
transitions flags; instead, there is now a flag /Logger/Detached transitions,
set to FALSE by default. Setting it to TRUE will cause mlogger to fork
"mtransition STOP" and "mtransition START" for stopping and starting runs;
this is useful in case there is trouble with multithreading in mlogger (see the example below).
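for example, the flag can be switched on from the shell (a sketch, assuming the usual odbedit "set" syntax for BOOL values):

   odbedit -c 'set "/Logger/Detached transitions" y'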
Does anybody have an idea what the maximum ODB size can be? In the old days, linux
kernels had a severe limit on shared memory of usually 8MB, but in the age of
64GB RAM being standard, we should be able to grow bigger. I tried
odbinit -s 1024MB --cleanup
which went through without complaint, and even put that value into .ODB_SIZE.TXT, but
when I started odbedit and ran "mem", I only saw a size of 1MB. Probably somewhere
deep inside we have a limit which prevents the user from creating very large ODBs,
but this should be mentioned more prominently in odbinit, like "size too large,
maximum allowed is xxx MB".
> Anybody some idea what the maximum ODB size can be?
It turns out the ODB size limit is hardwired in db_open_database() at 100 Mbytes.
I have now committed an improved error message for this.
I confirm that "odbinit -s 100MB" works and creates an ODB with a 50 Mbyte data area and a 50
Mbyte key area.
> in the age of 64GB RAM being a standard, we should be able to grow bigger ...
I agree. I think we can safely bump the limit from 100 Mbytes to 1 GByte, maybe 1.5 or
1.99 GBytes. Above that we run into 32-bit/31-bit cleanliness problems.
And creating an extra large 1 GByte ODB while using only a few megabytes will not waste any
RAM, because the .ODB.SHM file is demand-paged and unused parts of ODB will not be
mapped into RAM. (It will waste disk space, as the .ODB.SHM file will be 1 GByte in size.)
However, 1 GByte (FPGA based) and 4-8 GByte (Raspberry Pi & co) machines are again
becoming popular and relevant for running MIDAS, and they have very slow "disk"
subsystems, with NAND, SD and USB flash, so we should not go crazy here.
> odbinit -s 1024MB --cleanup
there is a bug in odbinit: if the initial odbinit fails, an ODB with the default size is created,
and the original rejected ODB size is written to .ODB_SIZE.TXT (an inconsistency).
bitbucket bug 328
> [ how do I resize ODB ??? ]
we need odbresize. bitbucket bug 329.
> > > > odbedit can now save ODB in JSON-formatted files.
> > Values of types [...] Edm.Single, Edm.Double, and Edm.Decimal are represented as JSON numbers,
> > except for NaN, INF, and -INF, which are represented as strings "NaN", "INF" and "-INF".
Per xkcd, there is a new json standard, "json5". Among other things, the numeric
values NaN, +Infinity and -Infinity are encoded as literals NaN, Infinity and -Infinity (without quotes), for example:
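(an illustrative fragment, legal json5 but not legal json:)

   { "value": 1.5, "bad": NaN, "plus": Infinity, "minus": -Infinity }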
Good discussion of this mess here: