25 Nov 2025, Konstantin Olchanski, Bug Fix, fixed db_find_keys()
|
Function db_find_keys(), added by an unnamed person in April 2020, never worked correctly. It is now
fixed, and the unsafe strcpy() was replaced by mstrlcpy().
This function is used by the msequencer ODBSet function and by the odbedit "set" command.
In many cases it incorrectly returned DB_NO_KEY; only two of the four use cases below actually worked:
set runinfo/state 1 <--- no match pattern - works
set run*/state 1 <--- match multiple subdirectories - works
set runinfo/stat* 1 <--- bombs out with DB_NO_KEY
set run*/stat* 1 <--- bombs out with DB_NO_KEY
All four use cases now work.
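For reference, a rough sketch of calling it directly from C++ follows; note that the vector-of-handles
prototype below is my assumption, the exact signature should be checked in odb.cxx:

// sketch only: resolve a wildcard ODB path into key handles, then set each value
// (assumed prototype, check odb.cxx: db_find_keys(hDB, hKeyRoot, path, std::vector<HNDLE>&))
std::vector<HNDLE> keys;
char path[] = "/Runinfo/Stat*";
if (db_find_keys(hDB, 0, path, keys) == DB_SUCCESS) {
   for (HNDLE h : keys) {
      INT value = 1;
      db_set_data(hDB, h, &value, sizeof(value), 1, TID_INT);
   }
}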
commit b5b151c9bc174ca5fd71561f61b4288c40924a1a
K.O. |
25 Nov 2025, Konstantin Olchanski, Bug Fix, ODB update, branch feature/db_delete_key merged into develop
|
> Thanks for the fixes, which I all approve.
>
> There is still a "follow_links" in midas_c_compat.h line 70 for Python. Probably Ben has to look into that. Also
> client.py has it.
Correct, Ben will look at this on the python side.
And I will update mvodb soon and fix it there.
K.O. |
25 Nov 2025, Konstantin Olchanski, Bug Report, Cannot edit values in a subtree containing only a single array of BOOLs using the ODB web interface
|
> Thanks --- it looks like this commit resolves the issue for us ...
> Thanks to Ben Smith for pointing us at exactly the right commit
I would like to take this opportunity to encourage everybody to report bug fixes like this one to this mailing list.
This looks like a serious bug; many MIDAS users would like to know when it was introduced, when it was found, when it was fixed,
and who deserves the credit.
K.O. |
25 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
>
> > I notice the cmake does not actually pass "-std=c++17" to the c++ compiler, and on U-20, it is likely
> > the default c++14 is used. cmake always does the wrong thing and this will need to be fixed later.
>
> set(CMAKE_CXX_STANDARD 17)
> set(CMAKE_CXX_STANDARD_REQUIRED ON)
>
We used to have this; it is not there now.
>
> Get it to do the right thing?
>
Unlikely. As Stefan reported, asking cmake for C++17 yields -std=gnu++17, which is close enough, but not the
same thing.
For now it does not matter: U-22 and U-24 are c++17 by default, so if somebody accidentally commits c++20
code, builds will fail, somebody will catch it and complain, and the weekly bitbucket build will bomb out.
On U-20 the default is c++14, so builds will start bombing out as soon as we commit some c++17 code.
el7 builds have not worked for some time now (a bizarre mysterious error).
el8, el9, el10 are likely in the same situation as Ubuntu.
macos, not sure.
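If we want an accidental build with an older standard to fail loudly instead of silently compiling,
a one-line guard could go into a central header; just a sketch, nothing like this exists in MIDAS right now:

#if __cplusplus < 201703L
#error MIDAS requires a C++17 compiler (pass -std=c++17 or newer)
#endif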
K.O. |
25 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
> target_compile_features(<target> PUBLIC cxx_std_17)
> which correctly causes a
> c++ -std=gnu++17 ...
I think this is set in a couple of places, yet I do not see any -std=xxx flag passed to the compiler.
(and I am not keen on spending a full day fighting cmake)
(btw, -std=c++17 and -std=gnu++17 are not the same thing, I am not sure how well GNU extensions are supported on
macos)
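A quick way to see what the compiler actually received is a throwaway test program (not part of MIDAS);
with gcc and clang, strict -std=c++17 defines __STRICT_ANSI__ while -std=gnu++17 does not:

#include <cstdio>
int main() {
#ifdef __STRICT_ANSI__
   std::printf("__cplusplus=%ld, strict ISO mode (e.g. -std=c++17)\n", (long)__cplusplus);
#else
   std::printf("__cplusplus=%ld, GNU extensions enabled (e.g. -std=gnu++17)\n", (long)__cplusplus);
#endif
   return 0;
}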
K.O. |
25 Nov 2025, Konstantin Olchanski, Suggestion, manalyzer root output file with custom filename including run number
|
Hi, Jonas, thank you for reminding me about this. I hope to work on manalyzer in the next few weeks and I will review the ROOT output file name scheme.
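For illustration only (this is not what manalyzer does today), the requested scheme boils down to
expanding a user-supplied pattern with the run number at begin-of-run, roughly like this sketch:

#include <cstdio>
#include <string>
// sketch only: expand a pattern such as "/data/experiment1_%06d.root" (from -O) with the run number
std::string MakeOutputFilename(const char* user_pattern, int run_number)
{
   char fname[256];
   std::snprintf(fname, sizeof(fname), user_pattern, run_number);
   return fname;   // e.g. "/data/experiment1_000123.root"
}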
K.O.
> Hi all,
>
> Could you please get back to me about whether something like my earlier suggestion might be considered, or if I should set up some workaround to rename files at EOR for our experiments?
>
> https://daq00.triumf.ca/elog-midas/Midas/3042 :
> -----------------------------------------------
> > Hi all,
> >
> > Would it be possible to extend manalyzer to support custom .root file names that include the run number?
> >
> > As far as I understand, the current behavior is as follows:
> > The default filename is ./root_output_files/output%05d.root , which can be customized by the following two command line arguments.
> >
> > -Doutputdirectory: Specify output root file directory
> > -Ooutputfile.root: Specify output root file filename
> >
> > If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
> >
> > I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.
> >
> > Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
> > https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
> >
> > Above code would allow the following call syntax: ' ./manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered '
> > But note that as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root .
> >
> > So a safer, but less flexible option might be to instead have the user provide only a prefix, and then attach %05d.root in the code.
> >
> > Thank you for considering these suggestions! |
25 Nov 2025, Konstantin Olchanski, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> One aspect of the MIDAS history plotting I find frustrating is the sequence for adding a new history
> variable and then plotting them. ...
this has been a problem in MIDAS for a very long time; we have tried to fix/streamline/improve
it many times and obviously failed, many times.
this is what must happen when adding a new history variable:
1) new /eq/xxx/variables/vvv entry must show up in ODB
1a) add the code for the new data to the frontend
1b) start the frontend
1c) if the new variable is added in the frontend init() method, it will be created in ODB, done (see the sketch further down in this message).
1d) if the new variable is added by the event readout code (i.e. via a MIDAS event data bank automatically
written to ODB by the RO_ODB flag), then we need to start a run.
1e) if this is not a periodic event, but a beam event or laser event or some other triggered event, we must
also turn on the beam, turn on the laser, etc.
1z) observe that the ODB entry exists
2) mlogger must discover this new ODB entry:
2a) mlogger used to rescan ODB each time something in ODB changed, this code was removed
2b) mlogger used to rescan ODB each time a new run was started, this code was removed
2c) mlogger rescans ODB each time it is restarted, this still works.
so the sequence is like this: modify the frontend, restart the frontend, start a run, stop the run, observe that the odb entry is
created, restart mlogger, observe that new mhf files are created in the history directory.
3) mhttpd must discover that a new mhf file now exists, read its header to discover the history event and
variable names, and make them available to the history panel editor.
it is not clear to me that this part currently works:
3a) mhttpd caches the history event list and will not see new variables unless this cache is updated.
3b) when the web history panel editor is opened, it is supposed to tell mhttpd to update this cache. I am
pretty sure it worked when I wrote this code...
3c) but obviously it does not work now.
restarting mhttpd obviously makes it load the history data anew, but there is no button to make it happen
on the MIDAS web pages.
so it sounds like I have to sit down and at least retest this whole scheme to see that it works at least
in some way.
then try to improve it:
a) the frontend dance in (1) is unavoidable
b) mlogger must be restarted; I think Stefan and I agree on this. In theory we could add a web page
button to call an mlogger RPC and have it reload the history, but this button already exists: it's called
"restart mlogger".
c) a newly created history event should automatically show up in the history panel editor without any
additional user action
d) document the two intermediate debugging steps:
d1) check that the new variable was created in ODB
d2) check that mlogger created (and writes to) the new history file
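for reference, the sketch promised in 1c; equipment and variable names are made up, not from any real frontend:

// step 1c: create the ODB entry from the frontend init code
double my_var = 0;
db_set_value(hDB, 0, "/Equipment/MyEquipment/Variables/MyVar",
             &my_var, sizeof(my_var), 1, TID_DOUBLE);
// step d1: check the result with odbedit: ls /Equipment/MyEquipment/Variables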
this is how I see it and I am open to suggestions, changes, improvements, etc.
K.O. |
14 Feb 2020, Konrad Briggl, Forum, Writing Midas Events via FPGAs
|
Hello Stefan,
is there a difference for the later data processing (after writing the ring buffer blocks)
if we write single events or multiple events in one rb_get_wp - memcpy - rb_increment_wp cycle?
Both Marius and I have seen some inconsistencies in the number of produced events reported on the status page when writing multiple events in one go,
so I was wondering if this is due to us treating the buffer badly or due to the way midas handles the events after that.
Given that we produce the full event in our (FPGA) domain, an option would be to always copy one event from the DMA buffer to the midas system buffer in a loop.
The question is whether there is a difference (for midas) between
[pseudo code, much simplified]

// first variant: copy one event per rb_get_wp / rb_increment_wp cycle
while (dma_read_index < last_dma_write_index) {
   if (rb_get_wp(pdata) != SUCCESS) {
      dma_read_index += event_size;
      continue;
   }
   copy_n(dma_buffer, pdata, event_size);
   rb_increment_wp(event_size);
   dma_read_index += event_size;
}

and

// second variant: copy as many complete events as fit into one ring buffer block
while (dma_read_index < last_dma_write_index) {
   if (rb_get_wp(pdata) != SUCCESS) {
      ...
   }
   total_size = max_n_events_that_fit_in_rb_block();
   copy_n(dma_buffer, pdata, total_size);
   rb_increment_wp(total_size);
   dma_read_index += total_size;
}
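For concreteness, the first variant (one event per cycle) combined with the rb_get_wp retry from your
reply below would look roughly like this (placeholder names, not tested):

while (dma_read_index < last_dma_write_index) {
   void *pdata;
   int status;
   do {                                   // wait until the ring buffer has free space
      status = rb_get_wp(rbh, &pdata, 10);
      if (status == DB_TIMEOUT)
         ss_sleep(10);
   } while (status == DB_TIMEOUT);
   copy_n(dma_buffer + dma_read_index, pdata, event_size);  // exactly one event per cycle
   rb_increment_wp(rbh, event_size);      // event_size in bytes
   dma_read_index += event_size;
}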
Cheers,
Konrad
> The rb_xxx functions are (thoroughly tested!) robust against high data rate given that you use them as intended:
>
> 1) Once you create the ring buffer via rb_create(), specify the maximum event size (overall event size, not bank size!). Later there is no protection any more, so if you obtain pdata from rb_get_wp, you can of course write 4GB to pdata, overwriting everything in your memory, causing a total crash. It's your responsibility to not write more bytes into pdata than
> what you specified as max event size in rb_create()
>
> 2) Once you obtain a write pointer to the ring buffer via rb_get_wp, this function might fail when the receiving side reads data slower than the producing side, simply because the buffer is full. In that case the producing side has to wait until space is freed up in the buffer by the receiving side. If your call to rb_get_wp returns DB_TIMEOUT, it means that the
> function did not obtain enough free space for the next event. In that case you have to wait (like ss_sleep(10)) and try again, until you succeed. Only when rb_get_wp() returns DB_SUCCESS, you are allowed to write into pdata, up to the maximum event size specified in rb_create of course. I don't see this behaviour in your code. You would need something
> like
>
> do {
> status = rb_get_wp(rbh, (void **)&pdata, 10);
> if (status == DB_TIMEOUT)
> ss_sleep(10);
> } while (status == DB_TIMEOUT);
>
> Best,
> Stefan
>
>
> > Dear all,
> >
> > we are creating Midas events directly inside an FPGA and send them off via DMA into the PC RAM. For reading out this RAM via Midas, the FPGA sends a pointer to where it has written the last 4kB of data. We use this pointer for telling the ring buffer of midas where the new events are. The buffer looks something like:
> >
> > // event 1
> > dma_buf[0] = 0x00000001; // Trigger and Event ID
> > dma_buf[1] = 0x00000001; // Serial number
> > dma_buf[2] = TIME; // time
> > dma_buf[3] = 18*4-4*4; // event size
> > dma_buf[4] = 18*4-6*4; // all bank size
> > dma_buf[5] = 0x11; // flags
> > // bank 0
> > dma_buf[6] = 0x46454230; // bank name
> > dma_buf[7] = 0x6; // bank type TID_DWORD
> > dma_buf[8] = 0x3*4; // data size
> > dma_buf[9] = 0xAFFEAFFE; // data
> > dma_buf[10] = 0xAFFEAFFE; // data
> > dma_buf[11] = 0xAFFEAFFE; // data
> > // bank 1
> > dma_buf[12] = 0x46454231; // bank name
> > dma_buf[13] = 0x6; // bank type TID_DWORD
> > dma_buf[14] = 0x3*4; // data size
> > dma_buf[15] = 0xAFFEAFFE; // data
> > dma_buf[16] = 0xAFFEAFFE; // data
> > dma_buf[17] = 0xAFFEAFFE; // data
> >
> > // event 2
> > .....
> >
> > dma_buf[fpga_pointer] = 0xXXXXXXXX;
> >
> >
> > And we do something like:
> >
> > while{true}
> > // obtain buffer space
> > status = rb_get_wp(rbh, (void **)&pdata, 10);
> > fpga_pointer = fpga.read_last_data_add();
> >
> > wlen = last_fpga_pointer - fpga_pointer; // in 32-bit words
> > copy_n(&dma_buf[last_fpga_pointer], wlen, pdata);
> > rb_status = rb_increment_wp(rbh, wlen * 4); // in bytes
> >
> > last_fpga_pointer = fpga_pointer;
> >
> > Leaving out the case where the dma_buf wraps around, this works fine for a small data rate. But if we increase the rate, the fpga_pointer also increases really fast and wlen gets quite big. Actually it gets bigger than max_event_size, which is checked in rb_increment_wp, leading to an error.
> >
> > The problem is that a single event is actually not too big, but since we have multiple events in the buffer, they are read by midas in one step. So we think that in this case the function rb_increment_wp is comparing the wrong thing. Also increasing the max_event_size does not help.
> >
> > Remark: dma_buf is volatile so memcpy is not possible here.
> >
> > Cheers,
> > Marius |
02 Mar 2007, Kevin Lynch, Forum, event builder scalability
|
> Hi there:
> I have a question if there's anybody out there running MIDAS with event builder
> that assembles events from more than just a few front ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
>
> Cheers
> Piotr
Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate. If I'm remembering
correctly, the event builder handles about 30-40MB/s. You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details. Volodya solved a significant number of throughput related
bottlenecks in the year leading up to our 2006 run. |
15 Dec 2016, Kevin Giovanetti, Bug Report, midas.h error
|
I am creating a frontend on macOS Sierra. I include the midas.h file, and when compiling
with Xcode I get an error based on this entry in the midas.h include:
#if !defined(OS_IRIX) && !defined(OS_VMS) && !defined(OS_MSDOS) && \
    !defined(OS_UNIX) && !defined(OS_VXWORKS) && !defined(OS_WINNT)
#error MIDAS cannot be used on this operating system
#endif
Perhaps I should not use Xcode?
Perhaps I won't need Midas.h?
The MIDAS system is running on my Mac, but I need to add a very simple frontend
for testing and I encountered this error. |
30 Oct 2018, Joseph McKenna, Bug Report, Side panel auto-expands when history page updates
|
One can collapse the side panel when looking at history pages with the button in
the top left, great! We want to see many pages, so screen real estate is important.
The issue we face is that when the page refreshes, the side panel expands. Can
we make the panel state more 'sticky'?
Many thanks
Joseph (ALPHA)
Version: 2.1
Revision: Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
on branch feature/midas-2017-10 |
31 Oct 2018, Joseph McKenna, Bug Report, Side panel auto-expands when history page updates
|
> >
> >
> > One can collapse the side panel when looking at history pages with the button in
> > the top left, great! We want to see many pages so screen real estate is important
> >
> > The issue we face is that when the page refreshes, the side panel expands. Can
> > we make the panel state more 'sticky'?
> >
> > Many thanks
> > Joseph (ALPHA)
> >
> > Version: 2.1
> > Revision: Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
> > on branch feature/midas-2017-10
>
> Hi Joseph,
>
> In principle a page refresh should now not be necessary, since pages should automatically reload
> the content that changes. If a custom page needs a reload, it is not well designed. If necessary, I
> can explain the details.
>
> Anyhow I implemented your "stickyness" of the side panel in the last commit to the develop branch.
>
> Best regards,
> Stefan
Hi Stefan,
I apologise for misusing the word refresh. The re-appearing sidebar was also seen with the automatic
reload. I have implemented your fix here and it now works great!
Thank you very much!
Joseph |
14 Oct 2019, Joseph McKenna, Forum, tmfe.cxx - Future frontend design
|
Hi,
I have been looking at the 2019 workshop slides; I am interested in the C++ future of MIDAS.
I am quite interested in using the object-oriented approach of tmfe.cxx.
ALPHA will start data taking in 2021. |
18 Oct 2019, Joseph McKenna, Info, sysmon: New system monitor and performance logging frontend added to MIDAS
|
I have written a system monitor tool for MIDAS, which has been merged into the develop branch today: sysmon
https://bitbucket.org/tmidas/midas/pull-requests/8/system-monitoring-a-new-frontend-to-log/diff
To use it, simply run the new program
sysmon
on any host that you want to monitor, no configuring required.
The program is a MIDAS frontend; there is no need for configuration, as upon initialisation it builds a history display for you. Simply run one instance per machine you want to monitor. By default, it logs only once per 10 seconds.
The equipment name is derived from the hostname, so multiple instances can be run across multiple machines without conflict. A new history display will be created for each host.
sysmon uses the /proc pseudo-filesystem, so unfortunately only linux is supported. It does however work with multiple architectures, so x86 and ARM processors are supported.
If the build machine has NVIDIA drivers installed, an additional version of sysmon gets built: sysmon-nvidia. This will log the GPU temperature and usage, as well as CPU, memory and swap. A host should run either sysmon or sysmon-nvidia, not both.
elog:1727/1 shows the History Display generated by sysmon-nvidia. sysmon would only generate the first two displays (sysmon/localhost and sysmon/localhost-CPU) |
03 Dec 2019, Joseph McKenna, Info, mfe.c: MIDAS frontend's 'Equipment name' can embed hostname, determined at run-time
|
A little-advertised feature of the modifications needed to support the msysmon program is
that MIDAS equipment names can embed the hostname of the system
running the frontend, determined at run-time (in register_equipment(void)).
https://midas.triumf.ca/MidasWiki/index.php/Equipment_List_Parameters#Equipment_Name
A special string ${HOSTNAME} can be put in any position in the equipment name. It will
be replaced with the hostname of the computer running the frontend at run-time. Note,
the frontend_name string will be trimmed down to 32 characters.
Example usage: msysmon
EQUIPMENT equipment[] = {
   {"${HOSTNAME}_msysmon",   /* equipment name */
      {EVID_MONITOR, 0,      /* event ID, trigger mask */
       "SYSTEM",             /* event buffer */
       EQ_PERIODIC,          /* equipment type */
       0,                    /* event source */
       "MIDAS",              /* format */
       TRUE,                 /* enabled */
       RO_ALWAYS,            /* read always (running, stopped, paused) */
       10000,                /* read every 10000 ms */
       0,                    /* stop run after this event limit */
       0,                    /* number of sub events */
       1,                    /* history period */
       "", "", ""},
      read_system_load,      /* readout routine */
   },
   {""}
}; |
01 May 2020, Joseph McKenna, Forum, Taking MIDAS beyond 64 clients
|
Hi all,
I have been experimenting with a frontend solution for my experiment
(ALPHA). The intention is to replace how we log data from PCs running LabVIEW.
I am at the proof of concept stage. So far I have some promising
performance, able to handle 10-100x more data in my test setup (current
limitations now are just network bandwidth; MIDAS is impressively efficient).
==========================================================================
Our experiment has many PCs using LabVIEW which all log to MIDAS; the
experiment has grown such that we need some sort of load balancing in our
frontend.
The concept was to have a 'supervisor frontend' and an array of 'worker
frontend' processes.
-A LabVIEW client would connect to the supervisor, then be referred to a
worker frontend for data logging.
-The supervisor could start a 'worker frontend' process as the demand
required.
To increase accountability within the experiment, I intend to have a 'worker
frontend' per PC connecting. Then any rogue behavior would be clear from the
MIDAS frontpage.
Presently there around 20-30 of these LabVIEW PCs, but given how the group
is growing, I want to be sure that my data logging solution will be viable
for the next 5-10 years. With the increased use of single board computers, I
chose the target of benchmarking up to 1000 worker frontends... but I quickly
hit the '64 MAX CLIENTS' and '64 RPC CONNECTION' limit. Ok...
branching and updating these limits:
https://bitbucket.org/tmidas/midas/branch/experimental-beyond_64_clients
I have two commits.
1. update the memory layout assertions and use MAX_CLIENTS as a variable
https://bitbucket.org/tmidas/midas/commits/302ce33c77860825730ce48849cb810cf366df96?at=experimental-beyond_64_clients
2. Change the MAX_CLIENTS and MAX_RPC_CONNECTION
https://bitbucket.org/tmidas/midas/commits/f15642eea16102636b4a15c84113309696ce3df1?at=experimental-beyond_64_clients
Unintended side effects:
I break compatibility of existing ODB files... the database layout has
changed and I read my old ODB as corrupt. In my test setup I can start from
scratch but this would be horrible for any existing experiment.
Edit: I noticed 'make testdiff' pipeline is failing... also fails locally...
investigating
Early performance results:
In early tests, ~700 PCs logging 10 unique arrays of 10 doubles into
Equipment variables in the ODB seems to perform well... All transactions
from client PCs are finished within a couple of ms or less
==========================================================================
Questions:
Does the community here have strong opinions about increasing the
MAX_CLIENTS and MAX_RPC_CONNECTION limits?
Am I looking at this problem in a naive way?
Potential solutions other than increasing the MAX_CLIENTS limit:
-Make worker threads inside the supervisor (not a separate process); I am
using TMFE, so I can dynamically create equipment. I have not yet taken a
deep dive into how any multithreading is implemented
-One could have a round-robin system to load balance between a limited pool
of 'worker frontend' processes. I don't like this solution as I want to be
able to clearly see which client PCs have been set up to log too much data
========================================================================== |
02 May 2020, Joseph McKenna, Forum, Taking MIDAS beyond 64 clients
|
Thank you very much for feedback.
I am satisfied with not changing the 64 client limit. I will look at re-writing my frontend to spawn threads rather than
processes. The load of my frontend is low, so I do not anticipate issues with a threaded implementation.
In this threaded scenario, it will be a reasonable amount of time until ALPHA bumps into the 64 client limit.
If it avoids confusion, I am happy for my experimental branch 'experimental-beyond_64_clients' to be deleted.
Perhaps an item for future discussion would be for the odbinit program to be able to 'upgrade' the ODB and enable some backwards
compatibility.
Thanks again
Joseph |
19 Nov 2020, Joseph McKenna, Forum, History plot consuming too much memory
|
A user reported an issue that if they were to plot some history data from
2019 (a range of one day), the plot would spend ~4 minutes loading then
crash the browser tab. This seems to affect chrome (under default settings)
and not firefox.
I can reproduce the issue, "Data Being Loaded" shows, then the page and
canvas loads, then all variables get a correct "last data" timestamp, then
the 'Updating data ...' status shows... then the tab crashes (chrome)
It seems that the browser is loading all data until the present day (maybe 4
GB of data in this case). In chrome the tab then crashes. In firefox, I do
not suffer the same crash, but I can see the single tab is using ~3.5 GB of
RAM.
Tested with midas-2020-08-a up until the HEAD of develop
I could propose that the user use firefox or increase the memory limit in
chrome; however, are there plans to limit the data loaded when specifically
plotting between two dates? |
20 Nov 2020, Joseph McKenna, Forum, History plot consuming too much memory
|
Poking at the behavior of this, it's fairly clear the slow response is from the data
being loaded off an HDD; when we upgrade this system we will allocate enough SSD
storage for the histories.
Using Firefox has resolved this issue for the user's project here.
Taking this down a tangent, I have a mild concern that a user could temporarily
flood our gigabit network if we do have faster disks to read the history data. Have
there been any plans or thoughts on limiting the bandwidth users can pull from
mhttpd? I do not see this as a critical item as I can plan the future network
infrastructure at the same time as the next system upgrade (putting critical data
taking traffic on a separate physical network).
> Of course one can only
> load that specific window, but when the user then scrolls right, one has to
> append new data to the "right side" of the array stored in the browser. If the
> user jumps to another location, then the browser has to keep track of which
> windows are loaded and which windows not, making the history code much more
> complicated. Therefore I'm only willing to spend a few days of solid work
> if this really becomes a problem.
For now the user here has retrieved all the data they need, and I can direct others
towards mhist in the near future. Being able to load just a specific window would be
very useful in the future, but I comprehend how it would be a spike in complexity. |
27 May 2021, Joseph McKenna, Info, MIDAS Messenger - A program to send MIDAS messages to Discord, Slack and/or Mattermost
|
I have created a simple program that parses the message buffer in MIDAS and
sends notifications by webhook to Discord, Slack and/or Mattermost.
Active pull request can be found here:
https://bitbucket.org/tmidas/midas/pull-requests/21
It's written in Python and CMake will install it in bin (if the Python3 binary
is found by cmake). The only dependency outside of the MIDAS Python library is
'requests'; full documentation is in mmessenger.md |