26 Nov 2025, Lars Martin, Suggestion, mvodb WS and family type matching
|
This is not a bug per se, but I find it a little odd that the MVOdb functions RS,
RSA, RSAI, and WSA use std::string as their type, while WS and WSAI use const
char*.
Seems to me like simple overloading a la
void WS(const char* varname, const std::string v, MVOdbError* error = NULL){
WS(varname, v.c_str(), v.size(), error);
}
should be all that's needed, right? |
27 Nov 2025, Konstantin Olchanski, Suggestion, mvodb WS and family type matching
|
> This is not a bug per se, but I find it a little odd that the MVOdb functions RS,
> RSA, RSAI, and WSA use std::string as their type, while WS and WSAI use const
> char*
>
> Seems to me like simple overloading a la
> void WS(const char* varname, const std::string v, MVOdbError* error = NULL){
> WS(varname, v.c_str(), v.size(), error);
> }
>
> should be all that's needed, right?
No short answer to this one.
This situation is an excellent example of c++ bloat. Reduced to bare basics:
1) "naive" c++ code:
void foo(std::string xxx) { ... };
int main() { foo("bar"); }
nominally:
a new string object is created to hold "bar"
a new string object is copy-created to pass it as argument to foo()
result:
two object creations (two calls to malloc + constructors)
plus memcpy() of string data. (compiler may or may not optimize the 2nd string)
2) "advanced" c++ code:
void foo(const std::string& xxx) { ... };
int main() { foo("bar"); }
copy-created 2nd string is avoided, but a string object to hold "bar" still must be
made, 1 malloc(), 1 memcpy().
3) "pure C" code:
void foo(const char* xxx) { ... };
int main() { foo("bar"); }
address of "bar" (placed in read-only memory) is passed in a register, no malloc(), no
memcpy(), nada, zilch.
One can argue that bloat does not matter, "just buy a bigger computer".
This ignores the fact that malloc() is quite expensive, nominally requires taking a
mutex, and suddenly multiple threads calling foo() are unexpectedly serialized against
the malloc() internal mutex.
I guess you can have an advanced malloc() that uses per-thread memory pools, but now
instead of deterministic "always take a lock", we have non-deterministic "take a lock
sometimes, when per-thread memory pools decide to jockey for more memory".
This type of non-deterministic behaviour is bad for real-time applications.
Ultimately it boils down to personal style, I prefer "C-like" efficiency and
transparency, when I call foo() it is obvious there will be no hidden malloc(), no
hidden mutex.
I guess mvodb could have a "const std::string&" version of each "const char*" function,
as if there were too few functions there already...
This problem is not isolated to mvodb, but pertains to any API, including midas.h.
I would say, if most function calls are foo("abc"); then "const char*" version is
sufficient, if most calls are foo(string + "something"); then "const std::string&" is
more appropriate.
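For illustration, a toy standalone program (not mvodb code) showing how the two overloads would coexist and which one each call style picks:

#include <cstdio>
#include <string>

void foo(const char* s)        { printf("%s\n", s); }   // "C-like" version, no hidden allocations
void foo(const std::string& s) { foo(s.c_str()); }      // thin forwarder for std::string callers

int main()
{
   foo("bar");                      // string literal: picks the const char* overload
   std::string prefix = "bar";
   foo(prefix + "baz");             // string expression: picks the std::string overload
   return 0;
}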
K.O. |
27 Nov 2025, Stefan Ritt, Suggestion, mvodb WS and family type matching
|
> 2) "advanced" c++ code:
>
> void foo(const std::string& xxx) { ... };
> int main() { foo("bar"); }
>
> copy-created 2nd string is avoided, but a string object to hold "bar" still must be
> made, 1 malloc(), 1 memcpy().
Are you sure about this? I always thought that foo only receives a pointer to xxx which it puts on the stack, so
no additional malloc/free is involved.
Have a look here: https://en.cppreference.com/w/cpp/language/reference.html
It says "References are not objects; they do not necessarily occupy storage".
Stefan |
24 Sep 2025, Thomas Lindner, Suggestion, Improve process for adding new variables that can be shown in history plots
|
Documenting a discussion I had with Konstantin a while ago.
One aspect of the MIDAS history plotting I find frustrating is the sequence for adding a new history
variable and then plotting it. At least for me, the sequence for a new variable is:
1) Add a new named variable to a MIDAS bank; compile the frontend and restart it; check that the new
variable is being displayed correctly in MIDAS equipment pages.
2) Stop and restart programs; usually I do
- stop and restart mlogger
- stop and restart mhttpd
- stop and restart mlogger
- stop and restart mhttpd
3) Start adding the new variable to the history plots.
My frustration is with step 2, where the logger and web server need to be restarted multiple times.
I think that only one of those programs actually needs to be restarted twice, but I can never remember
which one, so I restart both programs twice just to be sure.
I don't entirely understand the sequence of what happens with these restarts so that the web server
becomes aware of what new variables are in the history system; so I can't make a well-motivated
suggestion.
Ideally it would be nice if step 2 only required restarting mlogger, with mhttpd automatically
becoming aware of what new variables were in the system.
But even just having a single restart of mlogger, then mhttpd would be an improvement on the current
situation and easier to explain to users.
Hopefully this would not be a huge amount of work. |
25 Nov 2025, Konstantin Olchanski, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> One aspect of the MIDAS history plotting I find frustrating is the sequence for adding a new history
> variable and then plotting them. ...
this has been a problem in MIDAS for a very long time; we have tried to fix/streamline/improve it
many times and obviously failed. many times.
this is what must happen when adding a new history variable:
1) new /eq/xxx/variables/vvv entry must show up in ODB
1a) add the code for the new data to the frontend
1b) start the frontend
1c) if new variable is added in the frontend init() method, it will be created in ODB, done.
1d) if new variable is added by the event readout code (i.e. via MIDAS event data bank automatically
written to ODB by RO_ODB flags), then we need to start a run.
1e) if this is not periodic event, but beam event or laser event or some other triggered event, we must
also turn on the beam, turn on the laser, etc.
1z) observe that ODB entry exists
3) mlogger must discover this new ODB entry:
3a) mlogger used to rescan ODB each time something in ODB changes, this code was removed
3b) mlogger used to rescan ODB each time a new run is started, this code was removed
3c) mlogger rescans ODB each time it is restarted, this still works.
so the sequence is like this: modify and restart the frontend, start a run, stop the run, observe the odb entry is
created, restart mlogger, observe new mhf files are created in the history directory.
4) mhttpd must discover that a new mhf file now exists, read its header to discover history event and
variable names and make them available to the history panel editor.
it is not clear to me that this part currently works:
4a) mhttpd caches the history event list and will not see new variables unless this cache is updated.
4b) when web history panel editor is opened, it is supposed to tell mhttpd to update the cache. I am
pretty sure it worked when I wrote this code...
4c) but obviously it does not work now.
restarting mhttpd obviously makes it load the history data anew, but there is no button to make it happen
on the MIDAS web pages.
so it sounds like I have to sit down and at least retest this whole scheme to see that it works at least
in some way.
then try to improve it:
a) the frontend dance in (1) is unavoidable
b) mlogger must be restarted, I think Stefan and myself agree on this. In theory we could add a web page
button to call an mlogger RPC and have it reload the history. but this button already exists, it's called
"restart mlogger".
c) newly created history events should automatically show up in the history panel editor without any
additional user action
d) document the two intermediate debugging steps:
d1) check that the new variable was created in ODB
d2) check that mlogger created (and writes to) the new history file
this is how I see it and I am open to suggestions, changes, improvements, etc.
K.O. |
26 Nov 2025, Thomas Lindner, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> 3) mlogger must discover this new ODB entry:
>
> 3a) mlogger used to rescan ODB each time something in ODB changes, this code was removed
> 3b) mlogger used to rescan ODB each time a new run is started, this code was removed
> 3c) mlogger rescans ODB each time it is restarted, this still works.
>
> so the sequence is like this: modify and restart the frontend, start a run, stop the run, observe the odb entry is
> created, restart mlogger, observe new mhf files are created in the history directory.
I assume that mlogger rescanning ODB is a somewhat intensive process; and that's why we don't want rescanning to
happen every time the ODB is changed?
Stopping/restarting mlogger is okay. But would it be better to have some alternate way to force mlogger to
rescan the ODB? Like an odbedit command like 'mlogger_rescan'; or some magic ODB key to force the rescan. I
guess neither of these options is really any easier for the developer. It just seems awkward to need to restart
mlogger for this.
It would be great if mhttpd can be fixed so that it updates the cache when history editor is opened. |
27 Nov 2025, Stefan Ritt, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> I assume that mlogger rescanning ODB is a somewhat intensive process; and that's why we don't want rescanning to
> happen every time the ODB is changed?
A rescan maybe takes some tens of milliseconds. Something you can do on every run, but not on every ODB change (like writing to the slow control values).
We would need somewhat more clever code which keeps a copy of the variable names for each equipment. If the names change or the array size changes,
the scan can be triggered.
> Stopping/restarting mlogger is okay. But would it be better to have some alternate way to force mlogger to
> rescan the ODB? Like an odbedit command like 'mlogger_rescan'; or some magic ODB key to force the rescan. I
> guess neither of these options is really any easier for the developer. It just seems awkward to need to restart
> mlogger for this.
Indeed. But whatever "new" scheme we design for the scan, users will complain "last week it was enough to restart the logger, now what do I have to do?". So nothing
is perfect. But having a button in the ODB editor like "Rebuild history database" might look more elegant. One issue is that it needs special treatment, since
the logger (in the Mu3e experiment) needs >10s for the scan, so a simple rpc call will time out.
Let's see what KO has to say on this.
Best,
Stefan |
27 Nov 2025, Konstantin Olchanski, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> > I assume that mlogger rescanning ODB is a somewhat intensive process; and that's why we don't want rescanning to
> > happen every time the ODB is changed?
>
> A rescan maybe takes some tens of milliseconds. Something you can do on every run, but not on every ODB change (like writing to the slow control values).
> We would need somewhat more clever code which keeps a copy of the variable names for each equipment. If the names change or the array size changes,
> the scan can be triggered.
>
That's right, scanning ODB for history changes is essentially free.
The question is what we do if something was added or removed.
I see two ways to think about it:
1) history is independent from "runs", we see a change, we apply it (even if it takes 10 sec or 2 minutes).
2) "nothing should change during a run", we must process all changes before we start a run (starting a run takes forever),
and we must ignore changes during a run (i.e. updated frontend starts to write new data to history). (this is why
the trick to "start a new run twice" used to work).
>
> > Stopping/restarting mlogger is okay. But would it be better to have some alternate way to force mlogger to
> > rescan the ODB?
>
It is "free" to rescan ODB every 10 second or so. Then we can output a midas message "please restart the logger",
and set an ODB flag, then when user opens the history panel editor, it will see this flag
and tell the user "please restart the logger to see the latest changes in history". It can even list
the specific changes, if we want ot be verbose about it.
>
> Indeed. But whatever "new" scheme we design for the scan, users will complain "last week it was enough to restart the logger, now what do I have to do?". So nothing
> is perfect. But having a button in the ODB editor like "Rebuild history database" might look more elegant. One issue is that it needs special treatment, since
> the logger (in the Mu3e experiment) needs >10s for the scan, so a simple rpc call will time out.
>
I like the elegance of "just restart the logger".
Having a web page button to tell the logger to rescan the history is cumbersome technically
(web page calls mjsonrpc to mhttpd, mhttpd calls a midas rpc to mlogger "please set a flag to rescan the history",
then the web page polls mhttpd to poll mlogger for "are you done yet?", or instead of polling,
deal with double timeouts, in the midas rpc to mlogger and the mjsonrpc timeout in javascript).
And to avoid violating (2) above, we must tell the user "you cannot push this button during a run!".
I say, let's take the low road for now and see if it's good enough:
a) have the history system report any changes in midas.log - "history event added", "new history variable added" (or "renamed"),
this will let user see that their changes to the equipment frontend "took" and flag any accidental/unwanted changes.
b) have mlogger periodically scan ODB and set a "please restart me" flag. observe this flag in the history editor
and tell the user "please restart the logger to see latest changes in the history".
K.O. |
27 Nov 2025, Thomas Lindner, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> > Indeed. But whatever "new" scheme we design for the scan, users will complain "last week it was enough to restart the logger, now what do I have to do?". So nothing
> > is perfect. But having a button in the ODB editor like "Rebuild history database" might look more elegant. One issue is that it needs special treatment, since
> > the logger (in the Mu3e experiment) needs >10s for the scan, so a simple rpc call will time out.
> >
>
> I like the elegance of "just restart the logger".
>
> Having a web page button to tell the logger to rescan the history is cumbersome technically
> (web page calls mjsonrpc to mhttpd, mhttpd calls a midas rpc to mlogger "please set a flag to rescan the history",
> then the web page polls mhttpd to poll mlogger for "are you done yet?", or instead of polling,
> deal with double timeouts, in the midas rpc to mlogger and the mjsonrpc timeout in javascript).
>
> And to avoid violating (2) above, we must tell the user "you cannot push this button during a run!".
>
> I say, let's take the low road for now and see if it's good enough:
>
> a) have the history system report any changes in midas.log - "history event added", "new history variable added" (or "renamed"),
> this will let user see that their changes to the equipment frontend "took" and flag any accidental/unwanted changes.
>
> b) have mlogger periodically scan ODB and set a "please restart me" flag. observe this flag in the history editor
> and tell the user "please restart the logger to see latest changes in the history".
This seems like a reasonable plan to me (combined with clear documentation).
Thomas |
27 Nov 2025, Stefan Ritt, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> 1) history is independent from "runs", we see a change, we apply it (even if it takes 10 sec or 2 minutes).
>
> 2) "nothing should change during a run", we must process all changes before we start a run (starting a run takes forever),
> and we must ignore changes during a run (i.e. updated frontend starts to write new data to history). (this is why
> the trick to "start a new run twice" used to work).
"nothing should change during a run" violates the action when a user adds a new variable during a run. So if the user does that, they don't
care if things change during a run. Then we can also modify the history DB during the run. Note that some MIDAS installations are purely
slow control (kind of a replacement of LabView, have no runs at all). In those installations runs do not make sense at all, so keeping the
history independent of runs makes sense to me.
> It is "free" to rescan ODB every 10 second or so. Then we can output a midas message "please restart the logger",
> and set an ODB flag, then when user opens the history panel editor, it will see this flag
> and tell the user "please restart the logger to see the latest changes in history". It can even list
> the specific changes, if we want ot be verbose about it.
Sounds good to me.
> I say, let's take the low road for now and see if it's good enough:
>
> a) have the history system report any changes in midas.log - "history event added", "new history variable added" (or "renamed"),
> this will let user see that their changes to the equipment frontend "took" and flag any accidental/unwanted changes.
>
> b) have mlogger periodically scan ODB and set a "please restart me" flag. observe this flag in the history editor
> and tell the user "please restart the logger to see latest changes in the history".
Actually you don't have to actively "scan" the ODB. You have hotlinks to the logger anyway from the equipment variables. All we need
in addition is a hotlink to the settings array in the ODB. The logger receives the hotlink update, checks if the names changed or got
extended, then flags this as a change.
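A rough sketch (untested) of what such a hotlink could look like in the logger, using db_watch(); the equipment path is just an example, and one watch per equipment would be needed:

#include "midas.h"

// dispatcher signature follows db_watch(): (hDB, hKey, index, info)
static void names_changed(INT hDB, INT hKey, INT index, void *info)
{
   // here the logger would compare the new names / array size against its
   // cached copy and, if they differ, flag the history event for a rescan
   cm_msg(MINFO, "names_changed", "equipment variable names changed, history rescan needed");
}

static void watch_equipment_names(HNDLE hDB)
{
   HNDLE hKey;
   if (db_find_key(hDB, 0, "/Equipment/Trigger/Settings/Names", &hKey) == DB_SUCCESS)
      db_watch(hDB, hKey, names_changed, NULL);
}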
Stefan |
19 Nov 2025, Stefan Mathis, Forum, Control external process from inside MIDAS
|
Dear all,
I want to control (start / stop / monitor its stdout and stderr) an external process (systemd / EPICS IOC shell script) from within MIDAS.
In order to make this as convenient as possible for the user, I want the process to behave just like any other MIDAS client:
- I can start it from the ODB as a program
- The process gets regularly polled from MIDAS to see whether it is still running
- I can stop the process from the ODB like any other program
- Optional, but highly appreciated: Its stdout and stderr should be a MIDAS message.
Did anyone already solve a similar problem?
Best regards
Stefan |
19 Nov 2025, Nick Hastings, Forum, Control external process from inside MIDAS
|
Hi,
what you describe is exactly how I normally run mhttpd, mlogger, mserver and some other
custom frontend programs. Eg:
[local:T2KGSC:Running]/>ls /programs/Logger/
Required y
Watchdog timeout 100000
Check interval 180000
Start command systemctl --user start mlogger
Auto start n
Auto stop n
Auto restart n
Alarm class AlarmNotify
First failed 0
The only exception is your last point about stdout and stderr
being midas messages. I use journalctl to see these.
Cheers,
Nick.
> I want to control (start / stop / monitor its stdout and stderr) an external process (systemd / EPICS IOC shell script) from within MIDAS.
>
> In order to make this as convenient as possible for the user, I want the process to behave just like any other MIDAS client:
> - I can start it from the ODB as a program
> - The process gets regularly polled from MIDAS to see whether it is still running
> - I can stop the process from the ODB like any other program
> - Optional, but highly appreciated: Its stdout and stderr should be a MIDAS message.
>
> Did anyone already solve a similar problem?
>
> Best regards
> Stefan |
20 Nov 2025, Stefan Mathis, Forum, Control external process from inside MIDAS
|
Thanks a lot, Nick.
Regarding the messages: Zaher showed me that it is possible to simply place a custom log file generated by systemd next to midas.log - then it shows up next to the "midas" tab in "Messages".
One follow-up question: Is it possible to use the systemctl status for the "Running on host" column? Or does this even happen automatically?
Best regards
Stefan
> Hi,
>
> what you describe is exactly how I normally run mhttpd, mlogger, mserver and some other
> custom frontend programs. Eg:
>
> [local:T2KGSC:Running]/>ls /programs/Logger/
> Required y
> Watchdog timeout 100000
> Check interval 180000
> Start command systemctl --user start mlogger
> Auto start n
> Auto stop n
> Auto restart n
> Alarm class AlarmNotify
> First failed 0
>
> The only exception is your last point about stdout and stderr
> being midas messages. I use journalctl to see these.
>
> Cheers,
>
> Nick.
>
> > I want to control (start / stop / monitor its stdout and stderr) an external process (systemd / EPICS IOC shell script) from within MIDAS.
> >
> > In order to make this as convenient as possible for the user, I want the process to behave just like any other MIDAS client:
> > - I can start it from the ODB as a program
> > - The process gets regularly polled from MIDAS to see whether it is still running
> > - I can stop the process from the ODB like any other program
> > - Optional, but highly appreciated: Its stdout and stderr should be a MIDAS message.
> >
> > Did anyone already solve a similar problem?
> >
> > Best regards
> > Stefan |
20 Nov 2025, Nick Hastings, Forum, Control external process from inside MIDAS
|
Hi,
> Nick. Regarding the messages: Zaher showed me that it is possible to simply place
> a custom log file generated by the systemd next to midas.log - then it shows up
> next to the "midas" tab in "Messages".
Interesting. I'm not familiar with that feature. Do you have link to documentation?
> One follow-up question: Is it possible to use the systemctl status for the
> "Running on host" column? Or does this even happen automatically?
On the programs page that column is populated by the odb key /System/Clients/<PID>/Host
so no. However, there is nothing stopping you from writing your own version of
programs.html to show whatever you want. For example I have a custom programs
page that includes columns to enable/disable and to reset watchdog alarms.
Cheers,
Nick. |
20 Nov 2025, Stefan Mathis, Forum, Control external process from inside MIDAS
|
Hi,
unfortunately I don't have a documentation link to the feature, I just know that it works on my machine ;-) The general idea is that you place a custom whatever.log file in Logger/Data Dir (where midas.log is stored). Then, in the Messages page, there will be a "midas" tab and a "whatever" tab - the latter showing the content of whatever.log. One problem here is that timestamping does not work automatically - you have to prepend every line with the same Hours:Minutes:Seconds.Milliseconds Year/Month/Day format that midas.log is using.
So you have a custom Programs page which does systemctl status on your systemd? Does the status then transfer over automatically to the Status page? Is there an example how to write such a custom page?
Best regards
Stefan
> Hi,
>
> > Nick. Regarding the messages: Zaher showed me that it is possible to simply place
> > a custom log file generated by the systemd next to midas.log - then it shows up
> > next to the "midas" tab in "Messages".
>
> Interesting. I'm not familiar with that feature. Do you have link to documentation?
>
> > One follow-up question: Is it possible to use the systemctl status for the
> > "Running on host" column? Or does this even happen automatically?
>
> On the programs page that column is populated by the odb key /System/Clients/<PID>/Host
> so no. However, there is nothing stopping you from writing your own version of
> programs.html to show whatever you want. For example I have a custom programs
> page that includes columns to enable/disable and to reset watchdog alarms.
>
> Cheers,
>
> Nick. |
24 Nov 2025, Stefan Ritt, Forum, Control external process from inside MIDAS
|
Dear all,
Stefan wants to run an external EPICS driver process as a detached process and somehow "glue" it to midas to control it. Actually a similar requirement led to the development of MIDAS in the '90s. We had too many configuration files lying around, too many processes to control and make interact with each other, and so on. With the development of MIDAS I wanted to integrate all that. There is one ODB to control and parametrize everything, one central process handling to see if processes are alive, raise an alarm if they die, automatically restart them if necessary, and so on. Doing this now externally again is orthogonal to the original design concept of MIDAS and will cause many problems. I therefore strongly recommend not to juggle around with systemctl and syslog, but to make everything a MIDAS process. It's simply a "cm_connect_experiment()" and "cm_disconnect_experiment()" in the end. Then you set
/programs/required = y
and
/programs/start command = <cmd>
You can set the "alarm class" to raise an alarm if the program crashes, and you will see all messages if you use "cm_msg()" inside the program rather than "printf()". Injecting a separate .log file into the system will show things on the message page, but these messages do not go through the SYSMSG buffer, and cannot received by other programs. Maybe you noticed that mhttpd on the status page always shows the last message it received, which can be very helpful. To see if a program is running, you only need a cm_exist() call, which also exists for custom web pages.
Rather than investing time to re-invent the wheel here, better try to modify your EPICS driver process to become a midas process.
If you have an external process which you absolutely cannot modify, I would rather write a wrapper midas program to start the external process, intercept its output via a pipe, and put its output properly into the midas message system with cm_msg(). In the main loop of your wrapper program you check the external process via whatever you want, and if it dies, trigger an alarm or restart it from your wrapper program. You can then set an alarm on your wrapper program to make sure this one is always running.
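A minimal sketch of such a wrapper (untested; the client name and the command are placeholders, and error handling is mostly omitted):

#include <cstdio>
#include <cstring>
#include "midas.h"

int main()
{
   if (cm_connect_experiment("", "", "ioc_wrapper", NULL) != CM_SUCCESS)
      return 1;

   // "2>&1" folds stderr into stdout so both end up in the message system
   FILE* p = popen("/path/to/ioc_start.sh 2>&1", "r");
   if (!p) {
      cm_msg(MERROR, "ioc_wrapper", "cannot start external process");
      cm_disconnect_experiment();
      return 1;
   }

   char line[1024];
   while (fgets(line, sizeof(line), p)) {
      line[strcspn(line, "\n")] = 0;       // strip trailing newline
      cm_msg(MINFO, "ioc", "%s", line);    // forward the line to the midas message system
      cm_yield(0);                         // keep watchdog and RPC handling alive
   }
   // a real wrapper would read the pipe in a separate thread or poll it,
   // so cm_yield() keeps running even when the external process is quiet

   int status = pclose(p);
   cm_msg(MINFO, "ioc_wrapper", "external process exited with status %d", status);
   cm_disconnect_experiment();
   return 0;
}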
Best regards,
StefanR |
27 Nov 2025, Konstantin Olchanski, Forum, Control external process from inside MIDAS
|
> Rather than investing time to re-invent the wheel here, better try to modify your EPICS driver process to
> become a midas process.
I am with Stefan on this. Quite a bit of work went into the tmfe c++ framework to make it easy/easier to do
this - take an existing standalone c/c++ program and midas-ize it: in main(), "just add" calls to connect to
midas and to start the midas threads - rpc handler, watchdog, etc.
Alternatively, one can write a midas "stdout+stderr bridge", and start your standalone program
from the programs page like this:
myprogram |& cm_msg_bridge --name "myprogram" (redirect both stdout and stderr to cm_msg_bridge stdin)
cm_msg_bridge would read stdin and put each line into cm_msg(). It will connect to midas using the name "myprogram"
to make it show "green" on the status page and it will be stoppable from the programs page.
care will need to be taken for myprogram to die cleanly when stdout and stderr are closed after cm_msg_bridge
exits.
K.O. |
22 Sep 2025, Konstantin Olchanski, Info, switch midas to c++17
|
Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17. There are
many new features and we want to start using some of them.
Per my previous message https://daq00.triumf.ca/elog-midas/Midas/3084,
c++17 is available on current MacOS, U-22 and newer, el9 and newer, D-12 and newer.
(ROOT moved to C++17 as of release 6.30 on November 6, 2023)
As I reported earlier, MIDAS already builds with c++23 on U-24, and this move does not require any
actual code changes other than a bump of c++ version in CMakeLists.txt and Makefile.
Please let us know if this change will cause problems or if you think that we should move to an older
c++ (c++14) or newer c++ (c++20 or c++23 or c++26).
If we do not hear anything, we will implement this change in about 2-3 weeks.
K.O. |
23 Sep 2025, Pavel Murat, Info, switch midas to c++17
|
perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha
> Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17. There are
> many new features and we want to start using some of them.
>
> Per my previous message https://daq00.triumf.ca/elog-midas/Midas/3084,
> c++17 is available on current MacOS, U-22 and newer, el9 and newer, D-12 and newer.
>
> (ROOT moved to C++17 as of release 6.30 on November 6, 2023)
>
> As I reported earlier, MIDAS already builds with c++23 on U-24, and this move does not require any
> actual code changes other than a bump of c++ version in CMakeLists.txt and Makefile.
>
> Please let us know if this change will cause problems or if you think that we should move to an older
> c++ (c++14) or newer c++ (c++20 or c++23 or c++26).
>
> If we do not hear anything, we will implement this change in about 2-3 weeks.
>
> K.O. |
23 Sep 2025, Konstantin Olchanski, Info, switch midas to c++17
|
> perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha
confirmed. std::format is an improvement over K&R C printf().
but seems unavailable on U-20 and older, requires --std=c++20 on U-24 and MacOS.
but also available as a standalone library: https://github.com/fmtlib/fmt
myself, I use printf() and msprintf(), I think std::format is in the "too little, too late to save C++"
department.
K.O. |
23 Sep 2025, Pavel Murat, Info, switch midas to c++17
|
> > perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha
>
> confirmed. std::format is an improvement over K&R C printf().
>
> but seems unavailable on U-20 and older, requires --std=c++20 on U-24 and MacOS.
agreed! - availability is significantly more important. -- regards, Pasha |
06 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
> Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17.
There were no objections to this move.
There was one suggestion to use std::format available in c++20 (MIDAS has very similar msprintf()).
We shall move forward with this change.
K.O. |
20 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
> > Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17.
> We shall move forward with this change.
It is done. Last c++11 MIDAS is midas-2025-11-a (plus the db_delete_key merge).
I notice the cmake does not actually pass "-std=c++17" to the c++ compiler, and on U-20, it is likely
the default c++14 is used. cmake always does the wrong thing and this will need to be fixed later.
K.O. |
20 Nov 2025, Nick Hastings, Info, switch midas to c++17
|
> I notice the cmake does not actually pass "-std=c++17" to the c++ compiler, and on U-20, it is likely
> the default c++14 is used. cmake always does the wrong thing and this will need to be fixed later.
Does adding
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
get it to do the right thing?
Cheers,
Nick.. |
25 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
>
> > I notice the cmake does not actually pass "-std=c++17" to the c++ compiler, and on U-20, it is likely
> > the default c++14 is used. cmake always does the wrong thing and this will need to be fixed later.
>
> set(CMAKE_CXX_STANDARD 17)
> set(CMAKE_CXX_STANDARD_REQUIRED ON)
>
We used to have this, it is not there now.
>
> Get it to do the right thing?
>
Unlikely, as Stefan reported, asking for C++17 yields -std=gnu++17 which is close enough, but not the same
thing.
For now, it does not matter, U-22 and U-24 are c++17 by default, if somebody accidentally commits c++20
code, builds will fail and somebody will catch it and complain, plus the weekly bitbucket build will bomb-
out.
On U-20, default is c++14 and builds will start bombing out as soon as we commit some c++17 code.
el7 builds have not worked for some time now (a bizarre mysterious error)
el8, el9, el10 likely same situation as Ubuntu.
macos, not sure.
K.O. |
24 Nov 2025, Stefan Ritt, Info, switch midas to c++17
|
> > > Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17.
> > We shall move forward with this change.
>
> It is done. Last c++11 MIDAS is midas-2025-11-a (plus the db_delete_key merge).
>
> I notice the cmake does not actually pass "-std=c++17" to the c++ compiler, and on U-20, it is likely
> the default c++14 is used. cmake always does the wrong thing and this will need to be fixed later.
>
> K.O.
We should either use
set(CMAKE_CXX_STANDARD 17)
or
target_compile_features(<target> PUBLIC cxx_std_17)
but not mix both. We already have the second one for the midas library, like
target_compile_features(objlib PUBLIC cxx_std_17)
which correctly causes a
c++ -std=gnu++17 ...
(at least in my case).
If the compiler flag is missing for a target, we should add the target_compile_features above for that target.
Stefan |
25 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
> target_compile_features(<target> PUBLIC cxx_std_17)
> which correctly causes a
> c++ -std=gnu++17 ...
I think this is set in a couple of places, yet I do not see any -std=xxx flag passed to the compiler.
(and I am not keen on spending a full day fighting cmake)
(btw, -std=c++17 and -std=gnu++17 are not the same thing, I am not sure how well GNU extensions are supported on
macos)
K.O. |
26 Nov 2025, Stefan Ritt, Info, switch midas to c++17
|
I switched from
target_compile_features(<target> PUBLIC cxx_std_17)
to
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF) # optional: disables GNU extensions
Which is now global in the CMakeLists.txt, so we only have to deal with one location if we want to change it. It also turns off the GNU extensions. On my
Mac I now get a clean
-std=c++17
Please everybody test on your side. Change is committed.
Stefan |
27 Nov 2025, Konstantin Olchanski, Info, switch midas to c++17
|
>
> set(CMAKE_CXX_STANDARD 17)
> set(CMAKE_CXX_STANDARD_REQUIRED ON)
> set(CMAKE_CXX_EXTENSIONS OFF) # optional: disables GNU extensions
>
Looks like it works, I see -std=c++17 everywhere. Added same to manalyzer and mscb (mscb was still c++11).
Build on U-20 works (g++ accepts -std=c++17), build on CentOS-7 bombs, cmake 3.17.5 does not know CXX17.
K.O. |
26 Nov 2025, Lars Martin, Bug Report, Error(?) in custom page documentation
|
https://daq00.triumf.ca/MidasWiki/index.php/Custom_Page#modb
says that
If the ODB path does not point to an individual value but to a subdirectory, the
whole subdirectory is mapped to this.value as a JavaScript object such as
<div class="modb" data-odb-path="/Runinfo" onchange="func(this.value)">
<script>function func(value) { console.log(value["run number"]); }</script>
In fact, it seems to return the JSON string of said object, so you'd have to write
console.log(JSON.parse(value)["run number"]) |
27 Nov 2025, Stefan Ritt, Bug Report, Error(?) in custom page documentation
|
Indeed a bug. Fixed in commit
https://bitbucket.org/tmidas/midas/commits/5c1133df073f493d74d1fc4c03fbcfe80a3edae4
Stefan |
27 Nov 2025, Zaher Salman, Bug Report, Error(?) in custom page documentation
|
This commit breaks the sequencer pages...
> Indeed a bug. Fixed in commit
>
> https://bitbucket.org/tmidas/midas/commits/5c1133df073f493d74d1fc4c03fbcfe80a3edae4
>
> Stefan |
27 Nov 2025, Konstantin Olchanski, Bug Report, Error(?) in custom page documentation
|
the double-decode bug strikes again!
> This commit breaks the sequencer pages...
>
> > Indeed a bug. Fixed in commit
> >
> > https://bitbucket.org/tmidas/midas/commits/5c1133df073f493d74d1fc4c03fbcfe80a3edae4
> >
> > Stefan |
19 May 2025, Jonas A. Krieger, Suggestion, manalyzer root output file with custom filename including run number
|
Hi all,
Would it be possible to extend manalyzer to support custom .root file names that include the run number?
As far as I understand, the current behavior is as follows:
The default filename is ./root_output_files/output%05d.root , which can be customized by the following two command line arguments.
-Doutputdirectory: Specify output root file directory
-Ooutputfile.root: Specify output root file filename
If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.
Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
Above code would allow the following call syntax: ' ./manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered '
But note that as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root .
So a safer, but less flexible option might be to instead have the user provide only a prefix, and then attach %05d.root in the code.
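For illustration only (not actual manalyzer code), the prefix variant could be as simple as:

#include <cstdio>
#include <string>

// build an output file name from a user-supplied prefix and the run number,
// e.g. MakeOutputFilename("/data/experiment1_", 123) -> "/data/experiment1_000123.root"
std::string MakeOutputFilename(const std::string& prefix, int run_number)
{
   char buf[32];
   snprintf(buf, sizeof(buf), "%06d", run_number);
   return prefix + buf + ".root";
}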
Thank you for considering these suggestions! |
12 Nov 2025, Jonas A. Krieger, Suggestion, manalyzer root output file with custom filename including run number
|
Hi all,
Could you please get back to me about whether something like my earlier suggestion might be considered, or if I should set up some workaround to rename files at EOR for our experiments?
https://daq00.triumf.ca/elog-midas/Midas/3042 :
-----------------------------------------------
> Hi all,
>
> Would it be possible to extend manalyzer to support custom .root file names that include the run number?
>
> As far as I understand, the current behavior is as follows:
> The default filename is ./root_output_files/output%05d.root , which can be customized by the following two command line arguments.
>
> -Doutputdirectory: Specify output root file directory
> -Ooutputfile.root: Specify output root file filename
>
> If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
>
> I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.
>
> Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
> https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
>
> Above code would allow the following call syntax: ' ./manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered '
> But note that as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root .
>
> So a safer, but less flexible option might be to instead have the user provide only a prefix, and then attach %05d.root in the code.
>
> Thank you for considering these suggestions! |
25 Nov 2025, Konstantin Olchanski, Suggestion, manalyzer root output file with custom filename including run number
|
Hi, Jonas, thank you for reminding me about this. I hope to work on manalyzer in the next few weeks and I will review the ROOT output file name scheme.
K.O.
> Hi all,
>
> Could you please get back to me about whether something like my earlier suggestion might be considered, or if I should set up some workaround to rename files at EOR for our experiments?
>
> https://daq00.triumf.ca/elog-midas/Midas/3042 :
> -----------------------------------------------
> > Hi all,
> >
> > Would it be possible to extend manalyzer to support custom .root file names that include the run number?
> >
> > As far as I understand, the current behavior is as follows:
> > The default filename is ./root_output_files/output%05d.root , which can be customized by the following two command line arguments.
> >
> > -Doutputdirectory: Specify output root file directory
> > -Ooutputfile.root: Specify output root file filename
> >
> > If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
> >
> > I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.
> >
> > Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
> > https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
> >
> > Above code would allow the following call syntax: ' ./manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered '
> > But note that as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root .
> >
> > So a safer, but less flexible option might be to instead have the user provide only a prefix, and then attach %05d.root in the code.
> >
> > Thank you for considering these suggestions! |
21 Nov 2025, Scott Oser, Bug Report, Cannot edit values in a subtree containing only a single array of BOOLs using the ODB web interface
|
I think I've found a bug in MIDAS ...
Description: If you have an ODB subtree that contains only an array of BOOLs, you cannot edit them from the ODB webpage, although you can change them using odbedit (and probably from code as well).
(If you use the dropdown menu to change any value from No to Yes, it just flips back to No immediately.)
But if you create a new key in that directory (doesn't seem to matter what), then you can edit the BOOLs from the webpage. Delete that key, and once again you can't edit the BOOLs. |
24 Nov 2025, Stefan Ritt, Bug Report, Cannot edit values in a subtree containing only a single array of BOOLs using the ODB web interface
|
Can you please update to the latest develop version of midas, and clear your browser cache so that the updated JavaScript midas library is loaded. Should be fixed by now. See attached screen shot where I changed every second value via the ODB editor.
Stefan
|
24 Nov 2025, Scott Oser, Bug Report, Cannot edit values in a subtree containing only a single array of BOOLs using the ODB web interface
|
Stefan Ritt wrote:
> Can you please update to the latest develop version of midas, and clear your browser cache so that the updated JavaScript midas library is loaded. Should be fixed by now. See attached screen shot where I changed every second value via the ODB editor.
>
> Stefan
Thanks --- it looks like this commit (which we just missed by four days when we last updated MIDAS) resolves the issue for us:
https://bitbucket.org/tmidas/midas/commits/6af72c1d218798064a7762bae6e65ad3407de9d1
Thanks to Ben Smith for pointing us at exactly the right commit. |
25 Nov 2025, Konstantin Olchanski, Bug Report, Cannot edit values in a subtree containing only a single array of BOOLs using the ODB web interface
|
> Thanks --- it looks like this commit resolves the issue for us ...
> Thanks to Ben Smith for pointing us at exactly the right commit
I would like to take the opportunity to encourage all to report bug fixes like this one to this mailing list.
This looks like a serious bug, many midas users would like to know when it was introduced, when found, when fixed
and who takes the credit.
K.O. |
20 Nov 2025, Konstantin Olchanski, Bug Fix, ODB update, branch feature/db_delete_key merged into develop
|
In the darkside vertical slice midas daq, we observed odb corruption which I
traced to db_delete_key(). The cause of the corruption is not important; what is important is to
have a robust odb where a small corruption will stay localized and will not
require erasing the corrupt odb and reloading it from a backup file.
To help debug such corruption one can try to set ODB "/Experiment/Protect ODB"
to "yes". This will make ODB shared memory read-only and user code scribbling
into the wrong memory address will cause a seg fault and core dump instead of
silent ODB corruption. This feature is not enabled by default because changing
ODB shared memory mapping from "read-only" to "writable" (and back) is not very
fast and it slows down MIDAS noticeably.
MIDAS right before this merge was tagged "midas-2025-11-a", if you see this ODB
update cause trouble, please report it here and revert to this tagged version.
Updates:
- harden db_delete_key() against internal corruption, if odb inconsistency is
detected, do a clean crash instead of trying to delete stuff and corrupting odb
to the point where it has to be erased and reloaded from a backup file.
- additional refactoring to separate read-locked and write-locked code.
- merge of missing patch to avoid odb corruption when key area becomes 100% full
(or was it the data area? I forget now, I fixed one of them a long time ago, now
both are fixed).
- remove the "follow_links" argument from db_delete_key(), see separate
discussion on this.
- add db_delete() to delete things by ODB path not by hkey (atomic fused
together db_find_link() and db_delete_key()).
- fixes for incorrect use of db_find_key() and db_delete_key(), this
unexpectedly follows symlinks and deletes the wrong ODB entry. (should have been
db_find_link(), now replaced with atomic db_delete()).
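To illustrate the last point, a sketch of the pitfall and the fix (the alias path is made up; db_delete_key() is shown without the follow_links argument, per this merge):

#include "midas.h"

// suppose /Alias/Settings is an ODB symlink to /Equipment/XYZ/Settings
void delete_alias(HNDLE hDB)
{
   HNDLE hKey;

   // pitfall: db_find_key() resolves the link, so hKey would point at
   // /Equipment/XYZ/Settings and the link *target* would be deleted

   // correct: db_find_link() returns the link itself, like unlink(2)
   if (db_find_link(hDB, 0, "/Alias/Settings", &hKey) == DB_SUCCESS)
      db_delete_key(hDB, hKey);
}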
K.O. |
25 Nov 2025, Stefan Ritt, Bug Fix, ODB update, branch feature/db_delete_key merged into develop
|
Thanks for the fixes, which I all approve.
There is still a "follow_links" in midas_c_compat.h line 70 for Python. Probably Ben has to look into that. Also
client.py has it.
Stefan |
25 Nov 2025, Konstantin Olchanski, Bug Fix, ODB update, branch feature/db_delete_key merged into develop
|
> Thanks for the fixes, which I all approve.
>
> There is still a "follow_links" in midas_c_compat.h line 70 for Python. Probably Ben has to look into that. Also
> client.py has it.
Correct, Ben will look at this on the python side.
And I will be updating mvodb soon and fix it there.
K.O. |
25 Nov 2025, Konstantin Olchanski, Bug Fix, fixed db_find_keys()
|
Function db_find_keys(), added by a person unnamed in April 2020, never worked correctly. It is now fixed;
also an unsafe strcpy() was replaced by mstrlcpy().
This function is used by msequencer ODBSet function and by odbedit "set" command.
Under all other conditions it returned DB_NO_KEY; only two use cases actually worked:
set runinfo/state 1 <--- no match pattern - works
set run*/state 1 <--- match multiple subdirectories - works
set runinfo/stat* 1 <--- bombs out with DB_NO_KEY
set run*/stat* 1 <--- bombs out with DB_NO_KEY
All four use cases now work.
commit b5b151c9bc174ca5fd71561f61b4288c40924a1a
K.O. |
06 Nov 2025, Konstantin Olchanski, Info, make clang-tidy
|
I added Makefile rules for running MIDAS through the clang-tidy static code
analyzer. (rules for running cppcheck have gone missing, I hope I find them).
Reports from clang-tidy are somewhat repetitive (complain about the same class of
non-problem problems repeatedly), but several warnings and errors identified real
bugs.
Specifically, detection of memory leaks in error paths and detection of NULL
pointer dereference and use of uninitialized variables is pretty solid.
Fixes for the worst problems reported by cppcheck and clang-tidy are already
committed to midas git.
K.O. |
21 Nov 2025, Konstantin Olchanski, Info, cppcheck
|
> (rules for running cppcheck have gone missing, I hope I find them).
found them. I built cppcheck from sources.
~/git/cppcheck/build/bin/cppcheck src/midas.cxx
~/git/cppcheck/build/bin/cppcheck manalyzer/manalyzer.cxx manalyzer/mjsroot.cxx
~/git/cppcheck/build/bin/cppcheck src/tmfe.cxx
~/git/cppcheck/build/bin/cppcheck midasio/*.cxx
~/git/cppcheck/build/bin/cppcheck mjson/*.cxx
K.O. |
22 Sep 2025, Konstantin Olchanski, Info, removal of ROOT support in mlogger
|
Historically, building MIDAS with ROOT caused us many problems - build failures because of c++ version
mismatch, CFLAGS mismatch; run-time failures because of ROOT library mismatches; etc, etc.
Following discussions at the MIDAS Workshop, we think we should finally bite the bullet and remove ROOT
support from MIDAS:
- remove support for writing data in ROOT TTree format in mlogger (rmlogger)
- remove support for ROOT in mana.c based analyzer (which itself we propose to remove)
- remove ROOT helper functions in rmidas.h
- remove ROOT support in cmake
- remove rmlogger and rmana
This change will not affect the rootana and manalyzer packages, they will continue to be built with
ROOT support if ROOT is available.
Right now, we cannot remember any experiment that uses the ROOT TTree output function in mlogger.
If you use this feature, we very much would like to hear from you. Please contact Stefan or myself or
reply to this message.
As replacement for rmlogger, we could implement identical or similar functionality using a manalyzer-
based custom logger, but we need at least one user who can test it for us and confirm that it works
correctly.
K.O. |
06 Nov 2025, Konstantin Olchanski, Info, removal of ROOT support in mlogger
|
> we should finally bite the bullet and remove ROOT support from MIDAS ...
there were no objections to removing ROOT support from mlogger.
there was a request to keep rmana.
We shall move forward with this:
- remove all the HAVE_ROOT code from mlogger.cxx
- for now, keep HAVE_ROOT in CMakefile (HAVE_ROOT was removed from Makefile a long time ago)
- for now, keep rmana
- look into moving rmana to a separate package or a separate subdirectory and keep maximum compatibility
with existing make files.
K.O.
> Historically, building MIDAS with ROOT caused us many problems - build failures because of c++ version
> mismatch, CFLAGS mismatch; run-time failures because of ROOT library mismatches; etc, etc.
>
> Following discussions at the MIDAS Workshop, we think we should finally bite the bullet and remove ROOT
> support from MIDAS:
>
> - remove support for writing data in ROOT TTree format in mlogger (rmlogger)
> - remove support for ROOT in mana.c based analyzer (which itself we propose to remove)
> - remove ROOT helper functions in rmidas.h
> - remove ROOT support in cmake
> - remove rmlogger and rmana
>
> This change will not affect the rootana and manalyzer packages, they will continue to be built with
> ROOT support if ROOT is available.
>
> Right now, we cannot remember any experiment that uses the ROOT TTree output function in mlogger.
>
> If you use this feature, we very much would like to hear from you. Please contact Stefan or myself or
> reply to this message.
>
> As replacement for rmlogger, we could implement identical or similar functionality using an manalyzer-
> based custom logger, but we need at least one user who can test it for us and confirm that it works
> correctly.
>
> K.O. |
20 Nov 2025, Konstantin Olchanski, Info, removal of ROOT support in mlogger
|
> > we should finally bite the bullet and remove ROOT support from MIDAS ...
as discussed, HAVE_ROOT and OBSOLETE were removed from mlogger. rmana and ROOT support in manalyzer remain,
untouched.
last rmlogger is in MIDAS tagged midas-2025-11-a.
K.O. |
05 May 2025, Konstantin Olchanski, Info, db_delete_key(TRUE)
|
I was working on an odb corruption crash inside db_delete_key() and I noticed
that I did not test db_delete_key() with follow_links set to TRUE. Then I noticed
that nobody anywhere seems to use db_delete_key() with follow_links set to TRUE.
Instead of testing it, can I just remove it?
This feature existed since day 1 (1st commit) and it does something unexpected
compared to filesystem "/bin/rm": the best I can tell, it removes the link
*and* whatever the link points to. For people familiar with "/bin/rm", this is
somewhat unexpected and by my thinking, if nobody ever added such a feature to
"/bin/rm", it is probably not considered generally useful or desirable. (I would
think it dangerous, it removes not 1 but 2 files, the 2nd file would be in some
other directory far away from where we are).
By this thinking, I should remove "follow_links" (actually just make it do nothing,
to reduce the disturbance to other source code). db_delete_key() should work
similarly to /bin/rm aka the unlink() syscall.
K.O. |
05 May 2025, Stefan Ritt, Info, db_delete_key(TRUE)
|
I would handle this actually like symbolic links are handled under linux. If you delete a symbolic link, the link gets
deleted and NOT the file the link is pointing to.
So I conclude that the "follow links" is a misconception and should be removed.
Stefan |
10 Nov 2025, Konstantin Olchanski, Info, db_delete_key(TRUE)
|
> I would handle this actually like symbolic links are handled under linux. If you delete a symbolic link, the link gets
> deleted and NOT the file the link is pointing to.
>
> So I conclude that the "follow links" is a misconception and should be removed.
I am finally writing test code for the ODB API, partially to discover what db_delete_key(TRUE) actually does,
because when I went ahead to remove it, I found that its use is inconsistent. And indeed
it does strange and unexpected things, I will proceed with removing it.
K.O. |
20 Nov 2025, Konstantin Olchanski, Info, db_delete_key(TRUE)
|
> > I would handle this actually like symbolic links are handled under linux. If you delete a symbolic link, the link gets
> > deleted and NOT the file the link is pointing to.
> >
> > So I conclude that the "follow links" is a misconception and should be removed.
>
> I am finally writing test code for the ODB API, partially to discover what db_delete_key(TRUE) actually does,
> because when I went ahead to remove it, I found that its use is inconsistent. And indeed
> it does strange and unexpected things, I will proceed with removing it.
>
this is now merged into develop. there will be deprecation warnings from mvodb and from midas_c_compat; Ben and I will clean
them up, just not today.
for more detail, see separate message.
K.O. |
18 Nov 2025, Lars Martin, Bug Report, TMFeEquipment fEqConfReadOn not written to ODB
|
I'm constructing a TMFeEquipment with this constructor:
MagnetFe(const char *eqname, const char *eqfilename) // ctor
   : TMFeEquipment(eqname, eqfilename)
{
   fEqConfEventID = 3;
   fEqConfBuffer = "SYSTEM";
   fEqConfPeriodMilliSec = 1000; // in milliseconds
   fEqConfLogHistory = 1;
   fEqConfReadOn = RO_ALWAYS;
}
When I start with a fresh ODB, the directories are created correctly, and e.g. the
event ID is set correctly, but "Read on" is set to 1 (i.e. RO_RUNNING) instead of
0xFF.
Now when I set it to 0xFF manually and restart, it gets overwritten to 7
(RO_NONTRANS), which I guess is a relatively recent change and doesn't affect me
negatively. |
06 Nov 2025, Konstantin Olchanski, Bug Report, broken scroll on midas web pages
|
midas web pages that use overlays (dlgPanel, etc) are currently broken - if
the overlay does not fit in the visible window, its bottom is truncated and control
buttons like "create" and "cancel" are not visible, not clickable, page does not
work.
when these pages were originally written, I am pretty sure these overlays were
scrollable and this problem did not exist. I think that was broken recently,
maybe within the last year or so.
specific examples:
a) odb editor:
- open odb editor,
- click on "create odb link"
- click on "link target ...", a dialog overlay opens with a list of odb keys in
the current directory
- select a directory with a large number of entries (i.e. "/Programs")
- alternatively, make browser window smaller
- observe the "ok" and "cancel" buttons are not visible, cannot be clicked
- definitely, there used be a scroll bar and one could scroll down to see these
buttons.
b) history planel editor:
- open history plot,
- click on "configure this plot" icon,
- history editor opens,
- click "add active variables"
- select active event that has many variables
- observe that the list is cut off at the bottom, the very last variables are
not visible
- alternatively, make the browser window smaller
I wrote this page and at the time this problem did not exist, there was a scroll
bar and one could scroll up and down the list even if there were really many
variables there.
Maybe this breakage is not from us, I see similar problems on other sites, so
maybe browser behaviour changed recently.
I think Stefan wrote the dlgPanel code originally? I am not very familiar with
it and I do not know if anybody changed it recently?
K.O. |
13 Nov 2025, Stefan Ritt, Bug Report, broken scroll on midas web pages
|
I confirm the problem is there (at least under MacOSX Safari) and I will take care of it.
Stefan |
14 Nov 2025, Stefan Ritt, Bug Report, broken scroll on midas web pages
|
This problem was introduced by ZS in March 2023 with these commits:
https://bitbucket.org/tmidas/midas/commits/25b13f875ff1f7e2f4e987273c81d6356dd2ff53
https://bitbucket.org/tmidas/midas/commits/2a9e902e07156e12edecb5c2257e4dbd944f8377
by setting
d.style.position = "fixed";
which prevents the scrolling. I have no idea why this change was made, so it should be fixed by the original
author.
Stefan |
16 Nov 2025, Zaher Salman, Bug Report, broken scroll on midas web pages
|
Sorry about that. I could not figure out the reason for doing this. It was during the time I was working on the file_picker. I removed these lines and see no effect on the file_picker. I'll continue checking whether it affects anything else.
Zaher
> This problem was introduced by ZS in March 2023 with these commits:
>
> https://bitbucket.org/tmidas/midas/commits/25b13f875ff1f7e2f4e987273c81d6356dd2ff53
> https://bitbucket.org/tmidas/midas/commits/2a9e902e07156e12edecb5c2257e4dbd944f8377
>
> by setting
>
> d.style.position = "fixed";
>
> which prevents the scrolling. I have no idea why this change was made, so it should be fixed by the original
> author.
>
> Stefan |
17 Nov 2025, Konstantin Olchanski, Bug Report, broken scroll on midas web pages
|
> Sorry about that. I could not figure out the reason for doing this. It was during the time I was working on the file_picker. I removed these lines and see no effect on the file_picker. I'll continue checking whether it affects anything else.
I confirm the reported problem seems to be fixed in this commit:
https://bitbucket.org/tmidas/midas/commits/7f2690b478d6dfb16b48fc98955093e6369b04c1
Big thanks to Stefan and Zaher for figuring it out quickly.
K.O. |
27 Oct 2025, Giovanni Mazzitelli, Suggestion, Python sc_frontend.py Display and History variables
|
We would like to write an sc_frontend in Python instead of C++. All our drivers
work correctly, as well as the creation of the database in the ODB, including the
creation of Commons, Statistics, Variables, and Settings.
However, we are unable to correctly create the database entries needed to manage
History and Display.
As we understand it, in C++ this is handled by setting the EQ_SLOW flag, which
doesn’t seem to be implemented in the Python libraries.
How can we manually create the necessary variables for Display and History? |
13 Nov 2025, Stefan Ritt, Suggestion, Python sc_frontend.py Display and History variables
|
> We would like to write an sc_frontend in Python instead of C++. All our drivers
> work correctly, as well as the creation of the database in the ODB, including the
> creation of Commons, Statistics, Variables, and Settings.
> However, we are unable to correctly create the database entries needed to manage
> History and Display.
>
> As we understand it, in C++ this is handled by setting the EQ_SLOW flag, which
> doesn’t seem to be implemented in the Python libraries.
> How can we manually create the necessary variables for Display and History?
I'm not an expert of the Python part of MIDAS (Ben Smith is), but I know that you have
functions to create keys and set values in the ODB, so you should be able to create
these things manually as you need them.
Stefan |
13 Nov 2025, Ben Smith, Suggestion, Python sc_frontend.py Display and History variables
|
> > We would like to write an sc_frontend in Python instead of C++. All our drivers
> > work correctly, as well as the creation of the database in the ODB, including the
> > creation of Commons, Statistics, Variables, and Settings.
> > However, we are unable to correctly create the database entries needed to manage
> > History and Display.
> >
> > As we understand it, in C++ this is handled by setting the EQ_SLOW flag, which
> > doesn’t seem to be implemented in the Python libraries.
> > How can we manually create the necessary variables for Display and History?
I don't believe any of this is handled automatically by the EQ_SLOW flag in the C++ code. I think you always have to manually create the history plots, normally using the webpage interface.
There is also a function in the python code called "client.hist_create_plot(group_name, panel_name, variables, labels=[])" that can slightly automate this, though you do have to know what midas is internally calling your variables.
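As an illustration only, a small script using this function might look like the following; the client name, group/panel names and variable strings are invented examples, and the exact "Event:Variable" strings have to be looked up as described below.

import midas.client

client = midas.client.MidasClient("make_history_plots")

# One panel "Temperatures" in history group "Environment", plotting two
# (hypothetical) variables with custom labels:
client.hist_create_plot("Environment",
                        "Temperatures",
                        ["Environment:Temperature[0]", "Environment:Temperature[1]"],
                        labels=["Inner sensor", "Outer sensor"])

client.disconnect()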
You can find out what the variables are called either through the webpage interface when creating a plot, or via the python script at $MIDASSYS/python/examples/basic_hist_script.py |
01 Oct 2025, Frederik Wauters, Forum, struct size mismatch of alarms
|
So I started our DAQ with an updated midas, after ca. 6 months+.
No issues except all FEs complaining about the Alarm ODB structure.
* I adapted to the new structure ( trigger count & trigger count required )
* restarted fe's
* recompiled
18:17:40.015 2025/09/30 [EPICS Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
struct size mismatch (expected 452, odb size 460)
18:17:40.009 2025/09/30 [SC Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
struct size mismatch (expected 460, odb size 452)
how do I get the FEs + ODB back in line here?
thanks |
01 Oct 2025, Nick Hastings, Forum, struct size mismatch of alarms
|
> So I started our DAQ with an updated midas, after ca. 6 months+.
Would be worthwhile mentioning the git commit hash or tag you are using.
> No issues except all FEs complaining about the Alarm ODB structure.
> * I adapted to the new structure ( trigger count & trigger count required )
> * restarted fe's
> * recompiled
>
> 18:17:40.015 2025/09/30 [EPICS Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> struct size mismatch (expected 452, odb size 460)
>
> 18:17:40.009 2025/09/30 [SC Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> struct size mismatch (expected 460, odb size 452)
This seems to be https://daq00.triumf.ca/elog-midas/Midas/2980
> how do I get the FEs + ODB back in line here?
Recompile all frontends against new midas.
Nick. |
01 Oct 2025, Nick Hastings, Forum, struct size mismatch of alarms
|
Just to be clear, it seems that your "EPICS Frontend" was either not recompiled against the new midas yet or the old binary is being run, but "SC Frontend" is using the new midas.
> > So I started our DAQ with an updated midas, after ca. 6 months+.
>
> Would be worthwhile mentioning the git commit hash or tag you are using.
>
> > No issues except all FEs complaining about the Alarm ODB structure.
> > * I adapted to the new structure ( trigger count & trigger count required )
> > * restarted fe's
> > * recompiled
> >
> > 18:17:40.015 2025/09/30 [EPICS Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> > struct size mismatch (expected 452, odb size 460)
> >
> > 18:17:40.009 2025/09/30 [SC Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> > struct size mismatch (expected 460, odb size 452)
>
> This seems to be https://daq00.triumf.ca/elog-midas/Midas/2980
>
> > how do I get the FEs + ODB back in line here?
>
> Recompile all frontends against new midas.
>
> Nick. |
02 Oct 2025, Stefan Ritt, Forum, struct size mismatch of alarms
|
Sorry to intervene there, but the FEs are usually compiled against libmidas.a . Therefore you have to compile midas, usually do a "make install" to update the libmidas.a/so, then recompile the FEs. You probably forgot the "make install".
Stefan |
02 Oct 2025, Frederik Wauters, Forum, struct size mismatch of alarms
|
> Sorry to intervene there, but the FEs are usually compiled against libmidas.a . Therefore you have to compile midas, usually do a "make install" to update the libmidas.a/so, then recompile the FEs. You probably forgot the "make install".
>
> Stefan
OK, solved, closed. It turned out I messed up rebuilding some of the FEs.
More generally, and with the "Documentation" discussion of the workshop in mind, ODB mismatch error messages of all kinds are a recurring phenomenon that confuses users. And MIDASGPT gave completely wrong suggestions. |
17 Sep 2025, Mark Grimes, Suggestion, Get manalyzer to configure midas::odb when running offline
|
Hi,
Lots of users like the midas::odb interface for reading from the ODB in manalyzers. However, it currently
doesn't work offline without a few manual lines to tell midas::odb to read from the ODB copy in the run
header. The code to work out the current filename and get midas::odb to reopen the file being processed
also gets a bit messy. This would be much cleaner if manalyzer set this up automatically; user code could
then be written that is completely ignorant of whether it is running online or offline.
The change I suggest is in the `set_offline_odb` branch, commit 4ffbda6, which is simply:
diff --git a/manalyzer.cxx b/manalyzer.cxx
index 371f135..725e1d2 100644
--- a/manalyzer.cxx
+++ b/manalyzer.cxx
@@ -15,6 +15,7 @@
#include "manalyzer.h"
#include "midasio.h"
+#include "odbxx.h"
//////////////////////////////////////////////////////////
@@ -2075,6 +2076,8 @@ static int ProcessMidasFiles(const std::vector<std::string>& files, const std::v
if (!run.fRunInfo) {
run.CreateRun(runno, filename.c_str());
run.fRunInfo->fOdb = MakeFileDumpOdb(event->GetEventData(), event->data_size);
+ // Also set the source for midas::odb in case people prefer that interface
+ midas::odb::set_odb_source(midas::odb::STRING, std::string(event->GetEventData(), event->data_size));
run.BeginRun();
}
It happens at the point where the ODB record is already available and requires no effort from the user to
be able to read the ODB offline.
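For illustration, a user module could then read the ODB the same way online and offline, something like the sketch below (the equipment path and key name are made-up examples, and the TAFactory registration is omitted):

#include <cstdio>
#include "manalyzer.h"
#include "odbxx.h"

class ExampleModule: public TARunObject
{
public:
   ExampleModule(TARunInfo* runinfo) : TARunObject(runinfo) { }

   void BeginRun(TARunInfo* runinfo)
   {
      // Reads from the live ODB online and from the run-header dump offline;
      // the module code is identical in both cases.
      midas::odb settings("/Equipment/Magnet/Settings");   // hypothetical ODB path
      double current = settings["Current"];                // hypothetical key
      printf("run %d: magnet current %f\n", runinfo->fRunNo, current);
   }
};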
Thanks,
Mark. |
17 Sep 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> Lots of users like the midas::odb interface for reading from the ODB in manalyzers.
> +#include "odbxx.h"
This is a useful improvement. Before committing this patch, can you confirm that the RunInfo destructor
deletes this ODB stuff from odbxx? manalyzer takes object lifetimes very seriously.
There is also the issue that two different RunInfo objects would load two different ODB dumps
into odbxx. (inability to access more than 1 ODB dump is a design feature of odbxx).
This is not an actual problem in manalyzer because it only processes one run at a time
and only 1 or 0 RunInfo objects exist at any given time.
Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
K.O. |
18 Sep 2025, Mark Grimes, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> ....Before commit of this patch, can you confirm the RunInfo destructor
> deletes this ODB stuff from odbxx? manalyzer takes object life times very seriously.
The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another
call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.
> Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
Yes, I hadn't realised that was an option. For that to work I guess the aforementioned static members could be made thread local storage, and
processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
class member or something.
Note that I missed doing the same for the end of run event, which should probably also be added.
Thanks,
Mark. |
18 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
>
> Yes, I hadn't realised that was an option. For that to work I guess the aforementioned static members could be made thread local storage, and
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> class member or something.
If we want to analyze several runs, I can easily add code to make this possible. In a new call to set_odb_source(), the previously allocated memory in that function can be freed. We can also make the memory handling
thread-specific, allowing several threads to analyze different runs at the same time. But I will only invest work there once it's really needed by someone.
Stefan |
22 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you can then commit your proposed change to manalyzer.
Best,
Stefan |
22 Sep 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you can then commit your proposed change to manalyzer.
That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
K.O. |
22 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you can then commit your proposed change to manalyzer.
>
> That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
No need for clear(), since no memory gets allocated by midas::odb::set_odb_source(). All it does is remember the file name. When you instantiate a midas::odb object, the file gets loaded, and the midas::odb object gets initialized from the file contents. Then the buffer
gets deleted (actually it's a simple local variable). Of course this causes some overhead (each midas::odb() constructor reads the whole file), but since the OS will cache the file, it's probably not so bad.
Stefan |
26 Sep 2025, Mark Grimes, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> ...I talked to KO and he agreed that you can then commit your proposed change to manalyzer
Merged and pushed.
Thanks,
Mark. |
22 Sep 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > ....Before commit of this patch, can you confirm the RunInfo destructor
> > deletes this ODB stuff from odbxx? manalyzer takes object life times very seriously.
>
> The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another
> call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.
this is the behaviour we need to modify.
> > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> Yes, I hadn't realised that was an option.
It is an option I would like to keep open. Not too many use cases, but imagine a "split brain" experiment
that has two MIDAS instances record data into two separate midas files. (if LIGO were to use MIDAS,
consider LIGO Hanford and LIGO Livingston).
Assuming data in these two data sets have common precision timestamps,
our task is to assemble data from two input files into single physics events. The analyzer will need
to read two input files, each file with its run number, its own ODB dump, etc., process the midas
events (unpack, calibrate, filter, etc), look at the timestamps, assemble the data into physics events.
This trivially generalizes into reading 2, 3, or more input files.
> For that to work I guess the aforementioned static members could be made thread local storage, and
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> class member or something.
manalyzer is already multithreaded; if you need to keep track of which thread should see which odbxx global object,
that seems like an abuse of the thread-local storage idea and intent.
> Note that I missed doing the same for the end of run event, which should probably also be added.
Ideally, the memory sanitizer will flag this for us and complain about anything that odbxx.clear() fails to free.
K.O. |
22 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> > Yes, I hadn't realised that was an option.
>
> It is an option I would like to keep open. Not too many use cases, but imagine a "split brain" experiment
> that has two MIDAS instances record data into two separate midas files. (if LIGO were to use MIDAS,
> consider LIGO Hanford and LIGO Livingston).
>
> Assuming data in these two data sets have common precision timestamps,
> our task is to assemble data from two input files into single physics events. The analyzer will need
> to read two input files, each file with it's run number, it's own ODB dump, etc, process the midas
> events (unpack, calibrate, filter, etc), look at the timestamps, assemble the data into physics events.
>
> This trivially generalizes into reading 2, 3, or more input files.
>
> > For that to work I guess the aforementioned static members could be made thread local storage, and
> > processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> > class member or something.
>
> manalyzer is already multithreaded, if you will need to keep track of which thread should see which odbxx global object,
> seems like abuse of the thread-local storage idea and intent.
I made the global variables storing the file name of type "thread_local", so each thread gets its own copy. This means however that each thread must then call midas::odb::set_odb_source() individually before
creating any midas::odb objects. Interestingly enough, I just learned that thread_local (at least under linux) has almost zero overhead, since these variables are placed by the linker into a memory space which is
separate for each thread, so accessing them only means adding a memory offset.
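To illustrate the mechanism with a standalone sketch (the names here are invented, this is not the actual odbxx code): each thread sees its own copy of a thread_local variable, so two threads can use two different ODB sources without any locking.

// each thread gets its own copy of the thread_local variable
#include <cstdio>
#include <string>
#include <thread>

thread_local std::string g_odb_source;        // hypothetical name, one copy per thread

void set_source(const std::string& s) { g_odb_source = s; }

void analyze(const char* dump)
{
   set_source(dump);                           // must be called in this thread
   printf("this thread reads ODB from: %s\n", g_odb_source.c_str());
}

int main()
{
   std::thread t1(analyze, "run00100 ODB dump");
   std::thread t2(analyze, "run00101 ODB dump");
   t1.join();
   t2.join();
   return 0;
}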
Let's see how far we get with this...
Stefan |
22 Sep 2025, Konstantin Olchanski, Info, obsolete mana.c removal
|
Following discussions at the MIDAS workshop and the proposed removal of support for ROOT, the very obsolete mana.c
analyzer framework has reached the end of the line.
Right now we cannot remember any experiment that uses a mana.c based analyzer. Most experiments use analyzers based
on the rootana package (developed for ALPHA-1 at CERN) and on the manalyzer package (developed for ALPHA-2 and ALPHA-
g at CERN, with multithreading support contributed by Joseph McKenna).
If you know of any experiment that uses a mana.c based analyzer, please let us know. We can help with building it
using an outside-of-midas local copy of mana.c or help with migration to a newer framework (or migration to a
framework-free standalone analyzer).
If we do not hear from anybody, we will remove mana.c (and rmana) at the same time as we remove ROOT support from
mlogger (rmlogger).
K.O. |
23 Sep 2025, Andreas Suter, Info, obsolete mana.c removal
|
Hi,
at the LEM Experiment at PSI, we still use mana.c and would like to keep it until the end of 2026, when we will enter a long shutdown.
Then we will switch to the manalyzer. Before that, there is simply no time for the changeover. One thing I already noticed is the "lack" of documentation, since
for a lot of items I found simply "TBW". I know that writing documentation is boring and hard, but I hope that by the time we switch, more complete
documentation will be available.
Thanks a lot for the ongoing development and support
Andreas
> Following discussions at the MIDAS workshop and the proposed removal of support for ROOT, the very obsolete mana.c
> analyzer framework has reached the end of the line.
>
> Right now we cannot remember any experiment that uses a mana.c based analyzer. Most experiments use analyzers based
> on the rootana package (developed for ALPHA-1 at CERN) and on the manalyzer package (developed for ALPHA-2 and ALPHA-
> g at CERN, with multithreading support contributed by Joseph McKenna).
>
> If you know of any experiment that uses a mana.c based analyzer, please let us know. We can help with building it
> using an outside-of-midas local copy of mana.c or help with migration to a newer framework (or migration to a
> framework-free standalone analyzer).
>
> If we do not hear from anybody, we will remove mana.c (and rmana) at the same time as we remove ROOT support from
> mlogger (rmlogger).
>
> K.O. |
23 Sep 2025, Konstantin Olchanski, Info, obsolete mana.c removal
|
> Hi, at the LEM Experiment at PSI, we still use mana.c and would like to keep it until end of 2026, where we will enter a long shutdown.
Excellent, good to hear from you! Once we remove ROOT support, rmana.o will be gone and only mana.o (no ROOT) will remain. Will this break your builds?
One solution could be to copy mana.c from MIDAS into your source tree and compile/link it from there (not from MIDAS).
Perhaps the way to proceed is to create a test branch with ROOT and mana.c removed; you can try it,
report success/fail, and we go from there.
We should schedule this work for when both of us have a block of free time to work on it.
K.O. |
24 Sep 2025, Andreas Suter, Info, obsolete mana.c removal
|
Sorry,
I have now had the time to dig deeper into our code and realized that we actually use rmana, i.e. WITH ROOT. If there is an easy way to temporarily incorporate the necessary parts on our side, we will do it. Without ROOT this would have been quite easy; with ROOT, I am not so sure. Anyhow, as said, the timeline for this is only until the end of 2026.
Andreas
> > Hi, at the LEM Experiment at PSI, we still use mana.c and would like to keep it until end of 2026, where we will enter a long shutdown.
>
> Excellent, good to hear from you! Once we remove ROOT support rmana.o will be gone, only mana.o (no ROOT) will remain. Will this break your builds?
>
> One solution could be to copy mana.c from MIDAS into your source tree and compile/link it from there (not from MIDAS).
>
> Perhaps the way to proceed is create a test branch with ROOT and mana.c removed, you can try it,
> report success/fail and we go from there.
>
> We should schedule this work for when both of us have a block of free time to work on it.
>
> K.O. |
23 Sep 2025, Konstantin Olchanski, Info, long history variable names
|
To record discussion with Stefan about long history variable names.
We have several requests to remove the 32-byte limit on history variable names.
Presently, history variable names are formed from two 32-byte strings: history event name and
history tag name:
* history event name is usually the same as the equipment name (also a 32-byte string)
* history tag name is composed from the /eq/xxx/variables/yyy name (also a 32-byte string) or from
names in /eq/xxx/variables/names and "names zzz", which can have arbitrary length (and the tag name
would have to be truncated).
This worked well for the "per-equipment" history: history events corresponded to equipment/variables,
and all data from equipment/variables were written together in one go. (This is very inefficient if
the value of only one variable is updated.)
Then at some point we implemented "per-variable" history:
* history event name is an equipment name (32-byte string) plus the /eq/xxx/variables/vvv variable name
(also a 32-byte string). (Obviously truncation is quite possible.)
* history tag name is unchanged (also can be truncated)
With "per-variable" history, history events correspond to individual variables (ODB entries) in
/eq/xxx/variables. If value of one variable is updated, only that variable is written to ODB. This
is much more efficient. (If variable is an array, the whole array is written, is variable is a
subdirectory, the whole subdirectory is written).
We considered even finer granularity, writing to the history file only the one value that changed, but
decided against slicing the data too fine. (For arrays, MIDAS frontends usually update all values
of an array, as in "array of 10 temperatures" or "array of 4000 high voltages".)
Many years later, we have the SQL history and the FILE history, which do not have the 32-byte limit
on history event names and history tag names (no limit in the MIDAS C++ code; MySQL and PgSQL have
limits on table name and column name lengths, 64 and 63 bytes respectively, best I can
tell).
But the API still uses the MIDAS "struct TAG" with the 32-byte tag name length limit.
It is pretty easy to change the API to use a new "struct TAG_CXX" with std::string unlimited-
length tag names, but the old MIDAS history will be unable to deal with long names. Hence
the discussion about removing the old MIDAS history and keeping only the FILE and SQL history
(plus the mhdump and mh2sql tools to convert old history files to the new formats).
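Roughly, the two representations would look like this ("TAG" is the existing fixed-size midas.h type reproduced from memory; "TAG_CXX" is only the idea discussed here, not existing code):

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

#define NAME_LENGTH 32              // as in midas.h

typedef struct {                    // existing fixed-size type (see midas.h)
   char     name[NAME_LENGTH];      // tag name, at most 31 characters + '\0'
   uint32_t type;
   uint32_t n_data;
} TAG;

struct TAG_CXX {                    // the idea discussed above, not an existing type
   std::string name;                // unlimited-length tag name
   uint32_t    type;
   uint32_t    n_data;
};

int main()
{
   const char* longname = "Temperature_of_outer_cryostat_shield_sensor_07"; // longer than 31 characters

   TAG t{};
   strncpy(t.name, longname, NAME_LENGTH - 1);        // silently truncated
   TAG_CXX tc{ longname, 0, 0 };                      // kept intact

   printf("old: %s\nnew: %s\n", t.name, tc.name.c_str());
   return 0;
}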
(Some code in mhttpd may need to be corrected for long history names. The javascript code should be
okay; the history plot code may need adjustment to display pathologically long names: use a small font,
truncate, etc.)
K.O. |
23 Sep 2025, Konstantin Olchanski, Info, 64-bit time_t
|
To record discussion with Stefan regarding 64-bit time_t
To remember:
signed 32-bit time_t will overflow in 2038 (soon enough)
unsigned 32-bit time_t will overflow in 2106 ("not my problem")
https://en.wikipedia.org/wiki/Year_2038_problem
https://wiki.debian.org/ReleaseGoals/64bit-time
64-bit Linux uses 64-bit time_t since as far back as el6.
MIDAS uses unsigned 32-bit (DWORD) time-in-seconds in many places (ODB, event
headers, event buffers, etc)
MIDAS also uses unsigned 32-bit (DWORD) time-in-milliseconds in many places.
All time arithmetic is done using unsigned 32-bit math; these time calculations
are good until year 2106, at which time they will wrap around
(but still work correctly).
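To illustrate why the wrap-around is harmless for interval arithmetic (a minimal standalone example, not MIDAS code):

// unsigned 32-bit interval arithmetic is correct modulo 2^32, so the
// difference (b - a) is right even when b has wrapped past zero
#include <cstdint>
#include <cstdio>

int main()
{
   uint32_t a = 0xFFFFFF00u;      // shortly before the wrap
   uint32_t b = a + 0x200u;       // wraps around to 0x100
   uint32_t elapsed = b - a;      // still 0x200 = 512
   printf("elapsed = %u ticks\n", (unsigned) elapsed);
   return 0;
}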
So we do not need to do anything, but...
To reduce confusion between the different time types, we will probably
introduce a 32-bit-time-in-seconds data type (i.e. time32_t alias for uint32_t),
and a 32-bit-time-in-milliseconds data type (i.e. millitime32_t alias for
uint32_t). Also rename ss_time() to ss_time32() and ss_millitime() to
ss_millitime32().
This should help avoid accidental mixing of MIDAS 32-bit time, system 64-bit
time, and MIDAS 32-bit-time-in-milliseconds.
(confusion between time-in-seconds and time-in-milliseconds happened several
times in MIDAS code).
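A sketch of what these aliases could look like (proposal only, none of this exists in midas.h yet):

// proposed aliases, with distinct names documenting which unit a value carries
#include <cstdint>

typedef uint32_t time32_t;         // MIDAS time-in-seconds (wraps in 2106)
typedef uint32_t millitime32_t;    // MIDAS time-in-milliseconds (wraps every ~49.7 days)

static_assert(sizeof(time32_t) == 4,      "stays binary-compatible with DWORD");
static_assert(sizeof(millitime32_t) == 4, "stays binary-compatible with DWORD");

int main()
{
   time32_t      run_start   = 0;     // seconds since the Unix epoch (e.g. from ss_time32())
   millitime32_t poll_period = 100;   // milliseconds (e.g. from ss_millitime32())
   (void) run_start;
   (void) poll_period;
   return 0;
}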
There will be additional discussion and announcements if we go ahead with these
changes.
K.O. |
|