ID | Date | Author | Topic | Subject

3081 | 22 Sep 2025 | Stefan Ritt | Suggestion | Get manalyzer to configure midas::odb when running offline

I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change of manalyzer.
Best,
Stefan

3082 | 22 Sep 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline

> I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change of manalyzer
That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
K.O.

3083 | 22 Sep 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline

> > ... Before commit of this patch, can you confirm the RunInfo destructor
> > deletes this ODB stuff from odbxx? manalyzer takes object life times very seriously.
>
> The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another
> call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.
This is the behaviour we need to modify.
> > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> Yes, I hadn't realised that was an option.
It is an option I would like to keep open. Not too many use cases, but imagine a "split brain" experiment
that has two MIDAS instances recording data into two separate midas files. (If LIGO were to use MIDAS,
consider LIGO Hanford and LIGO Livingston.)
Assuming data in these two data sets have common precision timestamps,
our task is to assemble data from two input files into single physics events. The analyzer will need
to read two input files, each file with its run number, its own ODB dump, etc, process the midas
events (unpack, calibrate, filter, etc), look at the timestamps, assemble the data into physics events.
This trivially generalizes into reading 2, 3, or more input files.
> For that to work I guess the aforementioned static members could be made thread local storage, and
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> class member or something.
manalyzer is already multithreaded; if you need to keep track of which thread should see which odbxx global object,
that seems like an abuse of the thread-local storage idea and intent.
> Note that I missed doing the same for the end of run event, which should probably also be added.
Ideally, the memory sanitizer will flag this for us and complain about anything that odbxx.clear() fails to free.
K.O.

3085 | 22 Sep 2025 | Stefan Ritt | Suggestion | Get manalyzer to configure midas::odb when running offline

> > I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change of manalyzer
>
> That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
No need for clear(), since no memory gets allocated by midas::odb::set_odb_source(). All it does is remember the file name. When you instantiate a midas::odb object, the file gets loaded, and the midas::odb object gets initialized from the file contents. Then the buffer
gets deleted (actually it's a simple local variable). Of course this causes some overhead (each midas::odb constructor reads the whole file), but since the OS will cache the file, it's probably not so bad.
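For illustration, the intended offline usage would then look roughly like this (the exact signature of set_odb_source() and the file/ODB names here are my assumptions, not confirmed API):

   #include <iostream>
   #include "odbxx.h"

   int main()
   {
      // point odbxx at the ODB dump of the input midas file
      // (assumed to simply take the dump file name)
      midas::odb::set_odb_source("run00123.odb"); // hypothetical file name

      // each midas::odb constructor re-reads that file and initializes
      // this instance from its contents
      midas::odb settings("/Experiment/Run Parameters"); // hypothetical path
      std::cout << settings << std::endl;
      return 0;
   }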
Stefan

3086 | 22 Sep 2025 | Stefan Ritt | Suggestion | Get manalyzer to configure midas::odb when running offline

> > > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> > Yes, I hadn't realised that was an option.
>
> It is an option I would like to keep open. Not too many use cases, but imagine a "split brain" experiment
> that has two MIDAS instances recording data into two separate midas files. (If LIGO were to use MIDAS,
> consider LIGO Hanford and LIGO Livingston.)
>
> Assuming data in these two data sets have common precision timestamps,
> our task is to assemble data from two input files into single physics events. The analyzer will need
> to read two input files, each file with its run number, its own ODB dump, etc, process the midas
> events (unpack, calibrate, filter, etc), look at the timestamps, assemble the data into physics events.
>
> This trivially generalizes into reading 2, 3, or more input files.
>
> > For that to work I guess the aforementioned static members could be made thread local storage, and
> > processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> > class member or something.
>
> manalyzer is already multithreaded; if you need to keep track of which thread should see which odbxx global object,
> that seems like an abuse of the thread-local storage idea and intent.
I made the global variables storing the file name of type "thread_local", so each thread gets its own copy. This means however that each thread must then call midas::odb::set_odb_source() individually before
creating any midas::odb objects. Interestingly enough, I just learned that thread_local (at least under Linux) has almost zero overhead, since these variables are placed by the linker into a memory region that is
separate for each thread, so accessing them only means adding a memory offset.
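As a self-contained sketch of the mechanism (variable and function names here are illustrative, not the actual odbxx internals):

   #include <cstdio>
   #include <string>
   #include <thread>

   // one copy of this variable per thread
   thread_local std::string g_odb_source;

   void set_odb_source(const std::string& filename)
   {
      g_odb_source = filename; // only remembers the file name
   }

   int main()
   {
      std::thread t1([] { set_odb_source("run1.odb"); std::printf("t1 sees %s\n", g_odb_source.c_str()); });
      std::thread t2([] { set_odb_source("run2.odb"); std::printf("t2 sees %s\n", g_odb_source.c_str()); });
      t1.join();
      t2.join();
      return 0; // each thread saw only its own file name
   }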
Let's see how far we get with this...
Stefan

3097 | 24 Sep 2025 | Thomas Lindner | Suggestion | Improve process for adding new variables that can be shown in history plots

Documenting a discussion I had with Konstantin a while ago.
One aspect of the MIDAS history plotting I find frustrating is the sequence for adding a new history
variable and then plotting it. At least for me, the sequence for a new variable is:
1) Add a new named variable to a MIDAS bank; compile the frontend and restart it; check that the new
variable is being displayed correctly in MIDAS equipment pages.
2) Stop and restart programs; usually I do:
- stop and restart mlogger
- stop and restart mhttpd
- stop and restart mlogger
- stop and restart mhttpd
3) Start adding the new variable to the history plots.
My frustration is with step 2, where the logger and web server need to be restarted multiple times.
I think that only one of those programs actually needs to be restarted twice, but I can never remember
which one, so I restart both programs twice just to be sure.
I don't entirely understand the sequence of what happens with these restarts so that the web server
becomes aware of what new variables are in the history system, so I can't make a well-motivated
suggestion.
Ideally, step 2 would only require restarting mlogger, with mhttpd automatically becoming aware of
the new variables in the system.
But even just having a single restart of mlogger, then mhttpd, would be an improvement on the current
situation and easier to explain to users.
Hopefully this would not be a huge amount of work.

3099 | 26 Sep 2025 | Mark Grimes | Suggestion | Get manalyzer to configure midas::odb when running offline

> ... I talked to KO and he agreed that you then commit your proposed change of manalyzer
Merged and pushed.
Thanks,
Mark.

Draft | 26 Sep 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline

> > ... I talked to KO and he agreed that you then commit your proposed change of manalyzer
> Merged and pushed.
negative. as I already

3106 | 27 Oct 2025 | Giovanni Mazzitelli | Suggestion | Python sc_frontend.py Display and History variables

We would like to write an sc_frontend in Python instead of C++. All our drivers
work correctly, as well as the creation of the database in the ODB, including the
creation of Commons, Statistics, Variables, and Settings.
However, we are unable to correctly create the database entries needed to manage
History and Display.
As we understand it, in C++ this is handled by setting the EQ_SLOW flag, which
doesn’t seem to be implemented in the Python libraries.
How can we manually create the necessary variables for Display and History?

3112 | 12 Nov 2025 | Jonas A. Krieger | Suggestion | manalyzer root output file with custom filename including run number

Hi all,
Could you please get back to me about whether something like my earlier suggestion might be considered, or if I should set up some workaround to rename files at EOR for our experiments?
https://daq00.triumf.ca/elog-midas/Midas/3042 :
-----------------------------------------------
> Hi all,
>
> Would it be possible to extend manalyzer to support custom .root file names that include the run number?
>
> As far as I understand, the current behavior is as follows:
> The default filename is ./root_output_files/output%05d.root , which can be customized by the following two command line arguments.
>
> -Doutputdirectory: Specify output root file directory
> -Ooutputfile.root: Specify output root file filename
>
> If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
>
> I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.
>
> Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
> https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
>
> Above code would allow the following call syntax: ' ./manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered '
> But note that as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root .
>
> So a safer, but less flexible option might be to instead have the user provide only a prefix, and then attach %05d.root in the code.
>
> Thank you for considering these suggestions!
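For illustration, the substitution in question is plain printf-style formatting; a minimal sketch (my own, not the code from the linked commit):

   #include <cstdio>

   int main()
   {
      const char* pattern = "/data/experiment1_%06d.root"; // as supplied via -O
      int run_number = 42;
      char filename[1024];
      // expands to "/data/experiment1_000042.root"
      std::snprintf(filename, sizeof(filename), pattern, run_number);
      // an incompatible pattern such as "%s" would be undefined behaviour here,
      // which is exactly the safety concern mentioned above
      std::printf("%s\n", filename);
      return 0;
   }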

3114 | 13 Nov 2025 | Stefan Ritt | Suggestion | Python sc_frontend.py Display and History variables

> We would like to write an sc_frontend in Python instead of C++. All our drivers
> work correctly, as well as the creation of the database in the ODB, including the
> creation of Commons, Statistics, Variables, and Settings.
> However, we are unable to correctly create the database entries needed to manage
> History and Display.
>
> As we understand it, in C++ this is handled by setting the EQ_SLOW flag, which
> doesn’t seem to be implemented in the Python libraries.
> How can we manually create the necessary variables for Display and History?
I'm not an expert on the Python part of MIDAS (Ben Smith is), but I know that there are
functions to create keys and set values in the ODB, so you should be able to create
these things manually as you need them.
Stefan

3115 | 13 Nov 2025 | Ben Smith | Suggestion | Python sc_frontend.py Display and History variables

> > We would like to write an sc_frontend in Python instead of C++. All our drivers
> > work correctly, as well as the creation of the database in the ODB, including the
> > creation of Commons, Statistics, Variables, and Settings.
> > However, we are unable to correctly create the database entries needed to manage
> > History and Display.
> >
> > As we understand it, in C++ this is handled by setting the EQ_SLOW flag, which
> > doesn’t seem to be implemented in the Python libraries.
> > How can we manually create the necessary variables for Display and History?
I don't believe any of this is handled automatically by the EQ_SLOW flag in the C++ code. I think you always have to create the history plots manually, normally using the webpage interface.
There is also a function in the python code called "client.hist_create_plot(group_name, panel_name, variables, labels=[])" that can slightly automate this, though you do have to know what midas internally calls your variables.
You can find out what the variables are called either through the webpage interface when creating a plot, or via the python script at $MIDASSYS/python/examples/basic_hist_script.py

3142 | 25 Nov 2025 | Konstantin Olchanski | Suggestion | manalyzer root output file with custom filename including run number

Hi Jonas, thank you for reminding me about this. I hope to work on manalyzer in the next few weeks and I will review the ROOT output file name scheme.
K.O.
> Hi all,
>
> Could you please get back to me about whether something like my earlier suggestion might be considered, or if I should set up some workaround to rename files at EOR for our experiments?
>
> https://daq00.triumf.ca/elog-midas/Midas/3042 :
> -----------------------------------------------
> > Hi all,
> >
> > Would it be possible to extend manalyzer to support custom .root file names that include the run number?
> >
> > As far as I understand, the current behavior is as follows:
> > The default filename is ./root_output_files/output%05d.root , which can be customized by the following two command line arguments.
> >
> > -Doutputdirectory: Specify output root file directory
> > -Ooutputfile.root: Specify output root file filename
> >
> > If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
> >
> > I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.
> >
> > Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
> > https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
> >
> > Above code would allow the following call syntax: ' ./manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered '
> > But note that as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root .
> >
> > So a safer, but less flexible option might be to instead have the user provide only a prefix, and then attach %05d.root in the code.
> >
> > Thank you for considering these suggestions!

3143 | 25 Nov 2025 | Konstantin Olchanski | Suggestion | Improve process for adding new variables that can be shown in history plots

> One aspect of the MIDAS history plotting I find frustrating is the sequence for adding a new history
> variable and then plotting it. ...
this has been a problem in MIDAS for a very long time; we have tried to fix/streamline/improve
it many times and have obviously failed.
this is what must happen when adding a new history variable:
1) a new /eq/xxx/variables/vvv entry must show up in ODB:
1a) add the code for the new data to the frontend
1b) start the frontend
1c) if the new variable is added in the frontend init() method, it will be created in ODB; done.
1d) if the new variable is added by the event readout code (i.e. via a MIDAS event data bank automatically
written to ODB by the RO_ODB flag), then we need to start a run.
1e) if this is not a periodic event, but a beam event or laser event or some other triggered event, we must
also turn on the beam, turn on the laser, etc.
1z) observe that the ODB entry exists
3) mlogger must discover this new ODB entry:
3a) mlogger used to rescan ODB each time something in ODB changed; this code was removed
3b) mlogger used to rescan ODB each time a new run was started; this code was removed
3c) mlogger rescans ODB each time it is restarted; this still works.
so the sequence is like this: modify and restart the frontend, start a run, stop the run, observe the ODB entry is
created, restart mlogger, observe new mhf files are created in the history directory.
4) mhttpd must discover that a new mhf file now exists, read its header to discover the history event and
variable names, and make them available to the history panel editor.
it is not clear to me that this part currently works:
4a) mhttpd caches the history event list and will not see new variables unless this cache is updated.
4b) when the web history panel editor is opened, it is supposed to tell mhttpd to update the cache. I am
pretty sure it worked when I wrote this code...
4c) but obviously it does not work now.
restarting mhttpd obviously makes it load the history data anew, but there is no button on the MIDAS web
pages to make that happen.
so it sounds like I have to sit down and retest this whole scheme to see that it works at least
in some way.
then try to improve it:
a) the frontend dance in (1) is unavoidable
b) mlogger must be restarted; I think Stefan and I agree on this. In theory we could add a web page
button to call an mlogger RPC and have it reload the history, but this button already exists: it's called
"restart mlogger".
c) a newly created history event should automatically show up in the history panel editor without any
additional user action
d) document the two intermediate debugging steps:
d1) check that the new variable was created in ODB
d2) check that mlogger created (and writes to) the new history file
this is how I see it and I am open to suggestions, changes, improvements, etc.
K.O.

3145 | 26 Nov 2025 | Thomas Lindner | Suggestion | Improve process for adding new variables that can be shown in history plots
> 3) mlogger must discover this new ODB entry:
>
> 3a) mlogger used to rescan ODB each time something in ODB changed; this code was removed
> 3b) mlogger used to rescan ODB each time a new run was started; this code was removed
> 3c) mlogger rescans ODB each time it is restarted; this still works.
>
> so the sequence is like this: modify and restart the frontend, start a run, stop the run, observe the ODB entry is
> created, restart mlogger, observe new mhf files are created in the history directory.
I assume that mlogger rescanning the ODB is a somewhat intensive process, and that's why we don't want rescanning to
happen every time the ODB is changed?
Stopping/restarting mlogger is okay. But would it be better to have some alternate way to force mlogger to
rescan the ODB? Like an odbedit command like 'mlogger_rescan'; or some magic ODB key to force the rescan. I
guess neither of these options is really any easier for the developer. It just seems awkward to need to restart
mlogger for this.
It would be great if mhttpd could be fixed so that it updates the cache when the history editor is opened.

3146 | 26 Nov 2025 | Lars Martin | Suggestion | mvodb WS and family type matching

This is not a bug per se, but I find it a little odd that the MVOdb functions RS,
RSA, RSAI, and WSA use std::string as their type, while WS and WSAI use const
char*.
Seems to me like simple overloading a la

   void WS(const char* varname, const std::string v, MVOdbError* error = NULL) {
      WS(varname, v.c_str(), v.size(), error);
   }

should be all that's needed, right?

3149 | 27 Nov 2025 | Stefan Ritt | Suggestion | Improve process for adding new variables that can be shown in history plots

> I assume that mlogger rescanning the ODB is a somewhat intensive process, and that's why we don't want rescanning to
> happen every time the ODB is changed?
A rescan maybe takes some tens of milliseconds: something you can do on every run, but not on every ODB change (like writing to the slow control values).
We would need somewhat more clever code that keeps a copy of the variable names for each equipment; if the names change or the array size changes,
the scan can be triggered.
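Such a check could look roughly like this (illustrative sketch only, not existing mlogger code; all names are made up):

   #include <map>
   #include <string>
   #include <vector>

   // hypothetical cache: equipment name -> variable names seen at the last scan
   static std::map<std::string, std::vector<std::string>> gLastSeenNames;

   // returns true if the variable names for this equipment changed since the
   // last scan (array size changes show up as changed name lists)
   bool names_changed(const std::string& eq, const std::vector<std::string>& names)
   {
      auto it = gLastSeenNames.find(eq);
      bool changed = (it == gLastSeenNames.end()) || (it->second != names);
      gLastSeenNames[eq] = names; // remember for the next scan
      return changed;
   }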
> Stopping/restarting mlogger is okay. But would it be better to have some alternate way to force mlogger to
> rescan the ODB? Like an odbedit command like 'mlogger_rescan'; or some magic ODB key to force the rescan. I
> guess neither of these options is really any easier for the developer. It just seems awkward to need to restart
> mlogger for this.
Indeed. But whatever "new" scheme we design for the scan, users will complain "last week it was enough to restart the logger, now what do I have to do?". So nothing
is perfect. But having a button in the ODB editor like "Rebuild history database" might look more elegant. One issue is that it needs special treatment, since
the logger (in the Mu3e experiment) needs >10 s for the scan, so a simple RPC call will time out.
Let's see what KO has to say on this.
Best,
Stefan

3151 | 27 Nov 2025 | Konstantin Olchanski | Suggestion | mvodb WS and family type matching

> This is not a bug per se, but I find it a little odd that the MVOdb functions RS,
> RSA, RSAI, and WSA use std::string as their type, while WS and WSAI use const
> char*.
>
> Seems to me like simple overloading a la
>
>    void WS(const char* varname, const std::string v, MVOdbError* error = NULL) {
>       WS(varname, v.c_str(), v.size(), error);
>    }
>
> should be all that's needed, right?
No short answer to this one.
This situation is an excellent example of c++ bloat. Reduced to bare basics:
1) "naive" c++ code:
void foo(std::string xxx) { ... };
int main() { foo("bar"); }
nominally:
a new string object is created to hold "bar",
a new string object is copy-created to pass it as an argument to foo().
result:
two object creations (two calls to malloc() + constructors)
plus memcpy() of the string data. (the compiler may or may not optimize away the 2nd string)
2) "advanced" c++ code:
void foo(const std::string& xxx) { ... };
int main() { foo("bar"); }
the copy-created 2nd string is avoided, but the string object to hold "bar" must still be
created: 1 malloc(), 1 memcpy().
3) "pure C" code:
void foo(const char* xxx) { ... };
int main() { foo("bar"); }
the address of "bar" (placed in read-only memory) is passed in a register: no malloc(), no
memcpy(), nada, zilch.
One can argue that bloat does not matter, "just buy a bigger computer".
This ignores the fact that malloc() is quite expensive: it nominally requires taking a
mutex, and suddenly multiple threads calling foo() are unexpectedly serialized against
the malloc() internal mutex.
I guess you can have an advanced malloc() that uses per-thread memory pools, but now
instead of deterministic "always take a lock", we have non-deterministic "take a lock
sometimes, when per-thread memory pools decide to jockey for more memory".
This type of non-deterministic behaviour is bad for real-time applications.
Ultimately it boils down to personal style. I prefer "C-like" efficiency and
transparency: when I call foo(), it is obvious there will be no hidden malloc() and no
hidden mutex.
I guess mvodb could have a "const std::string&" version of each "const char*" function,
as if there were not enough functions there already...
This problem is not isolated to mvodb, but pertains to any API, including midas.h.
I would say: if most function calls are foo("abc"); then the "const char*" version is
sufficient; if most calls are foo(string + "something"); then "const std::string&" is
more appropriate.
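For what it's worth, the dual interface could look like this minimal sketch (free functions for illustration only; the real mvodb methods differ):

   #include <cstdio>
   #include <string>

   // "C-like" core version: no hidden malloc(), no hidden mutex
   void WS(const char* varname, const char* v)
   {
      std::printf("write %s = %s\n", varname, v); // stand-in for the real ODB write
   }

   // thin std::string convenience wrapper, forwards to the core version
   inline void WS(const char* varname, const std::string& v)
   {
      WS(varname, v.c_str());
   }

   int main()
   {
      WS("/test/a", "abc");         // const char* version, no allocation
      std::string s = "prefix";
      WS("/test/b", s + "_suffix"); // std::string version, one temporary string
      return 0;
   }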
K.O.

3153 | 27 Nov 2025 | Konstantin Olchanski | Suggestion | Improve process for adding new variables that can be shown in history plots

> > I assume that mlogger rescanning the ODB is a somewhat intensive process, and that's why we don't want rescanning to
> > happen every time the ODB is changed?
>
> A rescan maybe takes some tens of milliseconds: something you can do on every run, but not on every ODB change (like writing to the slow control values).
> We would need somewhat more clever code that keeps a copy of the variable names for each equipment; if the names change or the array size changes,
> the scan can be triggered.
>
That's right, scanning ODB for history changes is essentially free.
Question is what do we do if something was added or removed.
I see two ways to think about it:
1) history is independent from "runs": we see a change, we apply it (even if it takes 10 sec or 2 minutes).
2) "nothing should change during a run": we must process all changes before we start a run (starting a run takes forever),
and we must ignore changes during a run (i.e. an updated frontend starts to write new data to history). (this is why
the trick to "start a new run twice" used to work).
>
> > Stopping/restarting mlogger is okay. But would it be better to have some alternate way to force mlogger to
> > rescan the ODB?
>
It is "free" to rescan ODB every 10 second or so. Then we can output a midas message "please restart the logger",
and set an ODB flag, then when user opens the history panel editor, it will see this flag
and tell the user "please restart the logger to see the latest changes in history". It can even list
the specific changes, if we want ot be verbose about it.
>
> Indeed. But whatever "new" scheme we design for the scan, users will complain "last week it was enough to restart the logger, now what do I have to do?". So nothing
> is perfect. But having a button in the ODB editor like "Rebuild history database" might look more elegant. One issue is that it needs special treatment, since
> the logger (in the Mu3e experiment) needs >10 s for the scan, so a simple RPC call will time out.
>
I like the elegance of "just restart the logger".
Having a web page button to tell the logger to rescan the history is technically cumbersome
(the web page calls mjsonrpc to mhttpd, mhttpd calls a midas rpc to mlogger "please set a flag to rescan the history",
then the web page polls mhttpd, which polls mlogger for "are you done yet?"; or instead of polling,
deal with double timeouts, in the midas rpc to mlogger and the mjsonrpc timeout in javascript).
And to avoid violating (2) above, we must tell the user "you cannot push this button during a run!".
I say, let's take the low road for now and see if it's good enough:
a) have the history system report any changes in midas.log - "history event added", "new history variable added" (or "renamed");
this will let the user see that their changes to the equipment frontend "took" and flag any accidental/unwanted changes.
b) have mlogger periodically scan ODB and set a "please restart me" flag; observe this flag in the history editor
and tell the user "please restart the logger to see the latest changes in the history".
K.O.

3156 | 27 Nov 2025 | Thomas Lindner | Suggestion | Improve process for adding new variables that can be shown in history plots

> > Indeed. But whatever "new" scheme we design for the scan, users will complain "last week it was enough to restart the logger, now what do I have to do?". So nothing
> > is perfect. But having a button in the ODB editor like "Rebuild history database" might look more elegant. One issue is that it needs special treatment, since
> > the logger (in the Mu3e experiment) needs >10 s for the scan, so a simple RPC call will time out.
> >
>
> I like the elegance of "just restart the logger".
>
> Having a web page button to tell the logger to rescan the history is technically cumbersome
> (the web page calls mjsonrpc to mhttpd, mhttpd calls a midas rpc to mlogger "please set a flag to rescan the history",
> then the web page polls mhttpd, which polls mlogger for "are you done yet?"; or instead of polling,
> deal with double timeouts, in the midas rpc to mlogger and the mjsonrpc timeout in javascript).
>
> And to avoid violating (2) above, we must tell the user "you cannot push this button during a run!".
>
> I say, let's take the low road for now and see if it's good enough:
>
> a) have the history system report any changes in midas.log - "history event added", "new history variable added" (or "renamed");
> this will let the user see that their changes to the equipment frontend "took" and flag any accidental/unwanted changes.
>
> b) have mlogger periodically scan ODB and set a "please restart me" flag; observe this flag in the history editor
> and tell the user "please restart the logger to see the latest changes in the history".
This seems like a reasonable plan to me (combined with clear documentation).
Thomas