19 May 2025, Jonas A. Krieger, Suggestion, manalyzer root output file with custom filename including run number
|
Hi all,
Would it be possible to extend manalyzer to support custom .root file names that include the run number?
As far as I understand, the current behavior is as follows:
The default filename is ./root_output_files/output%05d.root, which can be customized with the following two command line arguments:
-Doutputdirectory: Specify output root file directory
-Ooutputfile.root: Specify output root file filename
If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O.
I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run, a scenario that I would like to avoid if possible.
Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec
The above code would allow the following call syntax: './manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered'
But note that, as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root.
So a safer, but less flexible, option might be to instead have the user provide only a prefix, and then attach %05d.root in the code, as sketched below.
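For illustration, a minimal sketch of the prefix variant (the helper name is hypothetical, not existing manalyzer code):

#include <cstdio>
#include <string>

// Hypothetical helper: build the output filename from a user-supplied prefix
// plus the run number, so no user-provided printf format can be malformed.
std::string MakeOutputFilename(const std::string& prefix, int run_number)
{
   char buf[32];
   std::snprintf(buf, sizeof(buf), "%05d.root", run_number); // fixed, safe format
   return prefix + buf; // e.g. "/data/experiment1_" + "00042.root"
}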
Thank you for considering these suggestions! |
16 May 2025, Marius Koeppel, Bug Report, history_schema.cxx fails to build
|
Hi all,
we have a CI setup that has been failing to build history_schema.cxx since 06.05.2025. There was a major change in this code in commits fe7f6a6 and 159d8d3.
image: rootproject/root:latest
pipelines:
default:
- step:
name: 'Build and test'
runs-on:
- self.hosted
- linux
script:
- apt-get update
- DEBIAN_FRONTEND=noninteractive apt-get -y install python3-all python3-pip python3-pytest-dependency python3-pytest
- DEBIAN_FRONTEND=noninteractive apt-get -y install gcc g++ cmake git python3-all libssl-dev libz-dev libcurl4-gnutls-dev sqlite3 libsqlite3-dev libboost-all-dev linux-headers-generic
- gcc -v
- cmake --version
- git clone https://marius_koeppel@bitbucket.org/tmidas/midas.git
- cd midas
- git submodule update --init --recursive
- mkdir build
- cd build
- cmake ..
- make -j4 install
Error is:
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:5991:10: error: ‘class HsSqlSchema’ has no member named ‘table_name’; did you mean ‘fTableName’?
5991 | s->table_name = xtable_name;
| ^~~~~~~~~~
| fTableName
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx: In member function ‘virtual int PgsqlHistory::read_column_names(HsSchemaVector*, const char*, const char*)’:
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:6034:14: error: ‘class HsSqlSchema’ has no member named ‘table_name’; did you mean ‘fTableName’?
6034 | if (s->table_name != table_name)
| ^~~~~~~~~~
| fTableName
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:6065:16: error: ‘struct HsSchemaEntry’ has no member named ‘fNumBytes’
6065 | se.fNumBytes = 0;
| ^~~~~~~~~
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:6140:30: error: ‘__gnu_cxx::__alloc_traits<std::allocator<HsSchemaEntry>, HsSchemaEntry>::value_type’ {aka ‘struct HsSchemaEntry’} has no member named ‘fNumBytes’
6140 | s->fVariables[j].fNumBytes = tid_size;
| ^~~~~~~~~
At global scope:
cc1plus: note: unrecognized command-line option ‘-Wno-vla-cxx-extension’ may have been intended to silence earlier diagnostics
make[2]: *** [CMakeFiles/objlib.dir/build.make:384: CMakeFiles/objlib.dir/src/history_schema.cxx.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:404: CMakeFiles/objlib.dir/all] Error 2
make: *** [Makefile:136: all] Error 2 |
16 May 2025, Konstantin Olchanski, Bug Report, history_schema.cxx fails to build
|
> we have a CI setup which fails since 06.05.2025 to build the history_schema.cxx.
> There was a major change in this code in the commits fe7f6a6 and 159d8d3.
Missing from this report is critical information: HAVE_PGSQL is set.
I will have to check why it is not set in my development account.
I will have to check why it is not set in our bitbucket build.
Thank you for reporting this problem.
K.O. |
16 May 2025, Konstantin Olchanski, Bug Report, history_schema.cxx fails to build
|
> > we have a CI setup which fails since 06.05.2025 to build the history_schema.cxx.
> > There was a major change in this code in the commits fe7f6a6 and 159d8d3.
>
> Missing from this report is critical information: HAVE_PGSQL is set.
>
> I will have to check why it is not set in my development account.
>
The following is needed to build MySQL and PgSQL support in MIDAS;
both were missing on my development machine. MySQL support was enabled
by accident because kde-bloat packages pull in the MySQL (not the MariaDB)
client and server. Fixed now, and added to the standard list of Ubuntu packages:
https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu#install_missing_packages
apt -y install mariadb-client libmariadb-dev ### mysql client for MIDAS
apt -y install postgresql-common libpq-dev ### postgresql client for MIDAS
>
> I will have to check why it is not set in our bitbucket build.
>
Added MySQL and PgSQL to bitbucket Ubuntu-24 build (sqlite was already enabled).
>
> Thank you for reporting this problem.
>
Fix committed. Sorry about this problem.
K.O. |
05 May 2025, Konstantin Olchanski, Info, db_delete_key(TRUE)
|
I was working on an odb corruption crash inside db_delete_key() and I noticed
that I did not test db_delete_key() with follow_links set to TRUE. Then I noticed
that nobody anywhere seems to use db_delete_key() with follow_links set to TRUE.
Instead of testing it, can I just remove it?
This feature has existed since day 1 (1st commit) and it does something unexpected
compared to filesystem "/bin/rm": as best I can tell, it removes the link
*and* whatever the link points to. For people familiar with "/bin/rm", this is
somewhat unexpected and by my thinking, if nobody ever added such a feature to
"/bin/rm", it is probably not considered generally useful or desirable. (I would
think it dangerous: it removes not 1 but 2 files, and the 2nd file could be in some
other directory far away from where we are.)
By this thinking, I should remove "follow_links" (actually just make it do nothing,
to reduce the disturbance to other source code). db_delete_key() should work
similar to /bin/rm aka the unlink() syscall.
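For reference, a minimal sketch of the unlink()-style usage that would remain (delete_link_only is an illustrative helper, not MIDAS code):

#include "midas.h"

void delete_link_only(HNDLE hDB, const char* link_path)
{
   HNDLE hKey;
   // db_find_link() returns a handle to the link itself, without resolving it
   if (db_find_link(hDB, 0, link_path, &hKey) == DB_SUCCESS)
      db_delete_key(hDB, hKey, FALSE); // FALSE: delete only the link, not its target
}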
K.O. |
05 May 2025, Stefan Ritt, Info, db_delete_key(TRUE)
|
I would handle this actually like symbolic links are handled under Linux. If you delete a symbolic link, the link gets
deleted and NOT the file the link is pointing to.
So I conclude that the "follow links" is a misconception and should be removed.
Stefan |
05 May 2025, Konstantin Olchanski, Bug Report, abort and core dump in cm_disconnect_experiment()
|
I noticed that in some programs like mhist, if they take too long, there is an abort and core dump at the very end. This is because they forgot to
set/disable the watchdog timeout, and they get removed from odb and from the SYSMSG event buffer.
mhist is easy to fix, just add the missing call to disable the watchdog (see the sketch below), but I also see a similar crash in the mserver, which of course requires
the watchdog.
In either case, the crash is in cm_disconnect_experiment() where we know we are shutting down and we know there is no useful information in the
core dump.
I think I will fix it by adding a flag to bm_close_buffer() to bypass/avoid the crash from "we are already removed from this buffer".
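For reference, the mhist-style fix amounts to something like this sketch (connection arguments are illustrative):

#include "midas.h"

int main()
{
   cm_connect_experiment("", "", "mhist", NULL);
   cm_set_watchdog_params(FALSE, 0); // no watchdog: client is never timed out
   /* ... long-running history queries ... */
   cm_disconnect_experiment();
   return 0;
}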
Stack trace from mhist:
[mhist,ERROR] [midas.cxx:5977:bm_validate_client_index,ERROR] My client index 6 in buffer 'SYSMSG' is invalid: client name '', pid 0 should be my
pid 3113263
[mhist,ERROR] [midas.cxx:5980:bm_validate_client_index,ERROR] Maybe this client was removed by a timeout. See midas.log. Cannot continue,
aborting...
bm_validate_client_index: My client index 6 in buffer 'SYSMSG' is invalid: client name '', pid 0 should be my pid 3113263
bm_validate_client_index: Maybe this client was removed by a timeout. See midas.log. Cannot continue, aborting...
Program received signal SIGABRT, Aborted.
Download failed: Invalid argument. Continuing without source file ./nptl/./nptl/pthread_kill.c.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:44
warning: 44 ./nptl/pthread_kill.c: No such file or directory
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=<optimized out>, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff71df27e in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff71c28ff in __GI_abort () at ./stdlib/abort.c:79
#5 0x00005555555768b4 in bm_validate_client_index_locked (pbuf_guard=...) at /home/olchansk/git/midas/src/midas.cxx:5993
#6 0x000055555557ed7a in bm_get_my_client_locked (pbuf_guard=...) at /home/olchansk/git/midas/src/midas.cxx:6000
#7 bm_close_buffer (buffer_handle=1) at /home/olchansk/git/midas/src/midas.cxx:7162
#8 0x000055555557f101 in cm_msg_close_buffer () at /home/olchansk/git/midas/src/midas.cxx:490
#9 0x000055555558506b in cm_disconnect_experiment () at /home/olchansk/git/midas/src/midas.cxx:2904
#10 0x000055555556d2ad in main (argc=<optimized out>, argv=<optimized out>) at /home/olchansk/git/midas/progs/mhist.cxx:882
(gdb)
Stack trace from mserver:
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=138048230684480) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=138048230684480) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=138048230684480) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=138048230684480, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007d8ddbc4e476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007d8ddbc347f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x000059beb439dab0 in bm_validate_client_index_locked (pbuf_guard=...) at /home/dsdaqdev/packages_common/midas/src/midas.cxx:5993
#6 0x000059beb43a859c in bm_get_my_client_locked (pbuf_guard=...) at /home/dsdaqdev/packages_common/midas/src/midas.cxx:6000
#7 bm_close_buffer (buffer_handle=<optimized out>) at /home/dsdaqdev/packages_common/midas/src/midas.cxx:7162
#8 0x000059beb43a89af in bm_close_all_buffers () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:7256
#9 bm_close_all_buffers () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:7243
#10 0x000059beb43afa20 in cm_disconnect_experiment () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:2905
#11 0x000059beb43afdd8 in rpc_check_channels () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:16317
#12 0x000059beb43b0cf5 in rpc_server_loop () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:15858
#13 0x000059beb4390982 in main (argc=9, argv=0x7ffc07e5bed8) at /home/dsdaqdev/packages_common/midas/progs/mserver.cxx:387
K.O. |
05 May 2025, Stefan Ritt, Bug Report, abort and core dump in cm_disconnect_experiment()
|
I would be in favor of not curing the symptoms, but fixing the cause of the problem. I guess you put the watchdog disable into mhist, right? Usually mhist is called locally, so no mserver should be
involved. If not, I would prefer to propagate the watchdog disable to the mserver side as well, if that has not been done already. Actually I would never disable the watchdog, but rather set it to a reasonably
maximal value, like a few minutes or so. In that case, the client still gets removed if it crashes for some reason.
My five cents,
Stefan |
05 May 2025, Konstantin Olchanski, Bug Fix, Bug fix in SQL history
|
A bug was introduced to the SQL history in 2022 that made renaming of variable names not work. This is now fixed.
break commit:
54bbc9ed5d65d8409e8c9fe60b024e99c9f34a85
fix commit:
159d8d3912c8c92da7d6d674321c8a26b7ba68d4
P.S.
This problem was caused by an unfortunate design of the C++ class system. If I want to add more data to an existing
class, I write this:

class old_class {
   int i,j,k;
};

class bigger_class: public old_class {
   int additional_variable;
};

But if I have this:

#include <cstdio>
#include <vector>

struct x { int i,j; };

class y {
protected:
   std::vector<x> array_of_x;
};

and I want to add "k" to "x", C++ has no way to do this. The history code has this workaround:

class bigger_y: public y {
   std::vector<int> array_of_k; // must stay parallel to array_of_x

   int foo(int n) {
      printf("%d %d %d\n", array_of_x[n].i, array_of_x[n].j, array_of_k[n]);
      return 0;
   }
};

The problem is that it is not obvious that "array_of_x" and "array_of_k" are connected,
and they can easily get out of sync (if elements are added or removed). This is the
bug that happened in the history code. I now added assert(array_of_x.size()==array_of_k.size())
to offer at least some protection going forward.
P.S. As final solution I think I want to completely separate file history and sql history code,
they have more things different than common.
K.O. |
29 Apr 2025, Pavel Murat, Bug Report, ODBXX : ODB links in the path names ?
|
Dear MIDAS experts,
does the ODBXX interface to ODB currently support ODB links in the path names? From what I see so far, it currently fails to do so,
but I could be doing something else wrong...
-- thanks, regards, Pasha |
30 Apr 2025, Stefan Ritt, Bug Report, ODBXX : ODB links in the path names ?
|
Indeed this was missing from the very beginning. I added it, please report back if it's not working.
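A minimal way to exercise it (link and key names are hypothetical; "/Alias/Settings" is assumed to be an ODB link to "/Test/Settings"):

#include <iostream>
#include <odbxx.hxx>

int main()
{
   midas::odb o("/Alias/Settings");          // path goes through the link
   std::cout << o["Int32 Key"] << std::endl; // should read the linked value
   return 0;
}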
Stefan |
12 May 2020, Stefan Ritt, Info, New ODB++ API
|
Since the beginning of the lockdown I have been working hard on a new object-oriented interface to the online database ODB. I have the code now in an initial state where it is ready for
testing and commenting. The basic idea is that there is an object midas::odb, which represents a value or a sub-tree in the ODB. Reading, writing and watching is done through this
object. To get started, the new API has to be included with
#include <odbxx.hxx>
To create ODB values under a certain sub-directory, you can either create one key at a time like:
midas::odb o;
o.connect("/Test/Settings", true); // this creates /Test/Settings
o.set_auto_create(true); // this turns on auto-creation
o["Int32 Key"] = 1; // create all these keys with different types
o["Double Key"] = 1.23;
o["String Key"] = "Hello";
or you can create a whole sub-tree at once like:
midas::odb o = {
{"Int32 Key", 1},
{"Double Key", 1.23},
{"String Key", "Hello"},
{"Subdir", {
{"Another value", 1.2f}
}}
};
o.connect("/Test/Settings");
To read and write to the ODB, just read and write to the odb object
int i = o["Int32 Key"];
o["Int32 Key"] = 42;
std::cout << o << std::endl;
This works with basic types, strings, std::array and std::vector. Each read access to this object triggers an underlying read from the ODB, and each write access triggers a write to the
ODB. To watch a value for change in the odb (the old db_watch() function), you can use now c++ lambdas like:
o.watch([](midas::odb &o) {
std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
});
Attached is a full running example, which is now also part of the midas repository. I have tested most things, but would not yet use it in a production environment. I am not 100% sure if there
are any memory leaks. If someone could valgrind the test program, I would appreciate it (valgrind currently does not work on my Mac).
Have fun!
Stefan
|
20 May 2020, Konstantin Olchanski, Info, New ODB++ API
|
> midas::odb o;
> o["foo"] = 1;
This is an excellent development.
ODB is a tree-structured database, JSON is a tree-structured data format,
and they seem to fit together like hand and glove. For programming
web pages, Javascript and JSON-style access to ODB seems to work really well.
And now with modern C++ we can have a similar API for working with ODB tree data,
as if it were Javascript JSON tree data.
Let's see how well it works in practice!
K.O. |
20 May 2020, Stefan Ritt, Info, New ODB++ API
|
In the meantime, there have been minor changes and improvements to the API:
Previously, we had:
> midas::odb o;
> o.connect("/Test/Settings", true); // this creates /Test/Settings
> o.set_auto_create(true); // this turns on auto-creation
> o["Int32 Key"] = 1; // create all these keys with different types
> o["Double Key"] = 1.23;
> o["String Key"] = "Hello";
Now, we only need:
o.connect("/Test/Settings");
o["Int32 Key"] = 1; // create all these keys with different types
...
no "true" needed any more. If the ODB tree does not exist, it gets created. Similarly, set_auto_create() can be dropped, it's on by default (thought this makes more sense). Also the iteration over subkeys has
been changed slightly.
The full example attached has been updated accordingly.
Best,
Stefan |
20 May 2020, Pintaudi Giorgio, Info, New ODB++ API
|
All this is very good news. I really wish this were available some months ago: it would have helped me immensely. The old C API was clunky at best.
I really like the idea and looking forward to using it (even if at the moment I do not have the need to) ... |
20 May 2020, Konstantin Olchanski, Info, New ODB++ API
|
> All this is very good news. I really wish this were available some months ago: it would have helped me immensely. The old C API was clunky at best.
> I really like the idea and looking forward to using it (even if at the moment I do not have the need to) ...
Yes, I have designed new C-style MIDAS ODB APIs twice now (VirtualOdb in ROOTANA and MVOdb in ROOTANA and MIDAS),
and I was never happy with the results. There are too many corner cases and odd behaviours. Let's see how
this C++ interface shakes out.
For use in analyzers, Stefan's C++ interface still needs to be virtualized - right now it has only one implementation,
with the MIDAS ODB backend. In analyzers, we need XML, JSON (and NULL ODB) backends. The API looks
to be clean enough to add this, but I have not looked at the implementation yet. So "watch this space", as they say.
K.O. |
30 Apr 2025, Stefan Ritt, Info, New ODB++ API
|
I had to change the ODBXX API: https://bitbucket.org/tmidas/midas/commits/273c4e434795453c0c6bceb46bac9a0d2d27db18
The old C API is case-insensitive, meaning db_find_key("name") returns a key "name" or "Name" or "NAME". We can discuss whether this is good or bad, but that's how it has been for 30 years.
I now realized that the ODBXX API is case-sensitive, so o["NAME"] does not return an existing key "name". Rather, it tries to create a new key, which of course fails. I therefore changed
ODBXX to be case-insensitive like the old C API.
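For illustration, both spellings now resolve to the same key (key names hypothetical):

#include <iostream>
#include <odbxx.hxx>

void example()
{
   midas::odb o("/Test/Settings");
   o["Int32 Key"] = 42;
   int i = o["int32 key"]; // case-insensitive: resolves to "Int32 Key", so i == 42
   std::cout << i << std::endl;
}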
Stefan |
30 Apr 2025, Pavel Murat, Info, New ODB++ API
|
it is a very convenient interface! Does it support the ODB links in the path names ? -- thanks, regards, Pasha |
16 Apr 2025, Thomas Lindner, Info, MIDAS workshop (online) Sept 22-23, 2025
|
Dear MIDAS enthusiasts,
We are planning a fifth MIDAS workshop, following on from previous successful
workshops in 2015, 2017, 2019 and 2023. The goals of the workshop include:
- Getting updates from MIDAS developers on new features, bug fixes and planned
changes.
- Getting reports from MIDAS users on how they are using MIDAS and what problems
they are facing.
- Making plans for future MIDAS changes and improvements.
We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide
with a visit of Stefan to TRIUMF). We are tentatively planning to have a four
hour session on each day, with the sessions timed for morning in Vancouver and
afternoon/evening in Europe. Sorry, the sessions are likely to again not be well
timed for our colleagues in Asia.
We will provide exact times and more details closer to the date. But I hope
people can mark the dates in their calendars; we are keen to hear from as much of
the MIDAS community as possible.
Best Regards,
Thomas Lindner |
08 Apr 2025, Lukas Mandokk, Info, MSL Syntax Highlighting Extension for VSCode (Release)
|
Hello everyone,
I just wanted to let you know, that I published a MSL Syntax Highlighting Extension for VSCode.
It is still at quite an early stage, so there might be some missing keywords and edge cases that are not fully handled. So in case you find any issues or have suggestions for improvements, I am happy to implement them. Also, I have only tested it with a custom theme (One Monokai), so it might look very different with the default theme and other ones.
The extension is called "MSL Syntax Highlighter" and can be found in the extension marketplace in VSCode. (vscode marketplace: https://marketplace.visualstudio.com/items?itemName=LukasMandok.msl-syntax-highlighter, github repo: https://github.com/LukasMandok/msl-syntax-highlighter)
One additional remark:
- To keep a consistent style with existing themes, one is a bit limited in regard to colors. For this reason, a distinction between LOOP and IF blocks is not really possible without writing a custom theme. A workaround would be to add the theming in the custom user settings (explained in the readme). |
01 Apr 2025, Lukas Gerritzen, Suggestion, Sequencer ODBSET feature requests
|
I would like to request the following sequencer features if you find the ideas as sensible as I do:
- A "Reload File" button
- Support for patterns in ODBSET, e.g.:
-
ODBSET "/Path/value[1,3,5]", 1 -
ODBSET "/Path/value[1-5,7-9]", 1 - Arbitrary combinations of the above
- Support for variable substitution:
-
SET GOODCHANNELS, "1-5,7,9"; ODBSET "/Path/value[$GOODCHANNELS]", 1 -
SET BADCHANNELS, "6,8"; ODBSET "/Path/value[!$BADCHANNELS]", 1 -
ODBSET "/Path/value[0-100, except $BADCHANNELS]", 1
To add some context: I am using the sequencer for a voltage scan of several thousand channels. However, a few dozen of them have shorts, so I cannot simply set all demands to the voltage step. Currently, this is solved with a manually-created ODB file for each individual voltage step, but as you can imagine, this is quite difficult to maintain.
I also encountered a small annoyance in the current workflow of editing sequencer files in the browser:
- Load a file
- Double-click it to edit it, acknowledge the "To edit the sequence it must be opened in an editor tab" dialog
- A new tab opens
- Edit something, click "Start", acknowledge the "Save and start?" dialog (which pops up even if no changes are made)
- Run the script
- Double-click to make more changes -> another tab opens
After a while, many tabs with the same file are open. I understand this may be considered "user error", but perhaps the sequencer could avoid opening redundant tabs for the same file, or prompt before doing so?
Thanks for considering these suggestions! |
01 Apr 2025, Lukas Gerritzen, Suggestion, Sequencer ODBSET feature requests
|
While trying to simplify the existing spaghetti code, I encountered problems with type safety. Compare the following:

SET v, "54"
SET file, "MPPCHV_$v.odb"
ODBLOAD $file -> successfully loads MPPCHV_54.odb

SET v, "54.2"
SET file, "MPPCHV_$v.odb"
ODBLOAD $file -> Error reading file "[...]/MPPCHV_54.200000.odb"

The "54.2" appears to be stored as a float rather than a string. Maybe "54" was stored as an integer? I don't know how to verify this in odbedit.
Actually, I would be fine with setting the value as a float, as it allows arithmetic. In that case, I would appreciate something like a SPRINTF function in MSL:

SET v, 54.2
SPRINTF file, "MPPCHV_%f.odb", $v
ODBLOAD $file

Or, maybe a bit more modern, something akin to Python's f-strings:

ODBLOAD f"MPPCHV_{v:.1f}.odb" |
01 Apr 2025, Stefan Ritt, Suggestion, Sequencer ODBSET feature requests
|
A new sequencer which understands Python is in the works. There you can use all features from that language.
Stefan |
01 Apr 2025, Stefan Ritt, Suggestion, Sequencer ODBSET feature requests
|
The extended ODBSET[x,y1-y2,z] could make sense to be implemented, since it then will match the alarm system which uses the same syntax.
The $GOODCHANNELS/$BADCHANNELS is however a very strange syntax which I haven't seen in any other computer language. It would probably take me several days to properly implement this, while it would take you much less time to explicitly use a few ODBSET statements to set the bad channels to zero.
For the file edit workflow, the author of the editor will have a look.
Stefan
Lukas Gerritzen wrote: | I would like to request the following sequencer features if you find the ideas as sensible as I do:
- A "Reload File" button
- Support for patterns in ODBSET, e.g.:
-
ODBSET "/Path/value[1,3,5]", 1 -
ODBSET "/Path/value[1-5,7-9]", 1 - Arbitrary combinations of the above
- Support for variable substitution:
-
SET GOODCHANNELS, "1-5,7,9"; ODBSET "/Path/value[$GOODCHANNELS]", 1 -
SET BADCHANNELS, "6,8"; ODBSET "/Path/value[!$BADCHANNELS]", 1 -
ODBSET "/Path/value[0-100, except $BADCHANNELS]", 1
|
|
01 Apr 2025, Konstantin Olchanski, Suggestion, Sequencer ODBSET feature requests
|
> ODBSET "/Path/value[1,3,5]"
> ODBSET "/Path/value[1-5,7-9]"
we support this array index syntax in several places,
specifically, in javascript odb get and set mjsonrpc RPCs.
> SET GOODCHANNELS, "1-5,7,9"; ODBSET "/Path/value[$GOODCHANNELS]"
> SET BADCHANNELS, "6,8"; ODBSET "/Path/value[!$BADCHANNELS]"
> ODBSET "/Path/value[0-100, except $BADCHANNELS]"
this is very clever syntax, but I have not seen any programming
language actually implement it (not even perl).
there must be a good reason why nobody does this. probably we should not do it either.
but as Stefan said (and in my opinion), the route of extending the MIDAS sequencer
language until it becomes a superset of python, perl, tcl, bash, javascript
and algol is not a sustainable approach. I once looked at using LUA for this,
but I think basing off a full-featured programming language like python
is better.
K.O. |
01 Apr 2025, Pavel Murat, Suggestion, Sequencer ODBSET feature requests
|
> I once looked at using LUA for this,
> but I think basing off an full featured programming language like python
> is better.
if it came to a vote, my vote would go to Lua: it would allow doing everything needed,
with far fewer external dependencies and with much less motivation to over-use the interpreter.
The CMS experience was very instructive in this respect...
-- my 2c, regards, Pasha |
02 Apr 2025, Konstantin Olchanski, Suggestion, Sequencer ODBSET feature requests
|
> I once looked at using LUA for this
>
> > but I think basing off an full featured programming language like python
> > is better.
>
> if it came to a vote, my vote would go to Lua: it would allow to do everything needed,
> with much less external dependencies and with much less motivation to over-use the interpreter.
> The CMS experience was very teaching in this respect...
Unfortunately I am not familiar enough with Lua to say how nice or how bad it is. And we are
not sure how well it supports the single-line-stepping that permits the nice graphical
visualization of Stefan's sequencer.
It looks like python has single-line-stepping built in as a standard feature,
and python is a more popular and more versatile language, so to me python looks
like a better choice compared to lua (obscure), perl ("nobody uses it anymore")
or bash (ugly syntax).
K.O. |
02 Apr 2025, Stefan Ritt, Suggestion, Sequencer ODBSET feature requests
|
And there is one more argument:
We have a Python expert in our development team who already wrote the Python-to-C bindings. That means that when running a Python
script, we can already start/stop runs, write/read the ODB, etc. We only have to get the single stepping going, which seems feasible to
me, since there are some libraries like inspect.currentframe() and traceback.extract_stack(). For single-stepping there are debug APIs
like debugpy. With Lua we really would have to start from scratch.
Stefan |
07 Apr 2025, Zaher Salman, Suggestion, Sequencer ODBSET feature requests
|
Lukas Gerritzen wrote: |
I also encountered a small annoyance in the current workflow of editing sequencer files in the browser:
- Load a file
- Double-click it to edit it, acknowledge the "To edit the sequence it must be opened in an editor tab" dialog
- A new tab opens
- Edit something, click "Start", acknowledge the "Save and start?" dialog (which pops up even if no changes are made)
- Run the script
- Double-click to make more changes -> another tab opens
After a while, many tabs with the same file are open. I understand this may be considered "user error", but perhaps the sequencer could avoid opening redundant tabs for the same file, or prompt before doing so?
Thanks for considering these suggestions! |
The original reason for restricting edits in the first tab is that it is used to reflect the state of the sequencer, i.e. the file that is currently loaded in the ODB.
Imagine two users are working in parallel on the same file, each preparing their own sequence. One finishes editing and starts the sequencer. How would the second person know that by now the file was changed and is running?
I am open to suggestions to minimize the number of clicks and/or other options to make the first tab editable while making it safe and visible to all other users. Maybe a lock mechanism in the ODB can help here.
Zaher |
07 Apr 2025, Stefan Ritt, Suggestion, Sequencer ODBSET feature requests
|
If people are simultaneously editing scripts, this is indeed an issue, which can probably never be resolved by technical means. It needs communication between the users.
For the main script, some ODB locking might look like this (see the sketch below):
- The first person clicks on "Edit"; the system checks that the file is not locked and the sequencer is not running, then goes into edit mode.
- When entering edit mode, the editor puts a lock into the ODB, like "Script lock = pc1234".
- When another person clicks on "Edit", the system replies "File currently being edited on pc1234".
- When the first person saves the file or closes the web browser, the lock gets removed.
- Since a browser can crash without removing a lock, we need some automatic lock recovery, like: if the lock is there, the next user gets a message "File currently locked. Click 'override' to steal the lock and edit the file".
All that is not 100% perfect, but it will probably cover 99% of the cases.
There is still the problem of all other scripts. In principle we would need a lock for each file, which is not so simple to implement (it would need arrays of files and host names).
Another issue will arise if a user opens a file twice for editing. The second attempt will fail, but I believe this is what we want.
A hostname for the lock is the easiest we can get. It would be better to also have a user name, but since the midas API does not require a log-in, we won't have a user name handy. And it would be too tedious to ask "if you want to edit this file, enter your username".
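A rough sketch of such a lock, assuming a hypothetical ODB key "/Sequencer/Script lock" holding the hostname of the editing client (note the check-then-write below has a race window; a real implementation would need an atomic test-and-set under an ODB lock):

#include <string>
#include <odbxx.hxx>

bool try_acquire_edit_lock(const std::string& hostname)
{
   midas::odb o("/Sequencer");
   std::string holder = o["Script lock"]; // empty string means "not locked"
   if (!holder.empty() && holder != hostname)
      return false;             // locked by someone else
   o["Script lock"] = hostname; // take (or refresh) the lock
   return true;
}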
Just some thoughts.
Stefan |
30 Jan 2025, Pavel Murat, Forum, converting non-MIDAS slow control data into MIDAS history format ?
|
Dear MIDAS experts,
I have a time series of slow control measurements in an ASCII format -
data records in a format (run_number, time, temperature, voltage1, ..., voltageN),
and, if possible, would like to convert them into a MIDAS history format.
Making MIDAS events out of that data is easy, but is it possible to preserve
the time stamps? Logically, this boils down to whether it is possible to have
the event time set by a user frontend.
-- as always - many thanks, regards, Pasha |
31 Jan 2025, Pavel Murat, Forum, converting non-MIDAS slow control data into MIDAS history format ?
|
I think I found an answer to my question: a user-controlled event header does have a time stamp:
https://daq00.triumf.ca/MidasWiki/index.php/Event_Structure#Event_Header
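For what it's worth, setting that time stamp from a C++ frontend would look roughly like this sketch (the event id and empty payload are illustrative):

#include <ctime>
#include "midas.h"

void compose_old_event(EVENT_HEADER* pevent, time_t measurement_time)
{
   bm_compose_event(pevent, 1, 0, 0, 0);          // fills the header with the current time
   pevent->time_stamp = (DWORD) measurement_time; // overwrite with the "true" data time
}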
-- apologies for the spam, regards, Pasha |
01 Feb 2025, Pavel Murat, Bug Report, MIDAS history system not using the event timestamps ? 
|
> I have a time series of slow control measurements in an ASCII format -
> data records in a format (run_number, time, temperature, voltage1, ..., voltageN),
> and, if possible, would like to convert them into a MIDAS history format.
>
> Making MIDAS events out of that data is easy, but is it possible to preserve
> the time stamps? - Logically, this boils down to whether it is possible to have
> the event time set by a user frontend
It looks like the original question was not as naive as I expected, and it may be pointing to a subtle bug.
I have implemented a python frontend - essentially a clone of
https://bitbucket.org/tmidas/midas/src/develop/python/midas/frontend.py -
reading the old slow control data and setting the event.header.timestamp values to dates from the year 2022.
When I run MIDAS and read the "old slow control events", one event every 10 seconds,
the MIDAS Event Dump utility shows the data with the correct event timestamps, from the year 2022.
However, the history plots show the event parameters with timestamps from Feb 01 2025, with adjacent
data points separated by 10 sec.
Is it possible that the history system uses its own timestamp instead of the timestamps from the event headers?
Under normal circumstances, the two should be very close, and that could've kept the issue hidden...
-- thanks, regards, Pasha
UPDATE: I attached the frontend code and the input data file it is reading. The data file should reside in the local directory.
The frontend code doesn't have everything fully automated for the test:
- an integer field "/Mu2e/Offline/Ops/LastTime" would need to be created manually
- the history plots would need to be declared manually |
01 Apr 2025, Pavel Murat, Bug Report, MIDAS history system not using the event timestamps ?
|
Dear MIDAS experts,
I confirm that when writing out history files corresponding to the slow control event data,
MIDAS history system timestamps the data not with the event time coming from the event data,
but with the current time determined by the program -
https://bitbucket.org/tmidas/midas/src/293d27fad0c87c80c4ed7b94b5c40ba1e150bea4/progs/mlogger.cxx#lines-5321
where 'now' is defined as
time_t now = time(NULL);
I'm looking for a way to timestamp the history data with the event time - that is important
for HEP applications outside the DAQ domain. Yes, the MIDAS infrastructure is very well suited for that;
there could be a number of such applications, and experiments could significantly benefit from them.
So I'm wondering whether the implementation is a deliberate design choice or whether it could be changed.
The change itself, and especially its validation, may require a non-negligible amount of work - I'd be happy to contribute.
Any insight much appreciated.
-- thanks, regards, Pasha |
01 Apr 2025, Konstantin Olchanski, Bug Report, MIDAS history system not using the event timestamps ?
|
> I confirm that when writing out history files corresponding to the slow control event data,
> MIDAS history system timestamps the data not with the event time coming from the event data,
> but with the current time determined by [mlogger].
This is correct. The timestamp in the history file is the mlogger timestamp.
In theory we could use the ODB "last_written" timestamp, but in practice,
timestamps have 1 second granularity, and the difference between the two
timestamps would normally be less than 1 second (the time to react to db_watch()).
But ODB last_written is also not the data timestamp. For remote connected clients
it includes the mserver communication delay.
What the data timestamp is, only the user knows - for some FPGA based equipment,
I can see the data timestamp being read from an FPGA register together with the data.
But back to earth.
For making history plots, 1 second granularity with a small (a few seconds) delay should be okay,
and I think the mlogger timestamp is good enough.
For data analysis, you are reading history data from a history data file and you are
not constrained to using the MIDAS timestamp.
You can always include your "true" data timestamp as the first value in your data (see the sketch below).
We do this in felaview for writing labview data to midas history in the ALPHA antihydrogen experiment at CERN.
This also anticipates your next request, can we have millisecond, microsecond, nanosecond history timestamps:
since you define your "true" data timestamp, you can make it anything you want. (I use "double" time in seconds;
a 64-bit IEEE-754 "double" has enough precision for microsecond granularity. FPGA based devices can have timestamps
with 10 ns or 8 ns granularity, in which case a uint64_t clock counter could be more appropriate.)
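A sketch of the "first value is the true timestamp" convention (read_temperature() is a hypothetical readout function; bank name illustrative):

#include <sys/time.h>
#include "midas.h"

double read_temperature(); // hypothetical readout

void fill_history_bank(char* pevent)
{
   double* pdata;
   bk_create(pevent, "SLOW", TID_DOUBLE, (void**)&pdata);
   struct timeval tv;
   gettimeofday(&tv, nullptr);
   *pdata++ = tv.tv_sec + tv.tv_usec * 1e-6; // true data time, usec precision
   *pdata++ = read_temperature();            // the actual measurement
   bk_close(pevent, pdata);
}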
K.O. |
02 Apr 2025, Pavel Murat, Bug Report, MIDAS history system not using the event timestamps ?
|
> You can always include your "true" data timestamp as the first value in your data.
Are you saying that if the first data word of a history event were a timestamp,
the MIDAS history system, when plotting the time dependencies, would use that timestamp
instead of the mlogger timestamp?
if that is true, what tells MIDAS that the first data word is the timestamp?
I couldn't find a discussion of that on the page describing the history system -
https://daq00.triumf.ca/MidasWiki/index.php/History_System#Frontend_history_event
- perhaps I should be looking at a different page?
-- thanks again, regards, Pasha |
02 Apr 2025, Konstantin Olchanski, Bug Report, MIDAS history system not using the event timestamps ?
|
> > You can always include your "true" data timestamp as the first value in your data.
>
> Are you saying that if the first data word of a history event were a timestamp,
> the MIDAS history system, when plotting the time dependencies, would use that timestamp
> instead of the mlogger timestamp?
>
you are correct, midas knows nothing about what you put in the history data.
what I suggested is: if you want your true data timestamp recorded in the history,
you can put it into the history data yourself, and I suggested using the 1st value,
but you can also make it the last value or the 10th value, it is up to you.
for making history plots, the history timestamp is used, as you wrote and I confirmed,
this timestamp is generated by mlogger.
what is not clear to me is why this is a problem? do you see a big difference between the
true data timestamp and the mlogger data timestamp? bigger than 1 second? (this would change
the shape of "last 10 minutes" plots (600 seconds)). bigger than 1 minute? (this would change
the shape of "last 1 hour" plots (60 minutes, 3600 seconds)).
that said, note that we currently store the timestamp as a DWORD 32-bit UNIX time value
which will overflow in 2038 and which is quickly becoming incompatible with the ongoing
switch to 64-bit time_t. Ubuntu-24 already builds a large number of system libraries with 64-
bit time_t, and building MIDAS with 32-bit time_t may soon become as difficult as building
32-bit MIDAS for 32-bit i686 VME processors. we have to move with the times.
what it means is that the history system data format will have to be updated to 64-bit
time_t and at the same time, we may try to change the timestamp from mlogger-generated to
frontend-generated.
but it is still not clear to me how that helps you, because the frontend-generated timestamp
is not the true data timestamp that you wanted. (and only you know what the true data
timestamp is and where it comes from and how to tell it to MIDAS).
K.O. |
01 Apr 2025, Konstantin Olchanski, Bug Report, ODB corruption
|
We see ODB corruption crashes in the DS20k vertical slice MIDAS instance.
Crash is memset() called by db_delete_key1() called by cm_connect_experiment().
I look at the source code and I see that ODB pkey and hkey validation is absent
from most iterators and it is possible for "bad" pkey to cause corruption. Many
other places in the ODB code use db_get_pkey() and db_validate_hkey() to prevent
invalid data from causing further corruption and breakage.
Also db_delete_key1() needs to be refactored and renamed db_delete_key_wlocked().
I will not do this immediately today, but hopefully next week or so.
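For illustration, the missing check is of this general shape (a hypothetical helper, with field names as I recall them from midas.h; the real checks belong in db_get_pkey()/db_validate_hkey()):

#include "midas.h"

static bool pkey_looks_valid(const DATABASE_HEADER* pheader, const KEY* pkey)
{
   if (pkey->type <= 0 || pkey->type >= TID_LAST)
      return false; // type out of range, as in the crash below
   if (pkey->total_size < 0 || pkey->total_size > pheader->key_size)
      return false; // sizes must stay inside the key area
   return true;
}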
Stack trace is attached, observe how free_data() was called on a completely invalid pkey,
bad pkey->type, bad pkey sizes, etc.
#0 __memset_avx512_unaligned_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:250
250 ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such file or directory.
(gdb) bt
#0 __memset_avx512_unaligned_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:250
#1 0x00005ad4102b4217 in memset (__len=<optimized out>, __ch=0, __dest=<optimized out>) at /usr/include/x86_64-linux-
gnu/bits/string_fortified.h:59
#2 free_data (pheader=pheader@entry=0x75aaea4f4000, address=0x75aaed4cea50, size=<optimized out>, caller=caller@entry=0x5ad4102ffb6c
"db_delete_key1") at /home/dsdaqdev/packages_common/midas/src/odb.cxx:513
#3 0x00005ad4102b6a5b in free_data (caller=0x5ad4102ffb6c "db_delete_key1", size=<optimized out>, address=<optimized out>,
pheader=0x75aaea4f4000) at /home/dsdaqdev/packages_common/midas/src/odb.cxx:453
#4 db_delete_key1 (hDB=1, hKey=<optimized out>, level=<optimized out>, follow_links=0) at
/home/dsdaqdev/packages_common/midas/src/odb.cxx:3789
#5 0x00005ad4102b6979 in db_delete_key1 (hDB=1, hKey=288672, level=0, follow_links=0) at
/home/dsdaqdev/packages_common/midas/src/odb.cxx:3731
#6 0x00005ad4102cc923 in db_create_record (hDB=hDB@entry=1, hKey=hKey@entry=0, orig_key_name=orig_key_name@entry=0x7ffd75987280
"/Programs/ODBEdit", init_str=<optimized out>) at /home/dsdaqdev/packages_common/midas/src/odb.cxx:12916
#7 0x00005ad4102cca73 in db_create_record (hDB=hDB@entry=1, hKey=hKey@entry=0, orig_key_name=orig_key_name@entry=0x7ffd75987280
"/Programs/ODBEdit", init_str=<optimized out>) at /home/dsdaqdev/packages_common/midas/src/odb.cxx:12942
#8 0x00005ad4102a00ba in cm_set_client_info (hDB=1, hKeyClient=0x7ffd75987420, host_name=0x5ad412262ee0 "dsdaqgw.triumf.ca",
client_name=0x7ffd759874c0 "ODBEdit", hw_type=<optimized out>, password=<optimized out>, watchdog_timeout=<optimized out>)
at /usr/include/c++/11/bits/basic_string.h:194
#9 0x00005ad4102a902b in cm_connect_experiment1 (host_name=<optimized out>, host_name@entry=0x7ffd759876c0 "",
default_exp_name=default_exp_name@entry=0x7ffd759876a0 "vslice", client_name=client_name@entry=0x5ad4102f28fa "ODBEdit",
func=func@entry=0x0,
odb_size=odb_size@entry=1048576, watchdog_timeout=<optimized out>, watchdog_timeout@entry=10000) at
/usr/include/c++/11/bits/basic_string.h:194
#10 0x00005ad41027e58d in main (argc=3, argv=0x7ffd759881e8) at /home/dsdaqdev/packages_common/midas/progs/odbedit.cxx:3025
(gdb) up
...
#4 db_delete_key1 (hDB=1, hKey=<optimized out>, level=<optimized out>, follow_links=0) at
/home/dsdaqdev/packages_common/midas/src/odb.cxx:3789
3789 free_data(pheader, (char *) pheader + pkey->data, pkey->total_size, "db_delete_key1");
(gdb) p pkey
$1 = (KEY *) 0x75aaea53b400
(gdb) p *pkey
$2 = {type = 1684370529, num_values = 0, name = '\000' <repeats 16 times>, "xQ\375\002\004\000\000\000\004\000\000\000\a\000\000", data = 0,
total_size = 290944, item_size = 1743544378, access_mode = 0, notify_count = 0, next_key = 15, parent_keylist = 1,
last_written = 1953785965}
(gdb)
K.O. |
01 Apr 2025, Konstantin Olchanski, Bug Fix, ODB and event buffer - release semaphore before abort() and core dump
|
There is a long standing problem with ODB and event buffers. If they detect an
internal data inconsistency and cannot continue running, they call abort() to
dump core and stop.
The problem is that in some code paths they do this while holding the ODB or event
buffer semaphore. (The Linux kernel automatically releases SYSV semaphores after the
core dump is finished and the program holding them is stopped.)
If the core dump takes longer than 10 seconds (for whatever reason, but we see this
often enough), all other programs that wait for ODB or event buffer access will
also time out and crash (with core dumps). The result is a core dump storm; at
the end, all MIDAS programs have crashed. (Luckily recovery is easy: simply
restart everything.)
Now I realize that in many situations we do not need to hold the semaphore while
dumping core - the content of the ODB and event buffer shared memories is not
important for debugging the crash - and it is safe to release the semaphore
before calling abort().
This is now implemented for ODB and event buffers. Hopefully core dump storms
will not happen again.
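The pattern is roughly this (an assumed shape, not the exact MIDAS code):

#include <cstdio>
#include <cstdlib>
#include "midas.h"

void fatal_inconsistency(HNDLE semaphore, const char* msg)
{
   fprintf(stderr, "%s\n", msg);
   ss_semaphore_release(semaphore); // waiting clients can now proceed
   abort();                         // dump core without holding the lock
}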
commit 96369c29deba1752fd3d25bed53e6594773d7e1a
release ODB semaphore before calling abort() to dump core. if core dump takes
longer than 10 sec all other midas programs will timeout and crash.
commit 2506406813f1e7581572f0d5721d3761b7c8e8dd
unlock event buffer before calling abort() in bm_validate_client_index_locked(),
refactor bm_get_my_client_locked()
K.O. |
30 Mar 2025, Konstantin Olchanski, Bug Fix, manalyzer improvements
|
updated manalyzer:
- similar to the --jsroot switch, in online mode the ROOT output file remains open after the run is stopped. Previously, after the run was
stopped, all histograms etc. would disappear from JSROOT, making it hard to look at the full set of collected and analyzed data.
- there was a buglet in the multithreading code: if some module cannot analyze flow events as fast as we can read data from disk,
the flow event queue of the first module thread would grow and grow infinitely, potentially consuming lots of RAM. This is
because the queue size check for the first module thread was disabled to avoid a deadlock. I have now added the queue size check to the
main event loop (both offline mode and online mode) and this problem should now be fixed (see the sketch after this list).
- also adjusted the default queue size from 100 to 1000 and the queue-full wait sleep time from 100 us to 10 us.
- another buglet was in the flow event processing: per the README, module EndRun() should not generate flow events (instead, they
should be generated in PreEndRun()). Previously this was not enforced; now there is an error message about this and the offending
flow events are deleted. (They were not being processed anyway.)
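The queue-full check mentioned above is roughly of this shape (an assumed sketch; the real manalyzer types differ, and the real code must guard the queue with a mutex):

#include <chrono>
#include <queue>
#include <thread>

struct FlowEvent; // stand-in for the manalyzer flow event type

void queue_with_backpressure(std::queue<FlowEvent*>& q, size_t max_size, FlowEvent* e)
{
   while (q.size() >= max_size) // new default max_size: 1000
      std::this_thread::sleep_for(std::chrono::microseconds(10)); // new sleep time
   q.push(e);
}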
K.O. |
28 Mar 2025, Konstantin Olchanski, Bug Fix, manalyzer -R8082 --jsroot
|
When processing MIDAS files offline, JSROOT did not work: -Rxxx worked and the http
connection would open, but it would not serve any histograms. This should now be
fixed.
In addition, normally, after processing all input MIDAS files, manalyzer would
exit, JSROOT would abruptly stop. To look at final results one had to open the
ROOT files using some other method (roody, TBrowser, mjsroot, etc).
I now added a command line switch "--jsroot", if supplied, after processing all
input MIDAS files, manalyzer will keep running in the JSROOT server mode (same as
mjsroot).
"manalyzer -R8082 --jsroot run*.mid.lz4" now does something useful: open
http://localhost:8082 (or ssh tunnel or mhttpd proxy per my mjsroot message) and
watch histograms fill in real time, after analysis finishes, keep looking at the
final results until bored. stop manalyzer using Ctrl-C. (we should add a "Stop
JSROOT" botton to the JSROOT main page).
MIDAS commit 1d0d6448c3ec4ffd225b8d2030fe13e379fcd007
K.O. |
06 Jan 2025, Alexandr Kozlinskiy, Suggestion, improved find_package behaviour for Midas
|
currently, to link Midas to a project, one has to do several steps in the cmake script:
- do `find_package`
- get the Midas location from MIDASSYS, or from MIDAS_LIBRARY_DIRS
- set MIDAS_INCLUDE_DIRS, MIDAS_LIBRARY_DIRS and MIDAS_LIBRARIES on your target
- add sources from Midas for mfe, drivers, etc.
in general, cmake can already do all of this automatically, and the only lines you would need are:
- do `find_package(Midas ... PATHS ~/midas_install_location)`
- and do `target_link_libraries(... midas::mfe)`
(and all include dirs, libs, and deps are propagated automatically)
see PR https://bitbucket.org/tmidas/midas/pull-requests/48
- nothing should break with current setups
- if you want to try new `midas::` targets, try to link e.g. `midas::mfed` to your frontend |
09 Jan 2025, Stefan Ritt, Suggestion, improved find_package behaviour for Midas
|
After some iterations, we merged the branch with the new build scheme. Now you can compile any midas program as described at
https://bitbucket.org/tmidas/midas/pull-requests/48?link_source=email
A default CMakeLists.txt file can look like this:
cmake_minimum_required(VERSION 3.17)
project(example)
find_package(Midas REQUIRED PATHS $ENV{MIDASSYS})
add_executable(example example.cpp)
target_link_libraries(example midas::midas)
Which is much simpler than what we had before. The trick now is that the find_package() retrieves all include and link files automatically.
There are different targets:
midas::midas - normal midas program
midas::midas-shared - normal midas programs using the shared midas library
midas::mfe - old style mfe.cxx frontend
midas::mfed - newer style frontend using mfed.cxx
midas::mscb - programs using MSCB system
midas::drivers - slow control program using any of the standard midas drivers
We are not absolutely sure that all midas installations will work that way; so far we have tested it on RH8 and MacOSX with cmake version
3.29.5.
Comments and bug reports are welcome as usual.
Alex and Stefan |
20 Mar 2025, Konstantin Olchanski, Suggestion, improved find_package behaviour for Midas
|
> After some iterations, we merged the branch with the new build scheme.
the commit to implement this change in the manalyzer was not pushed, for reasons unknown.
fixed, commit f2b4dc87ca4830f6bed8667d6a4ee4afd6d242a1
K.O. |
20 Mar 2025, Konstantin Olchanski, Suggestion, improved find_package behaviour for Midas
|
> currently to link Midas to project one has to do several steps ...
this information is incorrect. please read https://daq00.triumf.ca/elog-midas/Midas/2258
a very simple way to use link MIDAS using midas-targets.cmake has been implemented a long time ago.
before proposing a new way of doing things, it would be nice to hear about shortcomings
of the existing stuff. A simple "Konstantin's way sucks" or "this is not the cmake way!"
would have been sufficient.
K.O. |
21 Mar 2025, Alex Kozlinski, Suggestion, improved find_package behaviour for Midas
|
> > currently to link Midas to project one has to do several steps ...
>
> this information is incorrect. please read https://daq00.triumf.ca/elog-midas/Midas/2258
>
> a very simple way to use link MIDAS using midas-targets.cmake has been implemented a long time ago.
I admit that i did not see your post about targets import
via `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
before implementing changes to cmake scripts.
But in this respect the way you propose to do it via `include` should still work.
Note however that `include(...)` way is very unusual as one have to know exactly
where `...-targets.cmake` is located and standard way in cmake is via `find_package`
(similar to how e.g. ROOT, Geant4, etc. are found and linked).
The things that changed (and are incompatible with what was before)
is the naming of targets (in `midas-targets.cmake` with `midas::` namespace,
which is standard practice in cmake to distinguish cmake targets from bare library names
(e.g. when you do `link_libraries(midas)` it may be interpreted as linking with `-lmidas`
or if target is defined it does machinery to link actual cmake target; the namespace way
makes it unambiguous).
Though i again admit that maybe the namespace change was a bit too much as it may
have broken previous users of `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
>
> before proposing a new way of doing things, it would be nice to hear about shortcomings
> of the existing stuff.
- the shortcoming of what was before is the usage of the non-standard `include(...)`
- one shortcoming i see for the new implementation is the usage of the `midas::` namespace
(mentioned above) that may have broken some setups
> A simple "Konstantin's way sucks" or "this is not the cmake way!"
> would have been sufficient.
- `find_package` is standard and recommended way of finding packages
- note that `include($ENV{MIDASSYS}/lib/midas-targets.cmake)` should still work
(but with usage of `midas::midas` instead of simply `midas`)
- in the end `find_package` works by locating and loading `MidasConfig.cmake`,
and it now actually does `include("${CMAKE_CURRENT_LIST_DIR}/../../midas-targets.cmake")`,
so in this respect `find_package` is the same as `include(...)`,
but it also preserves old behavior of exporting cmake vars for includes/libs
such that prev uses are unaffected,
and does a bit more checking such that it can be used for both in- and out-of-tree builds
- in addition `find_package` allows to handle components,
e.g. now it is possible to do
`find_package(Midas COMPONENTS manalyzer)`
instead of also doing `include($ENV{MIDASSYS}/lib/manalyzer-targets.cmake)`
>
> K.O.
Alex |
21 Mar 2025, Konstantin Olchanski, Suggestion, improved find_package behaviour for Midas
|
> > > currently to link Midas to project one has to do several steps ...
> > this information is incorrect. please read https://daq00.triumf.ca/elog-midas/Midas/2258
>
> I admit that i did not see your post about targets import
> via `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
> before implementing changes to cmake scripts.
>
> But in this respect the way you propose to do it via `include` should still work.
>
I proposed nothing, you did the proposing. I spent many hours trying to understand cmake (mission
impossible!) and many more hours to implement the previously existing package scheme based
on the cmake "EXPORT" function.
> Note however that `include(...)` way is very unusual as one have to know exactly
> where `...-targets.cmake` is located and standard way in cmake is via `find_package`
> (similar to how e.g. ROOT, Geant4, etc. are found and linked).
Very difficult to cut-and-paste "include($ENV{MIDASSYS}/lib/midas-targets.cmake)".
You cannot simplify out $ENV{MIDASSYS} because the computer cannot read your mind as to which of the 10 copies
of midas you want to use, from which user account, on which day.
Argument about "very unusual" I would buy, I am not a cmake expert and I do not know which package
finding method is in favour today.
>
> Though i again admit that maybe the namespace change was a bit too much as it may
> have broken previous users of `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
>
I believe it did break at least one experiment: after updating MIDAS to the latest version,
the analyzer would not build.
Speaking of which, did you implement your new scheme for the manalyzer so that it works
in standalone mode (without MIDAS)?
If you did not, now we have two schemes: your new scheme just for MIDAS and my old scheme
for manalyzer *and* MIDAS. xkcd 927.
>
> - shortcomings of what was before is usage of non-standard `include(...)`
>
You should have started by posting a message spelling it out: Konstantin implemented
a scheme that uses the cmake "export" function to find midas, mfe and manalyzer,
it is very nice and works ok, but it is non-standard/obsoleted/obscure/frowned-upon/
unpopular/I-do-not-like-it/I-did-not-invent-it, and I propose implementing a new scheme
based on find_package().
>
> - one shortcoming i see for new implementation is usage `midas::` namespace
> (mentioned above) that may have broken some setups
>
If you think that your changes will break other people's code, you should explicitly
say this in a message to this forum and hopefully provide instructions on fixing it,
i.e. in your makefile, please replace "midas" with "midas::midas".
>
> - `find_package` is standard and recommended way of finding packages
>
Do you have a reference for this? When I look at the cmake documentation, I do not see
any specific recommendation on creating packages and finding them. I do see
other people's code for finding packages, and I often spend hours fighting
it because said methods are designed to work only on the developer's laptop.
P.S. Did anybody ask Ben to update the MidasWiki documentation with the new find_package() information?
K.O. |
23 Mar 2025, Alexandr Kozlinskiy, Suggestion, improved find_package behaviour for Midas
|
> > > > currently to link Midas to project one has to do several steps ...
> > > this information is incorrect. please read https://daq00.triumf.ca/elog-midas/Midas/2258
> >
> > I admit that i did not see your post about targets import
> > via `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
> > before implementing changes to cmake scripts.
> >
> > But in this respect the way you propose to do it via `include` should still work.
> >
>
> I proposed nothing, you did the proposing. I spent many hours trying to understand cmake (mission
> impossible!) and many more hours to implement the previously existing package scheme based
> on the cmake "EXPORT" function.
I agree that cmake is difficult, especially when it comes to creating cmake scripts for a library
that should work for other people (as opposed to just using other libraries).
But that is why we should try to follow the recommended way of using it.
>
> > Note however that the `include(...)` way is very unusual, as one has to know exactly
> > where `...-targets.cmake` is located, and the standard way in cmake is via `find_package`
> > (similar to how e.g. ROOT, Geant4, etc. are found and linked).
>
> Very difficult to cut-and-paste "include($ENV{MIDASSYS}/lib/midas-targets.cmake)".
>
> You cannot simplify-out $ENV{MIDASSYS} because the computer cannot read your mind as to which of the 10 copies
> of midas you want to use, from which user account, on which day.
One can use `find_package(Midas PATHS $ENV{MIDASSYS})` to set a specific location of Midas
(this is mentioned in https://bitbucket.org/tmidas/midas/pull-requests/48);
without the `PATHS` argument, the default system/user locations are searched.
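For concreteness, a minimal consumer CMakeLists.txt under this scheme could look as follows (a sketch; the project and source file names are made up for illustration):
cmake_minimum_required(VERSION 3.12)
project(myfrontend)

# locate the Midas package config exported by the Midas build
find_package(Midas REQUIRED PATHS $ENV{MIDASSYS})

add_executable(myfe myfe.cxx)
target_link_libraries(myfe midas::midas)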
>
> The argument about "very unusual" I would buy; I am not a cmake expert and I do not know which
> package-finding method is in favour today.
>
> >
> > Though i again admit that maybe the namespace change was a bit too much as it may
> > have broken previous users of `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
> >
>
> I believe it did break at least one experiment: after updating MIDAS to the latest version,
> the analyzer would not build.
This was unfortunately not easy to avoid in this case, as midas and manalyzer depend on each other:
midas should compile when manalyzer is enabled (as a submodule)
and manalyzer should compile with midas as a separate lib.
So in this case I would expect both midas and manalyzer to be updated at the same time to their matching versions.
>
> Speaking of which, did you implement your new scheme for the manalyzer so that it works
> in standalone mode (without MIDAS)?
The only change I made to manalyzer was to use `find_package(Midas ...)` and the `midas::midas` target
(see https://bitbucket.org/tmidas/manalyzer/commits/b219a916).
If there is interest in using the same scheme in manalyzer as in Midas, I can implement it.
>
> If you did not, we now have two schemes: your new scheme just for MIDAS and my old scheme
> for manalyzer *and* MIDAS. xkcd 927.
>
> >
> > - shortcomings of what was before is usage of non-standard `include(...)`
> >
>
> You should have started by posting a message spelling it out: Konstantin implemented
> a scheme that uses the cmake "export" function to find midas, mfe and manalyzer,
> it is very nice and works ok, but it is non-standard/obsoleted/obscure/frowned-upon/
> unpopular/I-do-not-like-it/I-did-not-invent-it, and I propose implementing a new scheme
> based on find_package().
As I mentioned, I did not see your original post about the usage of `include`;
otherwise I would have referenced it and thought more about compatibility issues.
>
> >
> > - one shortcoming i see for new implementation is usage `midas::` namespace
> > (mentioned above) that may have broken some setups
> >
>
> If you think that your changes will break other people's code, you should explicitly
> say this in a message to this forum and hopefully provide instructions on fixing it,
> i.e. in your makefile, please replace "midas" with "midas::midas".
In the original message in this thread I posted a reference to the PR
(https://bitbucket.org/tmidas/midas/pull-requests/48),
which shows how to use `find_package` with this change.
As I did not expect direct use of the `include()` form
and assumed that manual linking was used (via specifying include/lib paths and names),
I missed some scenarios where people's code broke.
>
> >
> > - `find_package` is standard and recommended way of finding packages
> >
>
> Do you have a reference for this? When I look at cmake documentation, I do not see
> any specific recommendation on creating packages and finding them. I do see
> other people's code for finding packages and often spend hours fighting
> them because said methods are designed to work only on the developer's laptop.
see https://cmake.org/cmake/help/v3.27/guide/importing-exporting/index.html:
- about the use of `find_package` see https://cmake.org/cmake/help/latest/guide/using-dependencies/index.html#guide:Using%20Dependencies%20Guide
- about the double-colon namespace for targets see https://cmake.org/cmake/help/v3.27/guide/importing-exporting/index.html,
where it is mentioned that "This convention of double-colons gives CMake a hint that the name is an IMPORTED target when it is used by downstream projects".
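For reference, this is roughly how the namespace is attached on the export side (an illustrative sketch; the actual target, export, and destination names used in the Midas tree may differ):
install(TARGETS midas EXPORT MidasTargets)
install(EXPORT MidasTargets
        NAMESPACE midas::
        DESTINATION lib/cmake/Midas)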
>
> P.S. Did anybody ask Ben to update the MidasWiki documentation with the new find_package() information?
>
> K.O. |
25 Mar 2025, Konstantin Olchanski, Suggestion, improved find_package behaviour for Midas
|
> https://cmake.org/cmake/help/latest/guide/using-dependencies/index.html#guide:Using%20Dependencies%20Guide
thank you for providing a link to latest cmake find_package() guide.
I notice that this documentation was added in cmake 3.24 released circa Nov 2022
and does not exist in older versions. (It is easy to see in git history the last
time I touched any cmake stuff in midas).
I still see no documentation on "this is how you write a package that other people can import
using find_package()", only documentation on how to use find_package() on packages
written by somebody who somehow already knows how to do it.
> In the original message to this thread i posted reference to PR
> (https://bitbucket.org/tmidas/midas/pull-requests/48)
this pull request was rail-roaded through during the holidays without any
discussion on this forum. I was not given an opportunity to comment on it;
it was pushed and merged faster than I could blink.
bottom line. I voted against using cmake and was over-ruled. To me this cmake stuff
is only a source of wasted time and bad feelings.
if midas is required to use cmake, we should have somebody on the team who at least
understands it and, if not loves it, at least does not hate it.
K.O. |
28 Mar 2025, Konstantin Olchanski, Suggestion, improved find_package behaviour for Midas
|
I figured out the breakage, added a git tag to identify (roughly) where the cmake-incompatible change was made,
and posted a note on how to fix it. Please reimburse me for the 2 hours I had to spend on this instead of doing
useful work. K.O. |
11 Jul 2021, Konstantin Olchanski, Info, midas cmake update
|
I reworked the midas cmake files:
- install via CMAKE_INSTALL_PREFIX should work correctly now:
- installed are bin, lib and include - everything needed to build against the midas library
- if built without CMAKE_INSTALL_PREFIX, a special mode "MIDAS_NO_INSTALL_INCLUDE_FILES" is activated, and the include path
contains all the subdirectories needed for compilation
- -I$MIDASSYS/include and -L$MIDASSYS/lib -lmidas work in both cases
- to "use" midas, I recommend: include($ENV{MIDASSYS}/lib/midas-targets.cmake)
- config files generated for find_package(midas) now have correct information (a manually constructed subset of information
automatically exported by cmake's install(export))
- people who want to use "find_package(midas)" will have to contribute documentation on how to use it (explain the magic used to
find the "right midas" in /usr/local/midas or in /midas or in ~/packages/midas or in ~/packages/new-midas) and contribute an
example superproject that shows how to use it and that can be run from the bitbucket automatic build. (features that are not part
of the automatic build cannot be protected against breakage).
On my side, here is an example of using include($ENV{MIDASSYS}/lib/midas-targets.cmake). I posted this before; it is used in
midas/examples/experiment and I will ask Ben to include it in the midas wiki documentation.
Below is the complete cmake file for building the alpha-g event builder and main control frontend. When presented like this, I
have to agree that cmake does provide positive value to the user. (the jury is still out on whether it balances out against the
negative value of the extra work to "just support find_package(midas) already!").
#
# CMakeLists.txt for alpha-g frontends
#
cmake_minimum_required(VERSION 3.12)
project(agdaq_frontends)
include($ENV{MIDASSYS}/lib/midas-targets.cmake)
add_compile_options("-O2")
add_compile_options("-g")
#add_compile_options("-std=c++11")
add_compile_options(-Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function)
add_compile_options("-DTMFE_REV0")
add_compile_options("-DOS_LINUX")
add_executable(feevb feevb.cxx TsSync.cxx)
target_link_libraries(feevb midas)
add_executable(fectrl fectrl.cxx GrifComm.cxx EsperComm.cxx JsonTo.cxx KOtcp.cxx $ENV{MIDASSYS}/src/tmfe_rev0.cxx)
target_link_libraries(fectrl midas)
#end |
28 Mar 2025, Konstantin Olchanski, Bug Fix, midas cmake update
|
MIDAS git tag midas-2025-01-a introduced an incompatible change to "include midas-targets.cmake". Instead of "midas" one now has to
say "midas::midas", as updated below. K.O.
>
> #
> # CMakeLists.txt for alpha-g frontends
> #
>
> cmake_minimum_required(VERSION 3.12)
> project(agdaq_frontends)
>
> include($ENV{MIDASSYS}/lib/midas-targets.cmake)
>
> add_compile_options("-O2")
> add_compile_options("-g")
> #add_compile_options("-std=c++11")
> add_compile_options(-Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function)
> add_compile_options("-DTMFE_REV0")
> add_compile_options("-DOS_LINUX")
>
> add_executable(feevb feevb.cxx TsSync.cxx)
> target_link_libraries(feevb midas::midas)
>
> add_executable(fectrl fectrl.cxx GrifComm.cxx EsperComm.cxx JsonTo.cxx KOtcp.cxx $ENV{MIDASSYS}/src/tmfe_rev0.cxx)
> target_link_libraries(fectrl midas::midas)
>
> #end |
28 Mar 2025, Konstantin Olchanski, Info, mjsroot added
|
I need to look at histograms inside a ROOT file, but all the old ways of doing this no longer work. (In theory I can scp the ROOT file to
the computer I am sitting in front of, but this assumes I have a working ROOT there. Anyhow, it is pointless to fight this; all modern
packages are written to only work on the developer's laptop.)
- root new TBrowser starts a web server, tries to open firefox (and fails)
- root --web=off new TBrowser using ssh X11 tunnel no longer works, ROOT X11 graphics refresh is broken
- macos root binary kit is built without X11 support, root --web=off does not work at all
- root7 recommended "rootssh" prints an error message (and fails)
What does work well is JSROOT, which we use to look at manalyzer live histograms (through apache and mhttpd web proxies).
So I wrote mjsroot.exe. It opens a ROOT file and starts JSROOT to look at it (plus a bit of dancing around to make it actually work):
mjsroot.exe -R8082 root_output_files/output00371.root
To actually see the histograms:
a) if you are sitting in front of the same computer, open http://localhost:8082
b) if you are somewhere else, start an ssh tunnel: ssh daq13 -L8082:localhost:8082, open http://localhost:8082
c) if daq13 is running mhttpd, set up an http proxy:
set ODB /webserver/proxy/mjsroot to http://localhost:8082
open https://daq13.triumf.ca/proxy/mjsroot/
also
set ODB /alias/mjsroot to "/proxy/mjsroot/"
reload the MIDAS status page, observe "mjsroot" listed on the left-hand side, open it.
K.O.
|
12 Feb 2025, Mark Grimes, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
Hi,
I have a manalyzer that uses a derived class of TMFeRpcHandlerInterface to communicate information to
Midas during online running. At the end of each run it saves out custom data in the
TMFeRpcHandlerInterface::HandleEndRun override. This works really well.
However, when I run offline on a Midas output file the HandleEndRun method is never called and my data is
never saved. Is this intentional? I understand that there is no point for the HandleBinaryRpc method offline,
but the other methods (HandleEndRun, HandleBeginRun etc) could serve a purpose. Or is it a conscious
choice to ignore all of TMFeRpcHandlerInterface when offline?
Thanks,
Mark. |
26 Feb 2025, Thomas Lindner, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
Hi,
Sorry, we have been slammed with a couple of projects on the TRIUMF side in the past weeks and haven't found time for a
response. I am hopeful that we will be able to answer this question (and the cache size question) within the next 10
days.
Again apologies,
Thomas
> Hi,
> I have a manalyzer that uses a derived class of TMFeRpcHandlerInterface to communicate information to
> Midas during online running. At the end of each run it saves out custom data in the
> TMFeRpcHandlerInterface::HandleEndRun override. This works really well.
> However, when I run offline on a Midas output file the HandleEndRun method is never called and my data is
> never saved. Is this intentional? I understand that there is no point for the HandleBinaryRpc method offline,
> but the other methods (HandleEndRun, HandleBeginRun etc) could serve a purpose. Or is it a conscious
> choice to ignore all of TMFeRpcHandlerInterface when offline?
>
> Thanks,
>
> Mark. |
20 Mar 2025, Konstantin Olchanski, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
> I have a manalyzer that uses a derived class of TMFeRpcHandlerInterface to communicate information to
> Midas during online running. At the end of each run it saves out custom data in the
> TMFeRpcHandlerInterface::HandleEndRun override. This works really well.
> However, when I run offline on a Midas output file the HandleEndRun method is never called and my data is
> never saved. Is this intentional? I understand that there is no point for the HandleBinaryRpc method offline,
> but the other methods (HandleEndRun, HandleBeginRun etc) could serve a purpose. Or is it a conscious
> choice to ignore all of TMFeRpcHandlerInterface when offline?
apologies for delayed response.
I saw the question, completely did not understand it, only now got around to figure out what is going on.
according to manalyzer/README.md, section "manalyzer module and object life time", BeginRun() and EndRun() are always
called, offline and online. What you see would be a bug that we do not see in our environment. I confirmed this by
running manalyzer in demo mode: ./bin/manalyzer_test.exe --demo -e10 -t
no, wait, you say you use HandleBeginRun() and HandleEndRun(). this is not right: they are not part of the manalyzer
API and are indeed only used when running online.
the correct solution would be to use BeginRun() and EndRun() instead of HandleBeginRun() and HandleEndRun().
you could also save your data in the module destructor (although good programming practice is to use the
destructor only for unavoidable things, like freeing memory, etc).
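For reference, a minimal sketch of such a module, following the TARunObject pattern from the manalyzer examples (class names are made up; check manalyzer.h for exact signatures):
#include "manalyzer.h"

class SaveModule: public TARunObject
{
public:
   SaveModule(TARunInfo* runinfo) : TARunObject(runinfo) { }

   void BeginRun(TARunInfo* runinfo) { /* called online and offline */ }

   void EndRun(TARunInfo* runinfo) { /* save custom end-of-run data here */ }
};

class SaveModuleFactory: public TAFactory
{
public:
   TARunObject* NewRunObject(TARunInfo* runinfo) { return new SaveModule(runinfo); }
};

static TARegister tar(new SaveModuleFactory);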
K.O. |
25 Mar 2025, Mark Grimes, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
Hi,
The question was about the TMFeRpcHandlerInterface, not the TARunObject interface. Derived classes of TARunObject do indeed work as expected in our
environment. We have worked around the issue by using an implementation of TARunObject as well as the (separate) implementation of
TMFeRpcHandlerInterface.
Thanks,
Mark.
> > I have a manalyzer that uses a derived class of TMFeRpcHandlerInterface to communicate information to
> > Midas during online running. At the end of each run it saves out custom data in the
> > TMFeRpcHandlerInterface::HandleEndRun override. This works really well.
> > However, when I run offline on a Midas output file the HandleEndRun method is never called and my data is
> > never saved. Is this intentional? I understand that there is no point for the HandleBinaryRpc method offline,
> > but the other methods (HandleEndRun, HandleBeginRun etc) could serve a purpose. Or is it a conscious
> > choice to ignore all of TMFeRpcHandlerInterface when offline?
>
> apologies for delayed response.
>
> I saw the question, completely did not understand it, only now got around to figure out what is going on.
>
> according to manalyzer/README.md, section "manalyzer module and object life time", BeginRun() and EndRun() are always
> called, offline and online. What you see would be a bug that we do not see in our environment. I confirmed this by
> running manalyzer in demo mode: ./bin/manalyzer_test.exe --demo -e10 -t
>
> no, wait, you say you use HandleBeginRun() and HandleEndRun(). this is not right: they are not part of the manalyzer
> API and are indeed only used when running online.
>
> the correct solution would be to use BeginRun() and EndRun() instead of HandleBeginRun() and HandleEndRun().
>
> you could also save your data in the module destructor (although good programming practice is to use the
> destructor only for unavoidable things, like freeing memory, etc).
>
> K.O. |
25 Mar 2025, Konstantin Olchanski, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
> The question was about the TMFeRpcHandlerInterface, not the TARunObject interface. Derived classes of TARunObject do indeed work as expected in our
> environment. We have worked around the issue by using an implementation of TARunObject as well as the (separate) implementation of
> TMFeRpcHandlerInterface.
then I do not understand the question. TMFeRpcHandlerInterface stuff is only used when running online and connected to MIDAS. How does it come into the
picture when you analyze a data file offline? ProcessMidasOnlineTmfe() does not run, the RpcHandler object is not constructed.
maybe if you point me to your source code, I can see what you are doing?
K.O. |
26 Mar 2025, Mark Grimes, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
This was exactly the question: should I expect it to run? There's no point in the HandleBinaryRpc method offline, but there's an argument that the HandleBeginRun/HandleEndRun methods have a use.
I have the answer and we have a workaround, thanks.
> then I do not understand the question. TMFeRpcHandlerInterface stuff is only used when running online and connected to MIDAS. How does it come into the
> picture when you analyze a data file offline? ProcessMidasOnlineTmfe() does not run, the RpcHandler object is not constructed.
>
> maybe if you point me to your source code, I can see what you are doing?
>
> K.O. |
28 Mar 2025, Konstantin Olchanski, Forum, TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
|
I do not understand what you are doing. If you are offline, there is no TMFE singleton instance,
there is nothing for the TMFeRpcHandlerInterface to attach to, and there is nobody to call the TMFeRpcHandlerInterface methods.
Maybe what you are asking for is a mode where you analyze data from a file, but you want your analysis code
to think that it is online and that it is analyzing live data. This requires creating a fake TMFE singleton, attaching
TMFeRpcHandlerInterface to this fake TMFE singleton and using ProcessMidasOnlineTmfe(), driven by all this fake stuff:
a fake OpenBuffer() that actually opens a file, a fake ReceiveEvent() that actually reads from a file, fake callbacks
for begin and end run, etc.
That's a lot of work, but for what purpose? What is it about the existing offline and online modes that you do not like
and how all this fake stuff will make it better for you?
P.S. This is a 3rd version of my reply. Wrote and deleted 2 versions. I think I completely do not understand
what you are doing and you completely do not understand what I am saying. Communication is not happening.
P.P.S. Simplest if you show me your code (email, elog); I am quite good at reading code and divining what
people are trying to do. You do not have to show me any of your secret stuff.
K.O.
> This was exactly the question: should I expect it to run? There's no point in the HandleBinaryRpc method offline, but there's an argument that the HandleBeginRun/HandleEndRun methods have a use.
> I have the answer and we have a workaround, thanks.
>
> > then I do not understand the question. TMFeRpcHandlerInterface stuff is only used when running online and connected to MIDAS. How does it come into the
> > picture when you analyze a data file offline? ProcessMidasOnlineTmfe() does not run, the RpcHandler object is not constructed.
> >
> > maybe if you point me to your source code, I can see what you are doing?
> >
> > K.O. |
19 Feb 2025, Lukas Gerritzen, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
We have a frontend for slow control with a lot of legacy code. I wanted to add a new equipment using the
mdev_mscb class. It seems like the default write cache size is now 10000000 B (10 Mbytes), which produces error
messages like this:
12:51:20.154 2025/02/19 [SC Frontend,ERROR] [mfe.cxx:620:register_equipment,ERROR] Write cache size mismatch for buffer "SYSTEM": equipment "Environment" asked for 0, while eqiupment "LED" asked for 10000000
12:51:20.154 2025/02/19 [SC Frontend,ERROR] [mfe.cxx:620:register_equipment,ERROR] Write cache size mismatch for buffer "SYSTEM": equipment "LED" asked for 10000000, while eqiupment "Xenon" asked for 0
I can manually change the write cache size in /Equipment/LED/Common/Write cache size to 0. However, if I delete the LED tree in the ODB, then I get the same problems again. It would be nice if I could either choose the size as 0 in the frontend code, or if the defaults were compatible with our legacy code.
The commit that made the write cache size configurable seems to be from 2019: https://bitbucket.org/tmidas/midas/commits/3619ecc6ba1d29d74c16aa6571e40920018184c0 |
24 Feb 2025, Stefan Ritt, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
The commit that introduced the write cache size check is https://bitbucket.org/tmidas/midas/commits/3619ecc6ba1d29d74c16aa6571e40920018184c0
Unfortunately K.O. added the write cache size to the equipment list, but there is currently no way to change this programmatically from the user frontend code. The options I see are
1) Re-arrange the equipment settings so that the write cache size comes at the end of the list which the user initializes, like
{"Trigger", /* equipment name */
{1, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_POLLED, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING | /* read only when running */
RO_ODB, /* and update ODB */
100, /* poll for 100ms */
0, /* stop run after this event limit */
0, /* number of sub events */
0, /* don't log history */
"", "", "", "", "", 0, 0},
read_trigger_event, /* readout routine */
10000000, /* write cache size */
},
2) Add a function fe_set_write_cache(int size); which goes through the local equipment list and sets the cache size to the same value for all equipments.
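For example, a sketch of option 2 (illustrative only, this function does not exist in MIDAS today; it assumes the equipment[] list is terminated by an entry with an empty name, as in mfe.cxx frontends):
// proposed helper, not existing MIDAS API: give all equipments
// of this frontend the same write cache size
void fe_set_write_cache(int size)
{
   for (int i = 0; equipment[i].name[0]; i++)
      equipment[i].write_cache_size = size;
}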
I would appreciate some guidance from K.O. who introduced that code above.
/Stefan |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
I think I added the cache size correctly:
{"Trigger", /* equipment name */
{1, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_POLLED, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING | /* read only when running */
RO_ODB, /* and update ODB */
100, /* poll for 100ms */
0, /* stop run after this event limit */
0, /* number of sub events */
0, /* don't log history */
"", "", "", "", "", // frontend_host, name, file_name, status, status_color
0, // hidden
0 // write_cache_size <<--------------------- set this to zero -----------
},
}
K.O. |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
the main purpose of the event buffer write cache is to prevent high contention for the
event buffer shared memory semaphore in the pathological case of a very high rate of very
small events.
there is a computation for this, I have posted it here several times, please search for
it.
in a nutshell, you want the semaphore locking rate to be around 10/sec, 100/sec
maximum. coupled with the smallest event size and maximum practical rate (1 MHz), this
yields the cache size.
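For illustration (my numbers, consistent with the above): the locking rate is roughly (event rate x event size) / (cache size), so 1 MHz of 100-byte events is 100 Mbytes/sec, and a 10 Mbyte write cache then flushes (locks the semaphore) about 10 times per second.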
for slow control events generated at 1 Hz, the write cache is not needed,
write_cache_size value 0 is the correct setting.
for "typical" physics events generated at 1 kHz, write cache size should be set to fit
10 events (100 Hz semaphore locking rate) to 100 events (10 Hz semaphore locking rate).
unfortunately, one cannot have two cache sizes for an event buffer, so typical frontends
that generate physics data at 1 kHz and scalers and counters at 1 Hz must have a non-
zero write cache size (or semaphore locking rate will be too high).
the other consideration, we do not want data to sit in the cache "too long", so the
cache is flushed every 1 second or so.
all this cache stuff could be completely removed, deleted. the result would be a MIDAS that
works ok for small data sizes and rates, but completely falls down at 10 GigE speeds and
rates.
P.S. why is a high semaphore locking rate bad? it turns out that UNIX and Linux semaphores
are not "fair": they do not give an equal share to all users, and (for example) an event
buffer writer can "capture" the semaphore so that the buffer reader (mlogger) never gets
it, a pathological situation (to help with this, there is also a "read cache"). Read this
discussion: https://stackoverflow.com/questions/17825508/fairness-setting-in-semaphore-class
K.O. |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> the main purpose of the event buffer write cache
how to control the write cache size:
1) in a frontend, all equipments should ask for the same write cache size; both mfe.cxx and
tmfe frontends will complain about a mismatch
2) tmfe c++ frontend: per tmfe.md, set fEqConfWriteCacheSize in the equipment constructor, in
EqPreInitHandler() or EqInitHandler(), or set it in ODB (see the sketch after this list). The default
value is 10 Mbytes or the value of the MIN_WRITE_CACHE_SIZE define. The periodic cache flush period
is 0.5 sec in fFeFlushWriteCachePeriodSec.
3) mfe.cxx frontend, set it in the equipment definition (number after "hidden"), set it in
ODB, or change equipment[i].write_cache_size. Value 0 sets the cache size to
MIN_WRITE_CACHE_SIZE, 10 Mbytes.
4) in bm_set_cache_size(), acceptable values are 0 (disable the cache), MIN_WRITE_CACHE_SIZE
(10 Mbytes) or anything bigger. Attempt to set the cache smaller than 10 Mbytes will set it
to 10 Mbytes and print an error message.
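As promised above, a sketch for item 2 (the TMFeEquipment constructor signature is assumed from the tmfe examples and may differ in your tmfe version):
class EqSlow: public TMFeEquipment
{
public:
   EqSlow(const char* eqname, const char* eqfilename)
      : TMFeEquipment(eqname, eqfilename)
   {
      fEqConfWriteCacheSize = 0; // slow control data, no write cache needed
   }
};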
All this is kind of reasonable, as only two settings of write cache size are useful: 0 to
disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event
rate and size values practical on current computers.
In mfe.cxx it looks to be impossible to set the write cache size to 0 (disable it), but
actually all you need is to call "bm_set_cache_size(equipment[0].buffer_handle, 0, 0);" in
frontend_init() (or is it in begin_of_run()?).
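Spelled out, that workaround would look roughly like this (a sketch; as noted, whether frontend_init() or begin_of_run() is the right place should be verified):
INT frontend_init()
{
   // disable the event buffer write cache for this frontend's buffer
   bm_set_cache_size(equipment[0].buffer_handle, 0, 0);
   return SUCCESS;
}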
K.O. |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> > the main purpose of the event buffer write cache
> how to control the write cache size:
OP provided insufficient information to say what went wrong for them, but do try this:
1) in ODB, for all equipments, set write_cache_size to 0
2) in the frontend equipment table, set write_cache_size to 0
That is how it is done in the example frontend: examples/experiment/frontend.cxx
If this configuration still produces an error, we may have a bug somewhere, so please let us know how it shakes out.
K.O. |
21 Mar 2025, Stefan Ritt, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> All this is kind of reasonable, as only two settings of write cache size are useful: 0 to
> disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event
> rate and size values practical on current computers.
Indeed KO is correct that only 0 and 10MB make sense, and we cannot mix them. Having the cache setting in the equipment table is
cumbersome. If you have 10 slow control equipments (cache size zero), you need to add many zeros at the end of 10 equipment
definitions in the frontend.
I would rather implement a function or variable similar to fEqConfWriteCacheSize from the tmfe framework also in the mfe.cxx
framework; then we only need to add one line like
gEqConfWriteCacheSize = 0;
in the frontend.cxx file and it will be used for all equipments of that frontend. If nobody complains, I will do that in April when I'm
back from Japan.
Stefan |
25 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> > All this is kind of reasonable, as only two settings of write cache size are useful: 0 to
> > disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event
> > rate and size values practical on current computers.
>
> Indeed KO is correct that only 0 and 10MB make sense, and we cannot mix it. Having the cache setting in the equipment table is
> cumbersome. If you have 10 slow control equipment (cache size zero), you need to add many zeros at the end of 10 equipment
> definitions in the frontend.
>
> I would rather implement a function or variable similar to fEqConfWriteCacheSize in the tmfe framework also in the mfe.cxx
> framework, then we need only to add one line llike
>
> gEqConfWriteCacheSize = 0;
>
> in the frontend.cxx file and this will be used for all equipments of that frontend. If nobody complains, I will do that in April when I'm
> back from Japan.
Cache size is per-buffer. If different equipments write into different event buffers, it should be possible to set different cache sizes.
Perhaps have:
set_write_cache_size("SYSTEM", 0);
set_write_cache_size("BUF1", bigsize);
with an internal std::map<std::string,size_t> holding the write cache size for each named buffer.
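An illustrative sketch of this proposal (not existing MIDAS API; names are made up):
#include <map>
#include <string>

// proposed per-buffer write cache size table, to be consulted when a buffer is opened
static std::map<std::string, size_t> gWriteCacheSize;

void set_write_cache_size(const std::string& buffer_name, size_t size)
{
   gWriteCacheSize[buffer_name] = size;
}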
K.O. |