Midas DAQ System ELOG

ID   Date   Author   Topic   Subject
  2948   11 Mar 2025   Ben Smith   Bug Report   python hist_get_events not returning events, but javascript does
> Valid events are:
> Enter event name:
> 
> was printed out, which signified that no events were found. No errors were displayed.

I can't reproduce this. I made a brand new experiment, started mlogger/mhttpd/fetest, then ran the same program. I get:

```
$ python basic_hist_script.py
Valid events are:
* Run transitions
* test_slow/data
Enter event name: 
```

Are you sure you ran the python program after running mlogger and not before? Can you try again after restarting mlogger? And can you verify that your python is connecting to the correct experiment if you have multiple experiments defined?

I tested with python 3.12.8 and 3.13.1, and am on macOS 14.5, but I can't imagine those differences matter.

The python interface is a trivial wrapper around the C++ function, so the only python-specific thing that would result in an empty list is extracting an integer from a ctypes reference. If that's broken in your version then I don't think any of the midas python code would be working.
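
For reference, here is a minimal sketch of that ctypes pattern. The function and names below are hypothetical stand-ins, not the actual midas wrapper code: a C routine is handed a pointer to an int, fills it in, and python reads the result via `.value`.

```python
import ctypes

# Hypothetical stand-in for the C++ call: it receives a pointer to a
# C int and writes the number of history events it "found" into it.
def fake_hs_get_events(num_events_ptr):
    num_events_ptr.contents.value = 2

num_events = ctypes.c_int(0)
fake_hs_get_events(ctypes.pointer(num_events))
print(num_events.value)  # prints 2
```

If this extraction step were broken in a given python build, every such call would report zero events, not just the history ones.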
  2949   14 Mar 2025   Konstantin Olchanski   Bug Report   python hist_get_events not returning events, but javascript does
> After starting midas (mhttpd &, and mlogger -D) and running the `fetest` frontend I went into the midas/python/examples directory and ran basic_hist_script.py, and, even though I could see the 'pytest' program in the Programs page,
> 
> Valid events are:
> Enter event name:
> 
> was printed out, which signified that no events were found. No errors were displayed.

To check that MIDAS itself is built correctly, you can try "make test". This will create a sample experiment,
run fetest, start and stop a run, and check that the data file and history file are created with the correct history events.

If "make test" fails, I can help debug it.

In your experiment, you can check that history files are created correctly:

1) "mhist -l" should show all available events
2) "mhdump -L *.hst" should show all events in the .hst history files
3) if you have the newer mhf*.dat files, you can "more mhf_1449770978_20151210_hv.dat" to see what data is inside

If all of that works as expected, there must be a problem with the python side and we will have to figure
out how to reproduce it.

This reminds me, "make test" does not test any of the python code, it should be added (and python should be added
to the bitbucket builds).

K.O.
  2951   16 Mar 2025   Federico Rezzonico   Bug Report   python hist_get_events not returning events, but javascript does
> After starting midas (mhttpd &, and mlogger -D) and running the `fetest` frontend I went into the midas/python/examples directory and ran basic_hist_script.py, and, even though I could see the 'pytest' program in the Programs page,
> 
> Valid events are:
> Enter event name:
> 
> was printed out, which signified that no events were found. No errors were displayed.
> 
> Instead, when trying to do the same in javascript (using mjsonrpc_send_request( mjsonrpc_make_request("hs_get_events")).then(console.log)), I was able to get the expected events.
> 
> The History page also displayed the expected data and the plots worked correctly.
> 
> Device info: Chip: Apple M1 Pro, OS: Sequoia (15.3)
> 
> MIDAS version: bitbucket commit 84c7ef7
> 
> Python version: 3.13.2


I tested the command this morning and it worked. The most likely cause of the errors was that I was on a beta version of macOS: Switching beta updates off fixed the issue. Thanks for the help!
  2952   17 Mar 2025   Federico Rezzonico   Bug Report   python hist_get_recent_data returns no historical data
Setup:
setting up midas, starting mhttpd and mlogger and running fetest.

The History page and the javascript mjsonrpc client are both able to fetch historical data for test_slow/data. Javascript code used is included here:

```
mjsonrpc_call(
  "hs_read_arraybuffer",
  {
    start_time: Math.floor((new Date()).getTime() / 1000) - 1000,
    end_time: Math.floor((new Date()).getTime() / 1000),
    events: ["test_slow/data"],
    tags: ["data"],
    index: [0],
  },
  "arraybuffer"
).then(console.log)
```
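
For comparison, the same 1000-second query window can be computed in Python before calling the history API (the variable names here are illustrative):

```python
import time

# Same 1000-second window as in the JavaScript call above:
# [now - 1000 s, now], as UNIX timestamps in whole seconds.
end_time = int(time.time())
start_time = end_time - 1000
```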

However, the python client does not find any valid events:

Setup:
An exptab is created and the environment variables MIDAS_EXPTAB, MIDAS_EXPT_NAME and MIDASSYS are set (together with the correct PATH)

Running /midas/python/examples/basic_hist_script.py and typing in data:

```
Valid events are:
* Run transitions
* rrandom/SLOW
* test_slow/data
Enter event name: test_slow/data
Valid tags for test_slow/data are:
* data
Enter tag name: data
Event/tag test_slow/data/data has 1 elements
How many hours: 1
Interval in seconds: 1   # other values were also tested, without success
0 entries found
```

We expect entries to be found, but none are.

Tested setups:
Macbook Pro Sequoia 15.3 with Python 3.13.2, ROOT latest, midas bitbucket commit 84c7ef7
Windows 11 with Python 3.11, ROOT latest, midas latest commit (development branch)
  2953   17 Mar 2025   Ben Smith   Bug Report   python hist_get_recent_data returns no historical data
Unfortunately I again cannot reproduce this:

```
$ python ~/DAQ/midas_latest/python/examples/basic_hist_script.py
Valid events are:
* Run transitions
* test_slow/data
Enter event name: test_slow/data
Valid tags for test_slow/data are:
* data
Enter tag name: data
Event/tag test_slow/data/data has 1 elements
How many hours: 1
Interval in seconds: 1
78 entries found
2025/03/17 17:00:56 => 98.097391
2025/03/17 17:00:57 => 98.982151
2025/03/17 17:00:58 => 99.589187
2025/03/17 17:00:59 => 99.926821
2025/03/17 17:01:00 => 99.989878
2025/03/17 17:01:01 => 99.778216
2025/03/17 17:01:02 => 99.292485
.......
```


I want to narrow down whether the issue is in the basic_hist_script.py or the lower-level code. So there are a few steps of debugging to do.



1) Run code directly in the python interpreter: 

Can you run the following and send the output please?

```
import midas.client
c = midas.client.MidasClient("history_test")
data = c.hist_get_recent_data(1,1,"test_slow/data","data")
print(f"event_name='{data[0]['event_name']}', tag_name='{data[0]['tag_name']}', num_entries={data[0]['num_entries']}, status={data[0]['status']}, arrlen={len(data[0]['values'])}")
```

For me, I get:
event_name='test_slow/data', tag_name='data', num_entries=441, status=1, arrlen=441



2) If things look sensible for you (status=1, non-zero num_entries), then the problem is in the basic_hist_script.py. Can you add the same print() statement in basic_hist_script.py immediately after the call to hist_get_recent_data(), then run that script again and send the output of that?



3) Debug the python/C conversions.

In midas/client.py add the following line to hist_get_data() immediately before the call to self.lib.c_hs_read():

```
        print(f"c_start_time={c_start_time.value}, c_end_time={c_end_time.value}, c_interval={c_interval.value}, c_event_name={c_event_name.value}, c_tag_name={c_tag_name.value}")
```

Then run the following and send the output:

```
import midas.client
c = midas.client.MidasClient("history_test")
data = c.hist_get_recent_data(1,1,"test_slow/data","data")
```

For me, I get:
c_start_time=1742254428, c_end_time=1742258028, c_interval=1, c_event_name=b'test_slow/data', c_tag_name=b'data'

I want to check that the UNIX timestamps match what you expect for your server, and that nothing weird is going on with the python/C string conversions.
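
As an illustrative sanity check (using the c_start_time value from my output above), a UNIX timestamp can be converted to a human-readable UTC time like this:

```python
from datetime import datetime, timezone

# Convert the c_start_time printed above into a readable UTC time.
ts = datetime.fromtimestamp(1742254428, tz=timezone.utc)
print(ts.isoformat())  # 2025-03-17T23:33:48+00:00
```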


Thanks,
Ben
  2955   18 Mar 2025   Federico Rezzonico   Bug Report   python hist_get_recent_data returns no historical data
> Unfortunately I again cannot reproduce this:
> 
> $ python ~/DAQ/midas_latest/python/examples/basic_hist_script.py
> Valid events are:
> * Run transitions
> * test_slow/data
> Enter event name: test_slow/data
> Valid tags for test_slow/data are:
> * data
> Enter tag name: data
> Event/tag test_slow/data/data has 1 elements
> How many hours: 1
> Interval in seconds: 1
> 78 entries found
> 2025/03/17 17:00:56 => 98.097391
> 2025/03/17 17:00:57 => 98.982151
> 2025/03/17 17:00:58 => 99.589187
> 2025/03/17 17:00:59 => 99.926821
> 2025/03/17 17:01:00 => 99.989878
> 2025/03/17 17:01:01 => 99.778216
> 2025/03/17 17:01:02 => 99.292485
> .......
> 
> 
> I want to narrow down whether the issue is in the basic_hist_script.py or the lower-level code. So there are a few steps of debugging to do.
> 
> 
> 
> 1) Run code directly in the python interpreter: 
> 
> Can you run the following and send the output please?
> 
> ```
> import midas.client
> c = midas.client.MidasClient("history_test")
> data = c.hist_get_recent_data(1,1,"test_slow/data","data")
> print(f"event_name='{data[0]['event_name']}', tag_name='{data[0]['tag_name']}', num_entries={data[0]['num_entries']}, status={data[0]['status']}, arrlen={len(data[0]['values'])}")
> ```
> 
> For me, I get:
> event_name='test_slow/data', tag_name='data', num_entries=441, status=1, arrlen=441
> 
> 
> 
> 2) If things look sensible for you (status=1, non-zero num_entries), then the problem is in the basic_hist_script.py. Can you add the same print() statement in basic_hist_script.py immediately after the call to hist_get_recent_data(), then run that script again and send the output of that?
> 
> 
> 
> 3) Debug the python/C conversions.
> 
> In midas/client.py add the following line to hist_get_data() immediately before the call to self.lib.c_hs_read():
> 
> ```
>         print(f"c_start_time={c_start_time.value}, c_end_time={c_end_time.value}, c_interval={c_interval.value}, c_event_name={c_event_name.value}, c_tag_name={c_tag_name.value}")
> ```
> 
> Then run the following and send the output:
> 
> ```
> import midas.client
> c = midas.client.MidasClient("history_test")
> data = c.hist_get_recent_data(1,1,"test_slow/data","data")
> ```
> 
> For me, I get:
> c_start_time=1742254428, c_end_time=1742258028, c_interval=1, c_event_name=b'test_slow/data', c_tag_name=b'data'
> 
> I want to check that the UNIX timestamps match what you expect for your server, and that nothing weird is going on with the python/C string conversions.
> 
> 
> Thanks,
> Ben

Hi, thank you for the support!

Running the commands on the Macbook pro leads to
1)
event_name='test_slow/data', tag_name='data', num_entries=0, status=1, arrlen=0
2)
The number of entries is zero
3)
I get
c_start_time=1742275653, c_end_time=1742279253, c_interval=1, c_event_name=b'test_slow/data', c_tag_name=b'data'

However, right after running these commands I removed a .SHM_HOST.TXT file
(since I work both at home and at PSI, my computer's hostname changes when I switch networks, so I remove .SHM_HOST.TXT to be able to run experiments)

and reran the code, and it suddenly worked! This is good, but I do not know what fixed it... I had done more extensive tests yesterday and also had to delete .SHM_HOST.TXT multiple times, to no avail.
Do you have any ideas as to what could be happening? Similarly to my previous bug report, which was fixed by updating macOS to a more stable version, could this have been due to an automatic update?

If the problem still persists on the Windows machine I will post an update.
  2956   18 Mar 2025   Konstantin Olchanski   Bug Report   python hist_get_recent_data returns no historical data
>
> However right after running these commands I removed a .SHM_HOST.TXT file
>

Instead of deleting .SHM_HOST.TXT, please create it as an empty file. I thought the documentation was clear about this?

Also we recommend installing MIDAS to $HOME/packages/midas. There are a number of problems if it is installed at top level.

If you want to be compliant with the Linux FHS (Filesystem Hierarchy Standard), /opt/midas is also a good place.

> 
> ... and it suddenly worked!
>

We still have not established whether mhist, mhdump and the other commands I sent you work correctly,
which would confirm that MIDAS is creating correct history files (before you try to read them with python).

We also have not established that you have the correct paths set up in ODB /Logger/History.

Many things can go wrong.

Next time python history malfunctions, please do all those other things and report to us. Thanks!

K.O.
  2960   19 Mar 2025   Konstantin Olchanski   Bug Report   python hist_get_events not returning events, but javascript does
> beta version of macOS: Switching beta updates off fixed the issue.

I would be very surprised if that was the problem.

Bigger concern is that it fails without producing any useful error message.

The latest macOS makes it extremely difficult to debug this kind of stuff; there
are several hoops to jump through to enable core dumps and to allow lldb
to attach to and debug running programs.

K.O.
  2963   20 Mar 2025   Konstantin Olchanski   Bug Report   midas equipment "format"
we are migrating the dragon experiment from an old mac to a new mac studio and we ran into a problem 
where one equipment format was set to "fixed" instead of "midas". lots of confusion, mdump crash, 
analyzer crash, etc. (mdump fixes for this are already committed).

it made us think whether equipment format is still needed. in the old days we had choice of MIDAS and 
YBOS formats, but YBOS was removed years ago, and I was surprised that format FIXED was permitted at 
all.

I did a midas source code review, this is what I found:

- remnants of YBOS support in a few places, commit to remove them pending.
- FORMAT_ROOT is used in mlogger for automatic conversion of MIDAS banks to ROOT trees
- FORMAT_FIXED is used in a few slow control drivers in drivers/class: instead of creating MIDAS 
banks, they copy raw data directly into an event (there is no bank header and no way to identify such 
events automatically)
- lots of code to support different formats in mdump (mostly dead code)
- the rest of the code does not care or use this format stuff

Current proposal is to remove support for all formats except FORMAT_MIDAS (and FORMAT_ROOT in 
mlogger).

- defines of FORMAT_XXX will be removed from midas.h
- "Format" will be removed from ODB Equipment/Common
- "Format" will be removed from ODB Logger/Channel
- to maintain binary compatibility, we can keep the "Format" ODB entries, but they will be ignored.

List of slow control drivers that support FORMAT_FIXED:

daq00:midas$ grep FORMAT_FIXED drivers/class/*
drivers/class/cd_fdg.cxx:   if (fgd_info->format == FORMAT_FIXED) {
drivers/class/cd_ivc32.cxx:   if (hv_info->format == XFORMAT_FIXED) {
drivers/class/cd_rf.cxx:	if (rf_info->format == XFORMAT_FIXED) 
drivers/class/generic.cxx:   if (gen_info->format == XFORMAT_FIXED) {
drivers/class/hv.cxx:   if (hv_info->format == XFORMAT_FIXED) {
drivers/class/multi.cxx:   if (m_info->format == XFORMAT_FIXED) {
drivers/class/slowdev.cxx:   if (gen_info->format == XFORMAT_FIXED) {
daq00:midas$ 

K.O.
  2965   20 Mar 2025   Konstantin Olchanski   Bug Report   please fix compiler warning
Unnamed person who added this clever bit of c++ coding, please fix this compiler warning. Stock g++ on Ubuntu LTS 24.04. Thanks in advance!

/home/olchansk/git/midas/src/system.cxx: In function ‘std::string ss_execs(const char*)’:
/home/olchansk/git/midas/src/system.cxx:2256:43: warning: ignoring attributes on template argument ‘int (*)(FILE*)’ [-Wignored-attributes]
 2256 |    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(cmd, "r"), pclose);

K.O.
  2966   20 Mar 2025   Konstantin Olchanski   Bug Report   please fix mscb compiler warning
I am getting a scary compiler warning from MSCB code. I generally avoid touching MSCB code because I have no MSCB hardware to test it, so I am 
asking the persons unnamed who wrote this code to fix it. Stock g++ on Ubuntu LTS 24.04. Thanks in advance!

In function ‘ssize_t read(int, void*, size_t)’,
    inlined from ‘int mscb_interprete_file(const char*, unsigned char**, unsigned int*, unsigned char**, unsigned int*, unsigned char*)’ at 
/home/olchansk/git/midas/mscb/src/mscb.cxx:2666:13:
/usr/include/x86_64-linux-gnu/bits/unistd.h:28:10: warning: ‘ssize_t __read_alias(int, void*, size_t)’ specified size 18446744073709551614 
exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
   28 |   return __glibc_fortify (read, __nbytes, sizeof (char),
      |          ^~~~~~~~~~~~~~~
/usr/include/x86_64-linux-gnu/bits/unistd-decl.h: In function ‘int mscb_interprete_file(const char*, unsigned char**, unsigned int*, unsigned 
char**, unsigned int*, unsigned char*)’:
/usr/include/x86_64-linux-gnu/bits/unistd-decl.h:29:16: note: in a call to function ‘ssize_t __read_alias(int, void*, size_t)’ declared with 
attribute ‘access (write_only, 2, 3)’
   29 | extern ssize_t __REDIRECT_FORTIFY (__read_alias, (int __fd, void *__buf,
      |                ^~~~~~~~~~~~~~~~~~

K.O.
  2972   20 Mar 2025   Konstantin Olchanski   Bug Report   Default write cache size for new equipments breaks compatibility with older equipments
I think I added the cache size correctly:

  {"Trigger",               /* equipment name */
      {1, 0,                 /* event ID, trigger mask */
         "SYSTEM",           /* event buffer */
         EQ_POLLED,          /* equipment type */
         0,                  /* event source */
         "MIDAS",            /* format */
         TRUE,               /* enabled */
         RO_RUNNING |        /* read only when running */
         RO_ODB,             /* and update ODB */
         100,                /* poll for 100ms */
         0,                  /* stop run after this event limit */
         0,                  /* number of sub events */
         0,                  /* don't log history */
         "", "", "", "", "", // frontend_host, name, file_name, status, status_color
         0, // hidden
         0  // write_cache_size <<--------------------- set this to zero -----------
      },
   }

K.O.
  2973   20 Mar 2025   Konstantin Olchanski   Bug Report   Default write cache size for new equipments breaks compatibility with older equipments
the main purpose of the event buffer write cache is to prevent high contention for the 
event buffer shared memory semaphore in the pathological case of very high rate of very 
small events.

there is a computation for this, I have posted it here several times, please search for 
it.

in a nutshell, you want the semaphore locking rate to be around 10/sec, 100/sec 
maximum. coupled with the smallest event size and maximum practical rate (1 MHz), this 
yields the cache size.
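
as a rough sketch of that estimate (all numbers below are illustrative assumptions, not MIDAS defaults):

```python
# Back-of-the-envelope cache size estimate as described above: choose a
# target semaphore locking rate, then size the cache so that one flush
# (one lock) covers many small events.
event_size_bytes = 100       # assumed smallest practical event size
event_rate_hz = 1_000_000    # maximum practical event rate (1 MHz)
target_lock_rate_hz = 100    # desired upper bound on locks per second

events_per_flush = event_rate_hz // target_lock_rate_hz  # 10000 events
cache_size_bytes = events_per_flush * event_size_bytes   # 1000000 bytes
print(cache_size_bytes)  # prints 1000000
```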

for slow control events generated at 1 Hz, the write cache is not needed, 
write_cache_size value 0 is the correct setting.

for "typical" physics events generated at 1 kHz, write cache size should be set to fit 
10 events (100 Hz semaphore locking rate) to 100 events (10 Hz semaphore locking rate).

unfortunately, one cannot have two cache sizes for an event buffer, so typical frontends 
that generate physics data at 1 kHz and scalers and counters at 1 Hz must have a 
non-zero write cache size (or the semaphore locking rate will be too high).

the other consideration, we do not want data to sit in the cache "too long", so the 
cache is flushed every 1 second or so.

all this cache stuff could be completely removed, deleted. result would be MIDAS that 
works ok for small data sizes and rates, but completely falls down at 10 Gige speeds and 
rates.

P.S. why is high semaphore locking rate bad? it turns out that UNIX and Linux semaphores 
are not "fair", they do not give equal share to all users, and (for example) an event 
buffer writer can "capture" the semaphore so the buffer reader (mlogger) will never get 
it, a pathological situation (to help with this, there is also a "read cache"). Read this 
discussion: https://stackoverflow.com/questions/17825508/fairness-setting-in-semaphore-class

K.O.
  2974   20 Mar 2025   Konstantin Olchanski   Bug Report   Default write cache size for new equipments breaks compatibility with older equipments
> the main purpose of the event buffer write cache

how to control the write cache size:

1) in a frontend, all equipments should ask for the same write cache size, both mfe.c and 
tmfe frontends will complain about mismatch

2) tmfe c++ frontend, per tmfe.md, set fEqConfWriteCacheSize in the equipment constructor, in 
EqPreInitHandler() or EqInitHandler(), or set it in ODB. default value is 10 Mbytes or value 
of MIN_WRITE_CACHE_SIZE define. periodic cache flush period is 0.5 sec in 
fFeFlushWriteCachePeriodSec.

3) mfe.cxx frontend, set it in the equipment definition (number after "hidden"), set it in 
ODB, or change equipment[i].write_cache_size. Value 0 sets the cache size to 
MIN_WRITE_CACHE_SIZE, 10 Mbytes.

4) in bm_set_cache_size(), acceptable values are 0 (disable the cache), MIN_WRITE_CACHE_SIZE 
(10 Mbytes) or anything bigger. Attempt to set the cache smaller than 10 Mbytes will set it 
to 10 Mbytes and print an error message.

All this is kind of reasonable, as only two settings of write cache size are useful: 0 to 
disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event 
rate and size values practical on current computers.

In mfe.cxx it looks to be impossible to set the write cache size to 0 (disable it), but 
actually all you need is to call "bm_set_cache_size(equipment[0].buffer_handle, 0, 0);" in 
frontend_init() (or is it in begin_of_run()?).

K.O.
  2975   20 Mar 2025   Konstantin Olchanski   Bug Report   Default write cache size for new equipments breaks compatibility with older equipments
> > the main purpose of the event buffer write cache
> how to control the write cache size:

The OP provided insufficient information to say what went wrong for them, but do try this:

1) in ODB, for all equipments, set write_cache_size to 0
2) in the frontend equipment table, set write_cache_size to 0

That is how it is done in the example frontend: examples/experiment/frontend.cxx

If this configuration still produces an error, we may have a bug somewhere, so please let us know how it shakes out.

K.O.
  2977   20 Mar 2025   Konstantin Olchanski   Bug Report   manalyzer module init order problem
Andrea Capra reported a problem with manalyzer module initialization order.

Original manalyzer design relies on the fact than module static constructors run in the same order as they 
appear on the linker command line. This has been true for a very long time, but now we have evidence that on 
Rocky Linux (unknown gcc version), this is no longer true.

The C++ standard does not define any specific order for static constructors in different source files. 
(Unlike C, C++ tends to treat the linker as something magical and asks the linker to do magical things.)

The tmfe c++ modular frontend was designed after manalyzer and one design change is to explicitly construct 
all objects from main() (at the acceptable cost of extra boilerplate code).

One solution is to use GCC attribute "init_priority (priority)", see
https://stackoverflow.com/questions/211237/static-variables-initialisation-order
https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html

To use this, manalyzer module registration should be modified to read:

static TARegister tar __attribute__((init_priority(200))) (new ExampleCxxFactory);

A less magical solution is to change the manalyzer design similar to tmfe, where modules are constructed and 
registered explicitly in the order specified by the source code.

K.O.

P.S. I have now run out of time to test this and commit it to the documentation. It also needs to be tested with 
LLVM C++ compilers.
  2978   21 Mar 2025   Jonas A. Krieger   Bug Report   midas equipment "format"
Hi Konstantin,

In the PSI muSR laboratory, we are running about 140 slow control devices across six instruments using Format FIXED.

Could you please wait a bit before removing support for it, so that we can assess if/how this will affect us?

Many thanks,
Jonas

> we are migrating the dragon experiment from an old mac to a new mac studio and we ran into a problem 
> where one equipment format was set to "fixed" instead of "midas". lots of confusion, mdump crash, 
> analyzer crash, etc. (mdump fixes for this are already committed).
> 
> it made us think whether equipment format is still needed. in the old days we had choice of MIDAS and 
> YBOS formats, but YBOS was removed years ago, and I was surprised that format FIXED was permitted at 
> all.
> 
> I did a midas source code review, this is what I found:
> 
> - remnants of YBOS support in a few places, commit to remove them pending.
> - FORMAT_ROOT is used in mlogger for automatic conversion of MIDAS banks to ROOT trees
> - FORMAT_FIXED is used in a few slow control drivers in drivers/class, instead of creating MIDAS 
> banks, they copy raw data directly into an event (there is no bank header and no way to identify such 
> events automatically)
> - lots of code to support different formats in mdump (mostly dead code)
> - the rest of the code does not care or use this format stuff
> 
> Current proposal is to remove support for all formats except FORMAT_MIDAS (and FORMAT_ROOT in 
> mlogger).
> 
> - defines of FORMAT_XXX will be removed from midas.h
> - "Format" will be removed from ODB Equipment/Common
> - "Format" will be removed from ODB Logger/Channel
> - to maintain binary compatibility, we can keep the "Format" ODB entries, but they will be ignored.
> 
> List of slow control drivers that support FORMAT_FIXED:
> 
> daq00:midas$ grep FORMAT_FIXED drivers/class/*
> drivers/class/cd_fdg.cxx:   if (fgd_info->format == FORMAT_FIXED) {
> drivers/class/cd_ivc32.cxx:   if (hv_info->format == XFORMAT_FIXED) {
> drivers/class/cd_rf.cxx:	if (rf_info->format == XFORMAT_FIXED) 
> drivers/class/generic.cxx:   if (gen_info->format == XFORMAT_FIXED) {
> drivers/class/hv.cxx:   if (hv_info->format == XFORMAT_FIXED) {
> drivers/class/multi.cxx:   if (m_info->format == XFORMAT_FIXED) {
> drivers/class/slowdev.cxx:   if (gen_info->format == XFORMAT_FIXED) {
> daq00:midas$ 
> 
> K.O.
  2979   21 Mar 2025   Stefan Ritt   Bug Report   Default write cache size for new equipments breaks compatibility with older equipments
 > All this is kind of reasonable, as only two settings of write cache size are useful: 0 to 
> disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event 
> rate and size values practical on current computers.

Indeed KO is correct that only 0 and 10MB make sense, and we cannot mix them. Having the cache setting in the equipment table is 
cumbersome: if you have 10 slow control equipments (cache size zero), you need to add many zeros at the end of the 10 equipment 
definitions in the frontend. 

I would rather implement a function or variable similar to fEqConfWriteCacheSize in the tmfe framework also in the mfe.cxx 
framework; then we only need to add one line like

gEqConfWriteCacheSize = 0;

in the frontend.cxx file and this will be used for all equipments of that frontend. If nobody complains, I will do that in April when I'm 
back from Japan.

Stefan
  2985   21 Mar 2025   Stefan Ritt   Bug Report   please fix compiler warning
> Unnamed person who added this clever bit of c++ coding, please fix this compiler warning. Stock g++ on Ubuntu LTS 24.04. Thanks in advance!
> 
> /home/olchansk/git/midas/src/system.cxx: In function ‘std::string ss_execs(const char*)’:
> /home/olchansk/git/midas/src/system.cxx:2256:43: warning: ignoring attributes on template argument ‘int (*)(FILE*)’ [-Wignored-attributes]
>  2256 |    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(cmd, "r"), pclose);
> 
> K.O.

Replace the code with:

```
   auto pclose_deleter = [](FILE* f) { pclose(f); };
   auto pipe = std::unique_ptr<FILE, decltype(pclose_deleter)>(
      popen(cmd, "r"),
      pclose_deleter
   );
```


Hope this is now warning-free.

Stefan
ELOG V3.1.4-2e1708b5