01 Oct 2025, Frederik Wauters, Forum, struct size mismatch of alarms
|
So I started our DAQ with an updated midas, after ca. 6 months+.
No issues except all FEs complaining about the Alarm ODB structure.
* I adapted to the new structure ( trigger count & trigger count required )
* restarted fe's
* recompiled
18:17:40.015 2025/09/30 [EPICS Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
struct size mismatch (expected 452, odb size 460)
18:17:40.009 2025/09/30 [SC Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
struct size mismatch (expected 460, odb size 452)
how do I get the FEs + ODB back in line here?
thanks |
01 Oct 2025, Nick Hastings, Forum, struct size mismatch of alarms
|
> So I started our DAQ with an updated midas, after ca. 6 months+.
Would be worthwhile mentioning the git commit hash or tag you are using.
> No issues except all FEs complaining about the Alarm ODB structure.
> * I adapted to the new structure ( trigger count & trigger count required )
> * restarted fe's
> * recompiled
>
> 18:17:40.015 2025/09/30 [EPICS Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> struct size mismatch (expected 452, odb size 460)
>
> 18:17:40.009 2025/09/30 [SC Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> struct size mismatch (expected 460, odb size 452)
This seems to be https://daq00.triumf.ca/elog-midas/Midas/2980
> how do I get the FEs + ODB back in line here?
Recompile all frontends against new midas.
Nick. |
01 Oct 2025, Nick Hastings, Forum, struct size mismatch of alarms
|
Just to be clear, it seems that your "EPICS Frontend" was either not recompiled against the new midas yet or the old binary is being run, but "SC Frontend" is using the new midas.
> > So I started our DAQ with an updated midas, after ca. 6 months+.
>
> Would be worthwhile mentioning the git commit hash or tag you are using.
>
> > No issues except all FEs complaining about the Alarm ODB structure.
> > * I adapted to the new structure ( trigger count & trigger count required )
> > * restarted fe's
> > * recompiled
> >
> > 18:17:40.015 2025/09/30 [EPICS Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> > struct size mismatch (expected 452, odb size 460)
> >
> > 18:17:40.009 2025/09/30 [SC Frontend,INFO] Fixing ODB "/Alarms/Alarms/logger"
> > struct size mismatch (expected 460, odb size 452)
>
> This seems to be https://daq00.triumf.ca/elog-midas/Midas/2980
>
> > how do I get the FEs + ODB back in line here?
>
> Recompile all frontends against new midas.
>
> Nick. |
02 Oct 2025, Stefan Ritt, Forum, struct size mismatch of alarms
|
Sorry to intervene there, but the FEs are usually compiled against libmidas.a. Therefore you have to compile midas, usually do a "make install" to update libmidas.a/.so, and then recompile the FEs. You probably forgot the "make install".
Stefan |
02 Oct 2025, Frederik Wauters, Forum, struct size mismatch of alarms
|
> Sorry to intervene there, but the FEs are usually compiled against libmidas.a. Therefore you have to compile midas, usually do a "make install" to update libmidas.a/.so, and then recompile the FEs. You probably forgot the "make install".
>
> Stefan
OK, solved, closed. Turned out I messed up rebuilding some of the FEs.
More generally, and with the "Documentation" discussion of the workshop in mind, ODB mismatch error messages of all kinds are a recurring phenomenon that confuses users. And MIDASGPT gave completely wrong suggestions. |
17 Sep 2025, Mark Grimes, Suggestion, Get manalyzer to configure midas::odb when running offline
|
Hi,
Lots of users like the midas::odb interface for reading from the ODB in manalyzers. However, it currently doesn't work offline without a few manual lines telling midas::odb to read from the ODB copy in the run header. The code also gets a bit messy working out the current filename and getting midas::odb to reopen the file currently being processed. This would be much cleaner if manalyzer set this up automatically, and then
user code could be written that is completely ignorant of whether it is running online or offline.
The change I suggest is in the `set_offline_odb` branch, commit 4ffbda6, which is simply:
diff --git a/manalyzer.cxx b/manalyzer.cxx
index 371f135..725e1d2 100644
--- a/manalyzer.cxx
+++ b/manalyzer.cxx
@@ -15,6 +15,7 @@
#include "manalyzer.h"
#include "midasio.h"
+#include "odbxx.h"
//////////////////////////////////////////////////////////
@@ -2075,6 +2076,8 @@ static int ProcessMidasFiles(const std::vector<std::string>& files, const std::v
if (!run.fRunInfo) {
run.CreateRun(runno, filename.c_str());
run.fRunInfo->fOdb = MakeFileDumpOdb(event->GetEventData(), event->data_size);
+ // Also set the source for midas::odb in case people prefer that interface
+ midas::odb::set_odb_source(midas::odb::STRING, std::string(event->GetEventData(), event->data_size));
run.BeginRun();
}
It happens at the point where the ODB record is already available and requires no effort from the user to
be able to read the ODB offline.
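As an aside, a minimal sketch of what user module code could then look like, online or offline, with no extra setup (assuming the odbxx interface as shown in the midas examples; the ODB path here is just an illustration):

#include <cstdio>
#include "odbxx.h"

void read_run_number()
{
   // opens whatever ODB source manalyzer has configured:
   // the live ODB online, the run-header dump offline
   midas::odb o("/Runinfo");
   int run_number = o["Run number"];
   printf("run number from ODB: %d\n", run_number);
}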
Thanks,
Mark. |
17 Sep 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> Lots of users like the midas::odb interface for reading from the ODB in manalyzers.
> +#include "odbxx.h"
This is a useful improvement. Before committing this patch, can you confirm that the RunInfo destructor deletes this ODB stuff from odbxx? manalyzer takes object lifetimes very seriously.
There is also the issue that two different RunInfo objects would load two different ODB dumps
into odbxx. (inability to access more than 1 ODB dump is a design feature of odbxx).
This is not an actual problem in manalyzer because it only processes one run at a time
and only 1 or 0 RunInfo objects exists at any given time.
Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
K.O. |
18 Sep 2025, Mark Grimes, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> ....Before commit of this patch, can you confirm the RunInfo destructor
> deletes this ODB stuff from odbxx? manalyzer takes object life times very seriously.
The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another
call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.
> Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
Yes, I hadn't realised that was an option. For that to work I guess the aforementioned static members could be made thread local storage, and
processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
class member or something.
Note that I missed doing the same for the end of run event, which should probably also be added.
Thanks,
Mark. |
18 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
>
> Yes, I hadn't realised that was an option. For that to work I guess the aforementioned static members could be made thread local storage, and
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> class member or something.
If we want to analyze several runs, I can easily add code to make this possible. In a new call to set_odb_source(), the previously allocated memory in that function can be freed. We can also make the memory handling thread-specific, allowing several threads to analyze different runs at the same time. But I will only invest work there once it's really needed by someone.
Stefan |
22 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change to manalyzer
Best,
Stefan |
22 Sep 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change to manalyzer
That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
K.O. |
22 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change to manalyzer
>
> That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
No need for clear(), since no memory gets allocated by midas::odb::set_odb_source(). All it does is remember the file name. When you instantiate a midas::odb object, the file gets loaded, and the midas::odb object gets initialized from the file contents. Then the buffer gets deleted (actually it's a simple local variable). Of course this causes some overhead (each midas::odb() constructor reads the whole file), but since the OS will cache the file, it's probably not so bad.
Stefan |
26 Sep 2025, Mark Grimes, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> ...I talked to KO and he agreed that you then commit your proposed change to manalyzer
Merged and pushed.
Thanks,
Mark. |
22 Sep 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > ....Before commit of this patch, can you confirm the RunInfo destructor
> > deletes this ODB stuff from odbxx? manalyzer takes object life times very seriously.
>
> The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another
> call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.
this is the behaviour we need to modify.
> > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> Yes, I hadn't realised that was an option.
It is an option I would like to keep open. Not too many use cases, but imagine a "split brain" experiment
that has two MIDAS instances record data into two separate midas files. (if LIGO were to use MIDAS,
consider LIGO Hanford and LIGO Livingston).
Assuming data in these two data sets have common precision timestamps,
our task is to assemble data from two input files into single physics events. The analyzer will need
to read two input files, each file with its run number, its own ODB dump, etc., process the midas
events (unpack, calibrate, filter, etc), look at the timestamps, assemble the data into physics events.
This trivially generalizes into reading 2, 3, or more input files.
> For that to work I guess the aforementioned static members could be made thread local storage, and
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> class member or something.
manalyzer is already multithreaded; if you need to keep track of which thread should see which odbxx global object, that seems like an abuse of the thread-local storage idea and intent.
> Note that I missed doing the same for the end of run event, which should probably also be added.
Ideally, the memory sanitizer will flag this for us and complain about anything that odbxx.clear() fails to free.
K.O. |
22 Sep 2025, Stefan Ritt, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> > Yes, I hadn't realised that was an option.
>
> It is an option I would like to keep open. Not too many use cases, but imagine a "split brain" experiment
> that has two MIDAS instances record data into two separate midas files. (if LIGO were to use MIDAS,
> consider LIGO Hanford and LIGO Livingston).
>
> Assuming data in these two data sets have common precision timestamps,
> our task is to assemble data from two input files into single physics events. The analyzer will need
> to read two input files, each file with it's run number, it's own ODB dump, etc, process the midas
> events (unpack, calibrate, filter, etc), look at the timestamps, assemble the data into physics events.
>
> This trivially generalizes into reading 2, 3, or more input files.
>
> > For that to work I guess the aforementioned static members could be made thread local storage, and
> > processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a
> > class member or something.
>
> manalyzer is already multithreaded, if you will need to keep track of which thread should see which odbxx global object,
> seems like abuse of the thread-local storage idea and intent.
I made the global variables storing the file name of type "thread_local", so each thread gets its own copy. This means, however, that each thread must then call midas::odb::set_odb_source() individually before creating any midas::odb objects. Interestingly enough, I just learned that thread_local (at least under Linux) has almost zero overhead, since these variables are placed by the linker into a separate memory space for each thread, so accessing them only means adding a memory offset.
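For illustration only, the general thread_local pattern (hypothetical variable and function names, not the actual odbxx members):

#include <cstdio>
#include <string>
#include <thread>

thread_local std::string gOdbSource;   // each thread sees its own copy

void analyze_run(const std::string& filename)
{
   gOdbSource = filename;   // what a per-thread set_odb_source() would remember
   printf("this thread reads its ODB dump from: %s\n", gOdbSource.c_str());
}

int main()
{
   std::thread t1(analyze_run, "run00001.mid.lz4");
   std::thread t2(analyze_run, "run00002.mid.lz4");
   t1.join();
   t2.join();
   return 0;
}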
Let's see how far we get with this...
Stefan |
22 Sep 2025, Konstantin Olchanski, Info, obsolete mana.c removal
|
Following discussions at the MIDAS workshop and the proposed removal of support for ROOT, the very obsolete mana.c
analyzer framework has reached the end of the line.
Right now we cannot remember any experiment that uses a mana.c based analyzer. Most experiments use analyzers based
on the rootana package (developed for ALPHA-1 at CERN) and on the manalyzer package (developed for ALPHA-2 and ALPHA-
g at CERN, with multithreading support contributed by Joseph McKenna).
If you know of any experiment that uses a mana.c based analyzer, please let us know. We can help with building it
using an outside-of-midas local copy of mana.c or help with migration to a newer framework (or migration to a
framework-free standalone analyzer).
If we do not hear from anybody, we will remove mana.c (and rmana) at the same time as we remove ROOT support from
mlogger (rmlogger).
K.O. |
23 Sep 2025, Andreas Suter, Info, obsolete mana.c removal
|
Hi,
at the LEM Experiment at PSI, we still use mana.c and would like to keep it until the end of 2026, when we will enter a long shutdown. Then we will switch to the manalyzer. Before that, there is simply no time for the changeover. One thing I already noticed is the "lack" of documentation, since for a lot of items I found simply "TBW". I know that writing documentation is boring and hard, but I hope that by the time we switch there is more complete documentation available.
Thanks a lot for the ongoing development and support
Andreas
> Following discussions at the MIDAS workshop and the proposed removal of support for ROOT, the very obsolete mana.c
> analyzer framework has reached the end of the line.
>
> Right now we cannot remember any experiment that uses a mana.c based analyzer. Most experiments use analyzers based
> on the rootana package (developed for ALPHA-1 at CERN) and on the manalyzer package (developed for ALPHA-2 and ALPHA-
> g at CERN, with multithreading support contributed by Joseph McKenna).
>
> If you know of any experiment that uses a mana.c based analyzer, please let us know. We can help with building it
> using an outside-of-midas local copy of mana.c or help with migration to a newer framework (or migration to a
> framework-free standalone analyzer).
>
> If we do not hear from anybody, we will remove mana.c (and rmana) at the same time as we remove ROOT support from
> mlogger (rmlogger).
>
> K.O. |
23 Sep 2025, Konstantin Olchanski, Info, obsolete mana.c removal
|
> Hi, at the LEM Experiment at PSI, we still use mana.c and would like to keep it until end of 2026, where we will enter a long shutdown.
Excellent, good to hear from you! Once we remove ROOT support rmana.o will be gone, only mana.o (no ROOT) will remain. Will this break your builds?
One solution could be to copy mana.c from MIDAS into your source tree and compile/link it from there (not from MIDAS).
Perhaps the way to proceed is to create a test branch with ROOT and mana.c removed; you can try it, report success/failure, and we go from there.
We should schedule this work for when both of us have a block of free time to work on it.
K.O. |
24 Sep 2025, Andreas Suter, Info, obsolete mana.c removal
|
Sorry,
I have now had the time to dig deeper into our code and realized that we actually use rmana, i.e. WITH ROOT. If there is an easy way to incorporate the necessary parts temporarily on our side, we will do it. Without ROOT this would have been quite easy; with ROOT, I am not so sure. Anyhow, as said, the timeline for this is only until the end of 2026.
Andreas
> > Hi, at the LEM Experiment at PSI, we still use mana.c and would like to keep it until end of 2026, where we will enter a long shutdown.
>
> Excellent, good to hear from you! Once we remove ROOT support rmana.o will be gone, only mana.o (no ROOT) will remain. Will this break your builds?
>
> One solution could be to copy mana.c from MIDAS into your source tree and compile/link it from there (not from MIDAS).
>
> Perhaps the way to proceed is create a test branch with ROOT and mana.c removed, you can try it,
> report success/fail and we go from there.
>
> We should schedule this work for when both of us have a block of free time to work on it.
>
> K.O. |
24 Sep 2025, Thomas Lindner, Suggestion, Improve process for adding new variables that can be shown in history plots
|
Documenting a discussion I had with Konstantin a while ago.
One aspect of the MIDAS history plotting I find frustrating is the sequence for adding a new history variable and then plotting it. At least for me, the sequence for a new variable is:
1) Add a new named variable to a MIDAS bank; compile the frontend and restart it; check that the new
variable is being displayed correctly in MIDAS equipment pages.
2) Stop and restart programs; usually I do
- stop and restart mlogger
- stop and restart mhttpd
- stop and restart mlogger
- stop and restart mhttpd
3) Start adding the new variable to the history plots.
My frustration is with step 2, where the logger and web server need to be restarted multiple times.
I think that only one of those programs actually needs to be restarted twice, but I can never remember
which one, so I restart both programs twice just to be sure.
I don't entirely understand the sequence of what happens with these restarts so that the web server
becomes aware of what new variables are in the history system; so I can't make a well-motivated
suggestion.
Ideally, step 2 would only require restarting mlogger, with mhttpd automatically becoming aware of what new variables were in the system.
But even just having a single restart of mlogger, then mhttpd, would be an improvement on the current situation and easier to explain to users.
Hopefully this would not be a huge amount of work. |
22 Sep 2025, Konstantin Olchanski, Info, switch midas to c++17
|
Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17. There are many new features and we want to start using some of them.
Per my previous message https://daq00.triumf.ca/elog-midas/Midas/3084,
c++17 is available on current MacOS, U-22 and newer, el9 and newer, D-12 and newer.
(ROOT moved to C++17 as of release 6.30 on November 6, 2023)
As I reported earlier, MIDAS already builds with c++23 on U-24, and this move does not require any
actual code changes other than a bump of c++ version in CMakeLists.txt and Makefile.
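Purely for illustration, a few of the c++17 features that become usable after the bump (structured bindings, if-with-initializer, std::string_view); this is generic C++, not midas code:

#include <cstdio>
#include <map>
#include <string>
#include <string_view>

int main()
{
   std::map<std::string, int> rates = {{"trigger", 1000}, {"scaler", 1}};

   for (const auto& [name, hz] : rates)                      // structured bindings
      printf("%s: %d Hz\n", name.c_str(), hz);

   if (auto it = rates.find("trigger"); it != rates.end())   // if with initializer
      printf("found %s\n", it->first.c_str());

   std::string_view sv = "no string copy made";              // std::string_view
   printf("%.*s\n", (int)sv.size(), sv.data());
   return 0;
}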
Please let us know if this change will cause problems or if you think that we should move to an older
c++ (c++14) or newer c++ (c++20 or c++23 or c++26).
If we do not hear anything, we will implement this change in about 2-3 weeks.
K.O. |
23 Sep 2025, Pavel Murat, Info, switch midas to c++17
|
perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha
> Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17. There are
> many new features and we want to start using some of them.
>
> Per my previous message https://daq00.triumf.ca/elog-midas/Midas/3084,
> c++17 is available on current MacOS, U-22 and newer, el9 and newer, D-12 and newer.
>
> (ROOT moved to C++17 as of release 6.30 on November 6, 2023)
>
> As I reported earlier, MIDAS already builds with c++23 on U-24, and this move does not require any
> actual code changes other than a bump of c++ version in CMakeLists.txt and Makefile.
>
> Please let us know if this change will cause problems or if you think that we should move to an older
> c++ (c++14) or newer c++ (c++20 or c++23 or c++26).
>
> If we do not hear anything, we will implement this change in about 2-3 weeks.
>
> K.O. |
23 Sep 2025, Konstantin Olchanski, Info, switch midas to c++17
|
> perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha
confirmed. std::format is an improvement over K&R C printf().
but seems unavailable on U-20 and older, requires --std=c++20 on U-24 and MacOS.
but also available as a standalone library: https://github.com/fmtlib/fmt
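for comparison, roughly how the two look side by side (std::format needs <format> and -std=c++20; the standalone fmt library provides fmt::format with the same syntax):

#include <cstdio>
#include <format>    // C++20
#include <string>

int main()
{
   int run = 42;
   double rate = 123.4;

   printf("run %d, rate %.1f Hz\n", run, rate);                       // K&R C style
   std::string s = std::format("run {}, rate {:.1f} Hz", run, rate);  // C++20
   printf("%s\n", s.c_str());
   return 0;
}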
myself, I use printf() and msprintf(), I think std::format is in the "too little, too late to save C++"
department.
K.O. |
23 Sep 2025, Pavel Murat, Info, switch midas to c++17
|
> > perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha
>
> confirmed. std::format is an improvement over K&R C printf().
>
> but seems unavailable on U-20 and older, requires --std=c++20 on U-24 and MacOS.
agreed! - availability is significantly more important. -- regards, Pasha |
23 Sep 2025, Konstantin Olchanski, Info, long history variable names
|
To record discussion with Stefan about long history variable names.
We have several requests to remove the 32-byte limit on history variable names.
Presently, history variable names are formed from two 32-byte strings: history event name and
history tag name:
* history event name is usually the same as the equipment name (also a 32-byte string)
* history tag name is composed from /eq/xxx/variables/yyy name (also a 32-byte string) or from
names in /eq/xxx/variables/names and "names zzz", which can have arbitrary length (and tag name
would have to be truncated).
This worked well for "per-equipment" history: history events corresponded to equipment/variables and all data from equipment/variables were written together in one go (this is very inefficient if the value of only one variable is updated).
Then at some point we implemented "per-variable" history:
* history event name is an equipment name (32-byte string) plus the /eq/xxx/variables/vvv variable name (also a 32-byte string) (obviously truncation is quite possible)
* history tag name is unchanged (also can be truncated)
With "per-variable" history, history events correspond to individual variables (ODB entries) in
/eq/xxx/variables. If the value of one variable is updated, only that variable is written to the history. This is much more efficient. (If the variable is an array, the whole array is written; if the variable is a subdirectory, the whole subdirectory is written.)
We considered even finer granularity, writing to history file only the one value that changed, but
decided against slicing the data too fine. (for arrays, MIDAS frontends usually update all values
of an array, as in "array of 10 temperatures" or "array of 4000 high voltages").
Many years later, we have the SQL history and the FILE history which do not have the 32-byte limit
on history event names and history tag names (no limit in the MIDAS C++ code. MySQL and PgSQL have
limits on table name and column name lengths, 64 bytes and 31 bytes respectively, best I can
tell).
But the API still uses the MIDAS "struct TAG" with the 32-byte tag name length limit.
It is pretty easy to change the API to use a new "struct TAG_CXX" with std::string unlimited-
length tag names, but the old MIDAS history will be unable to deal with long names. Hence
the discussion about removing the old MIDAS history and keeping only the FILE and SQL history
(plus the mhdump and mh2sql tools to convert old history files to the new formats).
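For reference, the fixed-width tag as declared in midas.h (paraphrased from memory), next to a hypothetical TAG_CXX along the lines discussed above:

#include <cstdint>
#include <string>

struct TAG {             // existing: name limited to NAME_LENGTH = 32 bytes
   char     name[32];
   uint32_t type;        // TID_xxx data type (DWORD in midas.h)
   uint32_t n_data;      // number of array elements
};

struct TAG_CXX {         // hypothetical: unlimited-length tag names
   std::string name;
   uint32_t    type   = 0;
   uint32_t    n_data = 0;
};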
(some code in mhttpd may need to be corrected for long history names. javascript code should be
okay, history plot code may need adjustment to display pathologically long names. use small font,
truncate, etc).
K.O. |
23 Sep 2025, Konstantin Olchanski, Info, 64-bit time_t
|
To record discussion with Stefan regarding 64-bit time_t
To remember:
signed 32-bit time_t will overflow in 2038 (soon enough)
unsigned 32-bit time_t will overflow in 2106 ("not my problem")
https://en.wikipedia.org/wiki/Year_2038_problem
https://wiki.debian.org/ReleaseGoals/64bit-time
64-bit Linux uses 64-bit time_t since as far back as el6.
MIDAS uses unsigned 32-bit (DWORD) time-in-seconds in many places (ODB, event
headers, event buffers, etc)
MIDAS also uses unsigned 32-bit (DWORD) time-in-milliseconds in many places.
All time arithmetic is done using unsigned 32-bit math, these time calculations
are good until year 2106, at which time they will wrap around
(but still work correctly).
So we do not need to do anything, but...
To reduce confusion between the different time types, we will probably
introduce a 32-bit-time-in-seconds data type (i.e. time32_t alias for uint32_t),
and a 32-bit-time-in-milliseconds data type (i.e. millitime32_t alias for
uint32_t). Also rename ss_time() to ss_time32() and ss_millitime() to
ss_millitime32().
This should help avoid accidental mixing of MIDAS 32-bit time, system 64-bit
time and MIDAS 32-bit-time-in-milliseconds.
(confusion between time-in-seconds and time-in-milliseconds happened several
times in MIDAS code).
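A sketch of what the proposed aliases and renamed accessors could look like (names taken from the text above; none of this is in midas yet):

#include <cstdint>

typedef uint32_t time32_t;        // MIDAS time-in-seconds, wraps around in 2106
typedef uint32_t millitime32_t;   // MIDAS time-in-milliseconds

time32_t      ss_time32();        // would replace ss_time()
millitime32_t ss_millitime32();   // would replace ss_millitime()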
There will be additional discussion and announcements if we go ahead with these
changes.
K.O. |
22 Sep 2025, Konstantin Olchanski, Info, removal of ROOT support in mlogger
|
Historically, building MIDAS with ROOT caused us many problems - build failures because of c++ version
mismatch, CFLAGS mismatch; run-time failures because of ROOT library mismatches; etc, etc.
Following discussions at the MIDAS Workshop, we think we should finally bite the bullet and remove ROOT
support from MIDAS:
- remove support for writing data in ROOT TTree format in mlogger (rmlogger)
- remove support for ROOT in mana.c based analyzer (which itself we propose to remove)
- remove ROOT helper functions in rmidas.h
- remove ROOT support in cmake
- remove rmlogger and rmana
This change will not affect the rootana and manalyzer packages, they will continue to be built with
ROOT support if ROOT is available.
Right now, we cannot remember any experiment that uses the ROOT TTree output function in mlogger.
If you use this feature, we very much would like to hear from you. Please contact Stefan or myself or
reply to this message.
As replacement for rmlogger, we could implement identical or similar functionality using an manalyzer-
based custom logger, but we need at least one user who can test it for us and confirm that it works
correctly.
K.O. |
07 Feb 2025, Konstantin Olchanski, Info, switch midas to next c++
|
to continue where we left off in 2019,
https://daq00.triumf.ca/elog-midas/Midas/1520
time to choose the next c++!
snapshot of 2019:
- Linux RHEL/SL/CentOS6 - gcc 4.4.7, no C++11.
- Linux RHEL/SL/CentOS7 - gcc 4.8.5, full C++11, no C++14, no C++17
- Ubuntu 18.04.2 LTS - gcc 7.3.0, full C++11, full C++14, "experimental" C++17.
- MacOS 10.13 - llvm 10.0.0 (clang-1000.11.45.5), full C++11, full C++14, full C++17
the world moved on:
- el6/SL6 is gone
- el7/CentOS-7 is out the door, only two experiments on my plate (EMMA and ALPHA-g)
- el8 was a stillborn child of RedHat
- el9 - gcc 11.5 with 12, 13, and 14 available.
- el10 - gcc 14.2
- U-18 - gcc 7.5
- U-20 - gcc 9.4 default, 10.5 available
- U-22 - gcc 11.4 default, 12.3 available
- U-24 - gcc 13.3 default, 14.2 available
- MacOS 15.2 - llvm/clang 16
Next we read C++ level support:
(see here for GCC C++ support: https://gcc.gnu.org/projects/cxx-status.html)
(see here for LLVM clang c++ support: https://clang.llvm.org/cxx_status.html)
(see here for GLIBC c++ support: https://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html)
gcc:
4.4.7 - no C++11
4.8.5 - full C++11, no C++14, no C++17
7.3.0 - full C++11, full C++14, "experimental" C++17.
7.5.0 - c++17 and older
9.4.0 - c++17 and older
10.5 - no c++26, no c++23, mostly c++20, c++17 and older
11.4 - no c++26, no c++23, full c++20 and older
12.3 - no c++26, mostly c++23, full c++20 and older
13.3 - no c++26, mostly c++23, full c++20 and older
14.2 - limited c++26, mostly c++23, full c++20 and older
clang:
16 - no c++26, mostly c++23, mostly c++20, full c++17 and older
I think our preference is c++23, the number of useful improvements is quite big.
This choice will limit us to:
- el9 or newer
- U-22 or newer
- current MacOS 15.2 with Xcode 16.
It looks like gcc and llvm support for c++23 is only partial, so obviously we will use a subset of c++23
supported by both.
Next step is to try to build midas with c++23 on el9 and U-22,24 and see what happens.
K.O. |
20 Mar 2025, Konstantin Olchanski, Info, switch midas to next c++
|
> time to choose the next c++!
Ubuntu 24.04: MIDAS builds with -std=c++23 without errors or any additional warnings. (but does it work? "make
test" is ok).
K.O. |
22 Sep 2025, Konstantin Olchanski, Info, switch midas to next c++
|
As part of discussions with Stefan during the MIDAS workshop,
an update to supported versions of c++ on different platforms.
- el6/SL6 is gone, gcc 4.4.7, no c++11
- el7/CentOS-7 is out the door, gcc 4.8.5, full c++11, no c++14, no c++17
- el8 was a stillborn child of RedHat
- el9 - gcc 11.5 with 12, 13, and 14 available, incomplete c++26, c++23, experimental c++20, default c++17
- el10 - gcc 14.2, incomplete c++26, c++23, experimental c++20, default c++17
- U-18 - gcc 7.5, experimental, incomplete c++17
- U-20 - gcc 9.4 default, 10.5 available, experimental, incomplete c++17
- U-22 - gcc 11.4 default, 12.3 available, default c++17
- U-24 - gcc 13.3 default, 14.2 available, default c++17
- D-11 - gcc 10.2, experimental c++17
- D-12 - gcc 12.2, default c++17
- MacOS 15.2 - llvm/clang 16, default c++17
- MacOS 15.7 - llvm/clang 17, default c++17
Taken from C++ level support:
GCC C++ support: https://gcc.gnu.org/projects/cxx-status.html
LLVM clang c++ support: https://clang.llvm.org/cxx_status.html
GLIBC c++ support: https://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html
This suggests that c++17 is available on all current platforms.
K.O.
P.S. ROOT moved to C++17 as of release 6.30 on November 6, 2023. |
16 Apr 2025, Thomas Lindner, Info, MIDAS workshop (online) Sept 22-23, 2025
|
Dear MIDAS enthusiasts,
We are planning a fifth MIDAS workshop, following on from previous successful
workshops in 2015, 2017, 2019 and 2023. The goals of the workshop include:
- Getting updates from MIDAS developers on new features, bug fixes and planned
changes.
- Getting reports from MIDAS users on how they are using MIDAS and what problems
they are facing.
- Making plans for future MIDAS changes and improvements
We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide
with a visit of Stefan to TRIUMF). We are tentatively planning to have a four
hour session on each day, with the sessions timed for morning in Vancouver and
afternoon/evening in Europe. Sorry, the sessions are likely to again not be well
timed for our colleagues in Asia.
We will provide exact times and more details closer to the date. But I hope
people can mark the dates in their calendars; we are keen to hear from as much of
the MIDAS community as possible.
Best Regards,
Thomas Lindner |
03 Jun 2025, Thomas Lindner, Info, MIDAS workshop (online) Sept 22-23, 2025
|
Dear all,
We have setup an indico page for the MIDAS workshop on Sept 22-23. The page is here
https://indico.psi.ch/event/17580/overview
As I mentioned, we are keen to hear reports from any users or developers; we want to hear
how MIDAS is working for you and what improvements you would like to see. If you or your
experiment would like to give a talk about your MIDAS experiences then please submit an
abstract through the indico page.
Also, feel free to also register for the workshop (no fees). Registration is not
mandatory, but it would be useful for us to have an idea how many people will connect.
Thanks,
Thomas
> Dear MIDAS enthusiasts,
>
> We are planning a fifth MIDAS workshop, following on from previous successful
> workshops in 2015, 2017, 2019 and 2023. The goals of the workshop include:
>
> - Getting updates from MIDAS developers on new features, bug fixes and planned
> changes.
> - Getting reports from MIDAS users on how they are using MIDAS and what problems
> they are facing.
> - Making plans for future MIDAS changes and improvements
>
> We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide
> with a visit of Stefan to TRIUMF). We are tentatively planning to have a four
> hour session on each day, with the sessions timed for morning in Vancouver and
> afternoon/evening in Europe. Sorry, the sessions are likely to again not be well
> timed for our colleagues in Asia.
>
> We will provide exact times and more details closer to the date. But I hope
> people can mark the dates in their calendars; we are keen to hear from as much of
> the MIDAS community as possible.
>
> Best Regards,
> Thomas Lindner |
08 Sep 2025, Thomas Lindner, Info, MIDAS workshop (online) Sept 22-23, 2025
|
Dear all,
A reminder we will have our MIDAS workshop starting two weeks from today (Sept 22-23). The
meeting will be in the morning in Vancouver, evening in Europe. A detailed schedule is available
here
https://indico.psi.ch/event/17580/timetable/#20250922.detailed
The zoom link is available from the indico overview page. The schedule will allow for a fair bit
of discussion time, so it is unlikely that talks will start exactly on time.
Looking forward to seeing people there.
Thomas
> Dear all,
>
> We have setup an indico page for the MIDAS workshop on Sept 22-23. The page is here
>
> https://indico.psi.ch/event/17580/overview
>
> As I mentioned, we are keen to hear reports from any users or developers; we want to hear
> how MIDAS is working for you and what improvements you would like to see. If you or your
> experiment would like to give a talk about your MIDAS experiences then please submit an
> abstract through the indico page.
>
> Also, feel free to also register for the workshop (no fees). Registration is not
> mandatory, but it would be useful for us to have an idea how many people will connect.
>
> Thanks,
> Thomas
>
>
> > Dear MIDAS enthusiasts,
> >
> > We are planning a fifth MIDAS workshop, following on from previous successful
> > workshops in 2015, 2017, 2019 and 2023. The goals of the workshop include:
> >
> > - Getting updates from MIDAS developers on new features, bug fixes and planned
> > changes.
> > - Getting reports from MIDAS users on how they are using MIDAS and what problems
> > they are facing.
> > - Making plans for future MIDAS changes and improvements
> >
> > We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide
> > with a visit of Stefan to TRIUMF). We are tentatively planning to have a four
> > hour session on each day, with the sessions timed for morning in Vancouver and
> > afternoon/evening in Europe. Sorry, the sessions are likely to again not be well
> > timed for our colleagues in Asia.
> >
> > We will provide exact times and more details closer to the date. But I hope
> > people can mark the dates in their calendars; we are keen to hear from as much of
> > the MIDAS community as possible.
> >
> > Best Regards,
> > Thomas Lindner |
18 Sep 2025, Thomas Lindner, Info, MIDAS workshop (online) Sept 22-23, 2025
|
Dear all,
A final reminder that the MIDAS workshop will be occurring next Monday and Tuesday (at least in Europe
and the Americas). The full schedule is posted here
https://indico.psi.ch/event/17580/timetable/#20250922.detailed
The zoom link is on the indico overview page.
Note that for those wanting to attend the workshop in person we will be meeting in ISAC2 room 223 at
TRIUMF.
See you there.
Thomas
> Dear all,
>
> A reminder we will have our MIDAS workshop starting two weeks from today (Sept 22-23). The
> meeting will be in the morning in Vancouver, evening in Europe. A detailed schedule is available
> here
>
> https://indico.psi.ch/event/17580/timetable/#20250922.detailed
>
> The zoom link is available from the indico overview page. The schedule will allow for a fair bit
> of discussion time, so it is unlikely that talks will start exactly on time.
>
> Looking forward to seeing people there.
>
> Thomas
>
> > Dear all,
> >
> > We have setup an indico page for the MIDAS workshop on Sept 22-23. The page is here
> >
> > https://indico.psi.ch/event/17580/overview
> >
> > As I mentioned, we are keen to hear reports from any users or developers; we want to hear
> > how MIDAS is working for you and what improvements you would like to see. If you or your
> > experiment would like to give a talk about your MIDAS experiences then please submit an
> > abstract through the indico page.
> >
> > Also, feel free to also register for the workshop (no fees). Registration is not
> > mandatory, but it would be useful for us to have an idea how many people will connect.
> >
> > Thanks,
> > Thomas
> >
> >
> > > Dear MIDAS enthusiasts,
> > >
> > > We are planning a fifth MIDAS workshop, following on from previous successful
> > > workshops in 2015, 2017, 2019 and 2023. The goals of the workshop include:
> > >
> > > - Getting updates from MIDAS developers on new features, bug fixes and planned
> > > changes.
> > > - Getting reports from MIDAS users on how they are using MIDAS and what problems
> > > they are facing.
> > > - Making plans for future MIDAS changes and improvements
> > >
> > > We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide
> > > with a visit of Stefan to TRIUMF). We are tentatively planning to have a four
> > > hour session on each day, with the sessions timed for morning in Vancouver and
> > > afternoon/evening in Europe. Sorry, the sessions are likely to again not be well
> > > timed for our colleagues in Asia.
> > >
> > > We will provide exact times and more details closer to the date. But I hope
> > > people can mark the dates in their calendars; we are keen to hear from as much of
> > > the MIDAS community as possible.
> > >
> > > Best Regards,
> > > Thomas Lindner |
22 Sep 2025, Thomas Lindner, Info, MIDAS workshop (online) Sept 22-23, 2025
|
Dear all,
The original zoom link on the indico page was not correct. We have created a new zoom link and updated the
Indico overview page. Please refresh the Indico overview page to get the correct link.
Thomas
> Dear all,
>
> A final reminder that the MIDAS workshop will be occurring next Monday and Tuesday (at least in Europe
> and the Americas). See the full schedule is posted here
>
> https://indico.psi.ch/event/17580/timetable/#20250922.detailed
>
> The zoom link is on the indico overview page.
>
> Note that for those wanting to attend the workshop in person we will be meeting in ISAC2 room 223 at
> TRIUMF.
>
> See you there.
> Thomas
>
> > Dear all,
> >
> > A reminder we will have our MIDAS workshop starting two weeks from today (Sept 22-23). The
> > meeting will be in the morning in Vancouver, evening in Europe. A detailed schedule is available
> > here
> >
> > https://indico.psi.ch/event/17580/timetable/#20250922.detailed
> >
> > The zoom link is available from the indico overview page. The schedule will allow for a fair bit
> > of discussion time, so it is unlikely that talks will start exactly on time.
> >
> > Looking forward to seeing people there.
> >
> > Thomas
> >
> > > Dear all,
> > >
> > > We have setup an indico page for the MIDAS workshop on Sept 22-23. The page is here
> > >
> > > https://indico.psi.ch/event/17580/overview
> > >
> > > As I mentioned, we are keen to hear reports from any users or developers; we want to hear
> > > how MIDAS is working for you and what improvements you would like to see. If you or your
> > > experiment would like to give a talk about your MIDAS experiences then please submit an
> > > abstract through the indico page.
> > >
> > > Also, feel free to also register for the workshop (no fees). Registration is not
> > > mandatory, but it would be useful for us to have an idea how many people will connect.
> > >
> > > Thanks,
> > > Thomas
> > >
> > >
> > > > Dear MIDAS enthusiasts,
> > > >
> > > > We are planning a fifth MIDAS workshop, following on from previous successful
> > > > workshops in 2015, 2017, 2019 and 2023. The goals of the workshop include:
> > > >
> > > > - Getting updates from MIDAS developers on new features, bug fixes and planned
> > > > changes.
> > > > - Getting reports from MIDAS users on how they are using MIDAS and what problems
> > > > they are facing.
> > > > - Making plans for future MIDAS changes and improvements
> > > >
> > > > We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide
> > > > with a visit of Stefan to TRIUMF). We are tentatively planning to have a four
> > > > hour session on each day, with the sessions timed for morning in Vancouver and
> > > > afternoon/evening in Europe. Sorry, the sessions are likely to again not be well
> > > > timed for our colleagues in Asia.
> > > >
> > > > We will provide exact times and more details closer to the date. But I hope
> > > > people can mark the dates in their calendars; we are keen to hear from as much of
> > > > the MIDAS community as possible.
> > > >
> > > > Best Regards,
> > > > Thomas Lindner |
17 Sep 2025, Mark Grimes, Bug Report, Midas no longer compiles on macOS
|
Hi,
The current develop branch no longer compiles on macOS. I get lots of errors of the form
/Users/me/midas/src/history_schema.cxx:740:4: error: unknown type name 'off64_t'; did you mean 'off_t'?
740 | off64_t fDataOffset = 0;
| ^~~~~~~
| off_t
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.5.sdk/usr/include/sys/_types/_off_t.h:31:33: note: 'off_t' declared here
31 | typedef __darwin_off_t off_t;
| ^
There are also similar errors about lseek64. This appears to have come in with commit 9a6ad2e dated
23rd July, but I think it was merged into develop with commit 2beeca0 on 3rd of September.
Googling around, it seems that off64_t is a GNU extension. I don't know of a cross-platform solution, but I'm happy to test if someone has a suggestion.
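For what it is worth, the kind of shim sometimes used elsewhere, since on macOS off_t is already 64-bit and lseek() takes 64-bit offsets (just a sketch, not tested against the midas code):

#ifdef __APPLE__
#include <sys/types.h>
#include <unistd.h>
typedef off_t off64_t;                 // off_t is 64-bit on macOS
#define lseek64(fd, offset, whence) lseek((fd), (offset), (whence))
#endif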
Thanks,
Mark. |
17 Sep 2025, Konstantin Olchanski, Bug Report, Midas no longer compiles on macOS
|
> The current develop branch no longer compiles on macOS. I get lots of errors of the form
> /Users/me/midas/src/history_schema.cxx:740:4: error: unknown type name 'off64_t' ...
Confirmed. No idea why off64_t is missing on MacOS. I will try to fix it next week.
K.O. |
14 Aug 2025, Tam Kai Chung, Forum, How can I retrieve online data
|
Dear experts,
I would like to know how to retrieve the online data during the experiment so
that I can create my own custom plot. I execute my own frontend.exe to start the
experiment. I can get a midas file after the experiment, but I am not sure about
how to retrieve the online data. I know that rootana can help us to get the
online plots, but the instructions in rootana are not clear. Can anyone give me some suggestions? Thank you.
Best,
Terry |
14 Aug 2025, Konstantin Olchanski, Forum, How can I retrieve online data
|
> I would like to know how to retrieve the online data during the experiment so
> that I can create my own custom plot. I execute my own frontend.exe to start the
> experiment. I can get a midas file after the experiment, but I am not sure about
> how to retrieve the online data. I know that rootana can help us to get the
> online plots, but the instructions in rootana is not clear. Can anyone give me
> some suggestion? Thank you.
The current package for analyzing MIDAS data is the "m" analyzer, usually in the manalyzer subdirectory of your midas package,
but it can also be used stand-alone without MIDAS.
There is several examples:
manalyzer_example_cxx.cxx - a simple "c++" example that shows how to extract midas bank data
manalyzer_example_root.cxx - how to create ROOT histograms (that you can see online using jsroot)
manalyzer_example_root_graphics.cxx - how to create a ROOT graphical program (obsoleted by jsroot, but still possible)
manalyzer_example_flow*.cxx - more advanced examples on using a flow analyzer
Documentation is in README.md
Unfortunately there is no tutorial or 5-minute youtube explainer; each experiment's needs are very different, and there is no way to write a one-size-fits-all recipe.
Please take a look at the existing examples first, then send me a PM with any additional questions (or ask here). If you can
explain what kind of data you have and how you want to look at it, I should be able to guide you through writing an appropriate
manalyzer module.
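For orientation, a very rough sketch of the shape of a module, written from memory; class names, method signatures and required overrides may differ slightly, so treat manalyzer_example_cxx.cxx as the authoritative reference:

#include <cstdio>
#include <string>
#include <vector>
#include "manalyzer.h"
#include "midasio.h"

class MyModule: public TARunObject
{
public:
   MyModule(TARunInfo* runinfo) : TARunObject(runinfo) { }

   TAFlowEvent* Analyze(TARunInfo* runinfo, TMEvent* event, TAFlags* flags, TAFlowEvent* flow)
   {
      // called for every midas event: decode banks, fill histograms, etc.
      printf("event serial %d, data size %d\n", (int)event->serial_number, (int)event->data_size);
      return flow;
   }
};

class MyModuleFactory: public TAFactory
{
public:
   void Init(const std::vector<std::string>& args) { }
   void Finish() { }
   TARunObject* NewRunObject(TARunInfo* runinfo) { return new MyModule(runinfo); }
};

static TARegister tar(new MyModuleFactory);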
K.O. |
24 Jul 2025, Konstantin Olchanski, Bug Fix, support for large history files
|
FILE history code (mhf_*.dat files) did not support reading history files bigger than about 2 GB; this is now fixed on branch "feature/history_off64_t" (in final testing, to be merged ASAP).
History files were never meant to get bigger than about 100 MBytes, but it turns out large files can still
happen:
1) files are rotated only when history is closed and reopened
2) we removed history close and open on run start
3) so files are rotated only when mlogger is restarted
In the old code, large files would still happen if some equipment writes a lot of data (I have a file from
Stefan with history record size about 64 kbytes, written at 1/second, MIDAS handles this just fine) or if
no runs are started and stopped for a long time.
There are reasons for keeping file size smaller:
a) I would like to use mmap() to read history files, and mmap() of a 100 Gbyte file on a 64 Gbyte RAM
machine would not work very well.
b) I would like to implement compressed history files and decompression of a 100 Gbyte file takes much
longer than decompression of a 100 Mbyte file. it is better if data is in smaller chunks.
(it is easy to write a utility program to break-up large history files into smaller chunks).
Why use mmap()? I note that the current code does 1 read() syscall per history record (it is much better to
read data in bigger chunks) and does multiple seek()/read() syscalls to find the right place in the history
file (plays silly buggers with the OS read-ahead and data caching). mmap() eliminates all syscalls and has
the potential to speed things up quite a bit.
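For illustration, the generic POSIX pattern (not the actual history_schema.cxx code): map the whole file once, then scan records as plain memory, with no per-record read()/lseek() syscalls. The file name below is made up.

#include <cstdio>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
   const char* path = "mhf_example.dat";   // hypothetical history file name
   int fd = open(path, O_RDONLY);
   if (fd < 0) { perror("open"); return 1; }
   struct stat st;
   if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }
   void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
   if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
   // history records can now be scanned directly in memory starting at p
   printf("mapped %jd bytes\n", (intmax_t)st.st_size);
   munmap(p, st.st_size);
   close(fd);
   return 0;
}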
K.O. |
23 Jul 2025, Konstantin Olchanski, Suggestion, K.O.'s guide to new C/C++ data types
|
Over the last 10 years, the traditional C/C++ data types have been
displaced by a hodgepodge of new data types that promise portability
and generate useful (and not so useful) warnings, for example:
for (int i=0; i<array_of_10_elements.size(); i++)
is now a warning with a promise of crash (in theory, even if "int" is 64 bit).
"int" and "long" are dead, welcome "size_t", "off64_t" & co.
What to do, what to do? This is what I figured out:
1) for data returned from hardware: use uint16_t, uint32_t, uint64_t, uint128_t (u16, u32, u64 in
the Linux kernel), they have well defined width to match hardware (FPGA, AXI, VME, etc) data
widths.
2) for variables used with strlen(), array.size(), etc: use size_t, a data type wide enough to
store the biggest data size possible on this hardware (32-bit on 32-bit machines, 64-bit on 64-bit
machines). use with printf("%zu").
3) for return values of read() and write() syscalls: use ssize_t and observe an inconsistency,
read() and write() syscalls take size_t (32/64 bits), return ssize_t (31/63 bits) and the error
check code cannot be written without having to defeat the C/C++ type system (a cast to size_t):
size_t s = 100;
void* ptr = malloc(s);
ssize_t rd = read(fd, ptr, s);
if (rd < 0) { syscall error }
else if ((size_t)rd != s) { short read, important for TCP sockets }
else { good read }
use ssize_t with printf("%zd")
4) file access uses off64_t with lseek64() and ftruncate64(), this is a signed type (to avoid the
cast in the error handling code) with max file size 2^63 (at $/GB, storage for a file of max size
costs $$$$$, you cannot have enough money to afford one). use with printf("%jd", (intmax_t)v).
intmax_t by definition is big enough for all off64_t values, "%jd" is the corresponding printf()
format.
5) there is no inconsistency between 32-bit size_t and 64-bit off64_t, on 32-bit systems you can
only read files in small chunks, but you can lseek64() to any place in the file.
BTW, 64-bit time_t has arrived with Ubuntu LTS 24.04, I will write about this some other time. |
24 Jul 2025, Konstantin Olchanski, Suggestion, K.O.'s guide to new C/C++ data types
|
> for (int i=0; i<array_of_10_elements.size(); i++)
becomes
for (size_t i=0; i<array.size(); i++)
but for a reverse loop, replacing "int" with "size_t" becomes a bug:
for (size_t i=array.size()-1; i>=0; i--)
explodes: the last iteration should be with i set to 0, then i-- wraps it around to a very big positive value and the loop end condition is still true (i>=0), so the loop never ends. (why is there no GCC warning that with "size_t i", "i>=0" is always true?)
a kludge solution is:
for (size_t i=array.size()-1; ; i--) {
do_stuff(i, array[i]);
if (i==0) break;
}
if you do not need the index variable, you can use a reverse iterator (which is missing from a few
container classes).
K.O. |
19 Feb 2025, Lukas Gerritzen, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
We have a frontend for slow control with a lot of legacy code. I wanted to add a new equipment using the
mdev_mscb class. It seems like the default write cache size is 10000000 B now, which produces error
messages like this:
12:51:20.154 2025/02/19 [SC Frontend,ERROR] [mfe.cxx:620:register_equipment,ERROR] Write cache size mismatch for buffer "SYSTEM": equipment "Environment" asked for 0, while eqiupment "LED" asked for 10000000
12:51:20.154 2025/02/19 [SC Frontend,ERROR] [mfe.cxx:620:register_equipment,ERROR] Write cache size mismatch for buffer "SYSTEM": equipment "LED" asked for 10000000, while eqiupment "Xenon" asked for 0
I can manually change the write cache size in /Equipment/LED/Common/Write cache size to 0. However, if I delete the LED tree in the ODB, then I get the same problems again. It would be nice if I could either choose the size as 0 in the frontend code, or if the defaults were compatible with our legacy code.
The commit that made the write cache size configurable seems to be from 2019: https://bitbucket.org/tmidas/midas/commits/3619ecc6ba1d29d74c16aa6571e40920018184c0 |
24 Feb 2025, Stefan Ritt, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
The commit that introduced the write cache size check is https://bitbucket.org/tmidas/midas/commits/3619ecc6ba1d29d74c16aa6571e40920018184c0
Unfortunately K.O. added the write cache size to the equipment list, but there is currently no way to change this programmatically from the user frontend code. The options I see are
1) Re-arrange the equipment settings so that the write cache size comes at the end of the list which the user initializes, like
{"Trigger", /* equipment name */
{1, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_POLLED, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING | /* read only when running */
RO_ODB, /* and update ODB */
100, /* poll for 100ms */
0, /* stop run after this event limit */
0, /* number of sub events */
0, /* don't log history */
"", "", "", "", "", 0, 0},
read_trigger_event, /* readout routine */
10000000, /* write cache size */
},
2) Add a function fe_set_write_cache(int size); which goes through the local equipment list and sets the cache size for all equipments to be the same.
I would appreciate some guidance from K.O. who introduced that code above.
/Stefan |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
I think I added the cache size correctly:
{"Trigger", /* equipment name */
{1, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_POLLED, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING | /* read only when running */
RO_ODB, /* and update ODB */
100, /* poll for 100ms */
0, /* stop run after this event limit */
0, /* number of sub events */
0, /* don't log history */
"", "", "", "", "", // frontend_host, name, file_name, status, status_color
0, // hidden
0 // write_cache_size <<--------------------- set this to zero -----------
},
}
K.O. |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
the main purpose of the event buffer write cache is to prevent high contention for the
event buffer shared memory semaphore in the pathological case of very high rate of very
small events.
there is a computation for this, I have posted it here several times, please search for
it.
in a nutshell, you want the semaphore locking rate to be around 10/sec, 100/sec
maximum. coupled with smallest event size and maximum practical rate (1 MHz), this
yields the cache size.
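to make that computation concrete (numbers assumed for illustration only):

#include <cstdio>

int main()
{
   const double event_size = 100;    // bytes, assumed smallest event
   const double event_rate = 1e6;    // Hz, assumed maximum practical rate
   const double cache_size = 10e6;   // bytes, a 10 Mbyte write cache
   // the cache flushes (and locks the buffer semaphore) once per cache_size of data
   const double lock_rate = event_size * event_rate / cache_size;
   printf("semaphore locking rate: %.0f/sec\n", lock_rate);   // ~10/sec
   return 0;
}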
for slow control events generated at 1 Hz, the write cache is not needed,
write_cache_size value 0 is the correct setting.
for "typical" physics events generated at 1 kHz, write cache size should be set to fit
10 events (100 Hz semaphore locking rate) to 100 events (10 Hz semaphore locking rate).
unfortunately, one cannot have two cache sizes for an event buffer, so typical frontends
that generate physics data at 1 kHz and scalers and counters at 1 Hz must have a non-
zero write cache size (or semaphore locking rate will be too high).
the other consideration, we do not want data to sit in the cache "too long", so the
cache is flushed every 1 second or so.
all this cache stuff could be completely removed, deleted. result would be MIDAS that
works ok for small data sizes and rates, but completely falls down at 10 Gige speeds and
rates.
P.S. why is high semaphore locking rate bad? it turns out that UNIX and Linux semaphores
are not "fair", they do not give equal share to all users, and (for example) an event
buffer writer can "capture" the semaphore so the buffer reader (mlogger) will never get
it, a pathologic situation (to help with this, there is also a "read cache"). Read this
discussion: https://stackoverflow.com/questions/17825508/fairness-setting-in-semaphore-
class
K.O. |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> the main purpose of the event buffer write cache
how to control the write cache size:
1) in a frontend, all equipments should ask for the same write cache size, both mfe.c and
tmfe frontends will complain about mismatch
2) tmfe c++ frontend, per tmfe.md, set fEqConfWriteCacheSize in the equipment constructor, in
EqPreInitHandler() or EqInitHandler(), or set it in ODB. default value is 10 Mbytes or value
of MIN_WRITE_CACHE_SIZE define. periodic cache flush period is 0.5 sec in
fFeFlushWriteCachePeriodSec.
3) mfe.cxx frontend, set it in the equipment definition (number after "hidden"), set it in
ODB, or change equipment[i].write_cache_size. Value 0 sets the cache size to
MIN_WRITE_CACHE_SIZE, 10 Mbytes.
4) in bm_set_cache_size(), acceptable values are 0 (disable the cache), MIN_WRITE_CACHE_SIZE
(10 Mbytes) or anything bigger. Attempt to set the cache smaller than 10 Mbytes will set it
to 10 Mbytes and print an error message.
All this is kind of reasonable, as only two settings of write cache size are useful: 0 to
disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event
rate and size values practical on current computers.
In mfe.cxx it looks to be impossible to set the write cache size to 0 (disable it), but
actually all you need to do is call "bm_set_cache_size(equipment[0].buffer_handle, 0, 0);" in
frontend_init() (or is it in begin_of_run()?).
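A minimal sketch of that workaround in a standard mfe.cxx frontend (whether frontend_init() or begin_of_run() is the right place is left open above; frontend_init() is shown here):
#include "midas.h"
#include "mfe.h"
/* the frontend's equipment table, normally defined further down in the
   same frontend.cxx file */
extern EQUIPMENT equipment[];
INT frontend_init()
{
   /* disable the event buffer write cache for the first equipment's buffer */
   bm_set_cache_size(equipment[0].buffer_handle, 0, 0);
   return SUCCESS;
}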
K.O. |
20 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> > the main purpose of the event buffer write cache
> how to control the write cache size:
OP provided insufficient information to say what went wrong for them, but do try this:
1) in ODB, for all equipments, set write_cache_size to 0 (see the odbedit example below)
2) in the frontend equipment table, set write_cache_size to 0
That is how it is done in the example frontend: examples/experiment/frontend.cxx
If this configuration still produces an error, we may have a bug somewhere, so please let us know how it shakes out.
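For item 1, something like the following odbedit command should do it ("MyEquipment" is a placeholder for each equipment name, and the exact key name under /Equipment/<name>/Common is an assumption that may differ between MIDAS versions):
odbedit -c 'set "/Equipment/MyEquipment/Common/Write cache size" 0'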
K.O. |
21 Mar 2025, Stefan Ritt, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> All this is kind of reasonable, as only two settings of write cache size are useful: 0 to
> disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event
> rate and size values practical on current computers.
Indeed KO is correct that only 0 and 10MB make sense, and we cannot mix it. Having the cache setting in the equipment table is
cumbersome. If you have 10 slow control equipment (cache size zero), you need to add many zeros at the end of 10 equipment
definitions in the frontend.
I would rather implement a function or variable similar to fEqConfWriteCacheSize in the tmfe framework also in the mfe.cxx
framework, then we need only to add one line like
gEqConfWriteCacheSize = 0;
in the frontend.cxx file and this will be used for all equipments of that frontend. If nobody complains, I will do that in April when I'm
back from Japan.
Stefan |
25 Mar 2025, Konstantin Olchanski, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> > All this is kind of reasonable, as only two settings of write cache size are useful: 0 to
> > disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event
> > rate and size values practical on current computers.
>
> Indeed KO is correct that only 0 and 10MB make sense, and we cannot mix it. Having the cache setting in the equipment table is
> cumbersome. If you have 10 slow control equipment (cache size zero), you need to add many zeros at the end of 10 equipment
> definitions in the frontend.
>
> I would rather implement a function or variable similar to fEqConfWriteCacheSize in the tmfe framework also in the mfe.cxx
> framework, then we need only to add one line like
>
> gEqConfWriteCacheSize = 0;
>
> in the frontend.cxx file and this will be used for all equipments of that frontend. If nobody complains, I will do that in April when I'm
> back from Japan.
Cache size is per-buffer. If different equipments write into different event buffers, it should be possible to set different cache sizes.
Perhaps have:
set_write_cache_size("SYSTEM", 0);
set_write_cache_size("BUF1", bigsize);
with an internal std::map<std::string,size_t>; for write cache size for each named buffer
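A minimal sketch of what such a helper could look like (illustrative only, not the mfed.cxx implementation that was added later):
#include <map>
#include <string>
#include <cstddef>
/* remembered write cache size per event buffer name */
static std::map<std::string, size_t> gWriteCacheSize;
/* record the desired write cache size for a named buffer; the framework
   would apply it later with bm_set_cache_size() once the buffer is open */
void set_write_cache_size(const std::string& buffer_name, size_t size)
{
   gWriteCacheSize[buffer_name] = size;
}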
K.O. |
21 Jul 2025, Stefan Ritt, Bug Report, Default write cache size for new equipments breaks compatibility with older equipments
|
> Perhaps have:
>
> set_write_cache_size("SYSTEM", 0);
> set_write_cache_size("BUF1", bigsize);
>
> with an internal std::map<std::string,size_t>; for write cache size for each named buffer
Ok, this is implemented now in mfed.cxx and called from examples/experiment/frontend.cxx
Stefan |
13 Jul 2025, Zaher Salman, Info, PySequencer
|
As many of you already know, Ben introduced the new PySequencer that allows running Python scripts from MIDAS. In the last couple of months we have been working on integrating it into the MIDAS pages. We think that it is now ready for general testing.
To use the PySequencer:
1- Enable it from /Experiment/Menu
2- Refresh the pages to see a new PySequencer menu item
3- Click on it to start writing and executing your python script.
The look and feel are identical to the msequencer pages (both use the same JavaScript code).
Please report problems and bugs here.
Known issues:
The first time you start the PySequencer program it may fail. To fix this copy:
$MIDASSYS/python/examples/pysequencer_script_basic.py
to
online/userfiles/sequencer/
and set /PySequencer/State/Filename to pysequencer_script_basic.py
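From a shell, the two fix-up steps above could look roughly like this (paths as given above; this assumes odbedit is in your PATH and that you run the copy from the directory containing online/):
cp $MIDASSYS/python/examples/pysequencer_script_basic.py online/userfiles/sequencer/
odbedit -c "set /PySequencer/State/Filename pysequencer_script_basic.py"
After that, starting the PySequencer program again should work. |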
04 Jul 2025, Mark Grimes, Bug Report, Memory leaks in mhttpd
|
Something changed in our system and we started seeing memory leaks in mhttpd again. I guess someone
updated some front end or custom page code that interacted with mhttpd differently.
I found a few memory leaks in some (presumably) rarely seen corner cases and we now see steady
memory usage. The branch is fix/memory_leaks
(https://bitbucket.org/tmidas/midas/branch/fix/memory_leaks) and I opened pull request #55
(https://bitbucket.org/tmidas/midas/pull-requests/55). I couldn't find a BitBucket account for you
Konstantin to add as a reviewer, so it currently has none.
Thanks,
Mark. |
04 Jun 2025, Mark Grimes, Bug Report, Memory leak in mhttpd binary RPC code
|
Hi,
During an evening of running we noticed that the memory usage of mhttpd grew to close to 100 GB. We think we've traced this to the following issue when making RPC calls.
- The brpc method allocates memory for the response at src/mjsonrpc.cxx#lines-3449.
- It then makes the call at src/mjsonrpc.cxx#lines-3460, which may set `buf_length` to zero if the response was empty.
- It then uses `MJsonNode::MakeArrayBuffer` to pass ownership of the memory to an `MJsonNode`, providing `buf_length` as the size.
- When the `MJsonNode` is destructed at mjson.cxx#lines-657, it only calls `free` on the buffer if the size is greater than zero.
Hence, mhttpd will leak at least 1024 bytes for every binary RPC call that returns an empty response.
I tried to submit a pull request to fix this but I don't have permission to push to https://bitbucket.org/tmidas/mjson.git. Could somebody take a look?
Thanks,
Mark. |
04 Jun 2025, Konstantin Olchanski, Bug Report, Memory leak in mhttpd binary RPC code
|
Noted. I will look at this asap. K.O.
[quote="Mark Grimes"]Hi,
During an evening of running we noticed that memory usage of mhttpd grew to
close to 100Gb. We think we've traced this to the following issue when making
RPC calls.
[LIST]
[*] The brpc method allocates memory for the response at [URL=https://bitbucket.org/tmidas/midas/src/67db8627b9ae381e5e28800dfc4c350c5bd05e3f/src/mjsonrpc.cxx#lines-3449]src/mjsonrpc.cxx#lines-3449[/URL].
[*] It then makes the call at [URL=https://bitbucket.org/tmidas/midas/src/67db8627b9ae381e5e28800dfc4c350c5bd05e3f/src/mjsonrpc.cxx#lines-3460]src/mjsonrpc.cxx#lines-3460[/URL], which may set `buf_length` to zero if the response was empty.
[*] It then uses `MJsonNode::MakeArrayBuffer` to pass ownership of the memory to an `MJsonNode`, providing `buf_length` as the size.
[*] When the `MJsonNode` is destructed at [URL=https://bitbucket.org/tmidas/mjson/src/9d01b3f72722bbf7bcec32ae218fcc0825cc9e7f/mjson.cxx#lines-657]mjson.cxx#lines-657[/URL], it only calls `free` on the buffer if the size is greater than zero.
[/LIST]
Hence, mhttpd will leak at least 1024 bytes for every binary RPC call that
returns an empty response.
I tried to submit a pull request to fix this but I don't have permission to push
to https://bitbucket.org/tmidas/mjson.git. Could somebody take a look?
Thanks,
Mark.[/quote] |
07 Jun 2025, Mark Grimes, Bug Report, Memory leak in mhttpd binary RPC code
|
Hi,
We applied an intermediate fix for this locally and it seems to have fixed our issue. The attached plot shows the percentage memory use on our machine with 128 GB of memory, as a rough proxy for mhttpd memory use. After applying our fix, mhttpd seems to be happy using ~7% of the memory after being up for 2.5 days.
Our fix to mjson was:
diff --git a/mjson.cxx b/mjson.cxx
index 17ee268..2443510 100644
--- a/mjson.cxx
+++ b/mjson.cxx
@@ -654,8 +654,7 @@ MJsonNode::~MJsonNode() // dtor
delete subnodes[i];
subnodes.clear();
- if (arraybuffer_size > 0) {
- assert(arraybuffer_ptr != NULL);
+ if (arraybuffer_ptr != NULL) {
free(arraybuffer_ptr);
arraybuffer_size = 0;
arraybuffer_ptr = NULL;
We also applied the following in midas for good measure, although I don't think it contributed to the leak we were seeing:
diff --git a/src/mjsonrpc.cxx b/src/mjsonrpc.cxx
index 2201d228..38f0b99b 100644
--- a/src/mjsonrpc.cxx
+++ b/src/mjsonrpc.cxx
@@ -3454,6 +3454,7 @@ static MJsonNode* brpc(const MJsonNode* params)
status = cm_connect_client(name.c_str(), &hconn);
if (status != RPC_SUCCESS) {
+ free(buf);
return mjsonrpc_make_result("status", MJsonNode::MakeInt(status));
}
I hope this is useful to someone. As previously mentioned we make heavy use of binary RPC, so maybe other experiments don't run into the same problem.
Thanks,
Mark. |
10 Jun 2025, Konstantin Olchanski, Bug Report, Memory leak in mhttpd binary RPC code
|
I confirm that MJSON_ARRAYBUFFER does not work correctly for zero-size buffers,
buffer is leaked in the destructor and copied as NULL in MJsonNode::Copy().
I also confirm memory leak in mjsonrpc "brpc" error path (already fixed).
Affected by the MJSON_ARRAYBUFFER memory leak are "brpc" (where user code returns
a zero-size data buffer) and "js_read_binary_file" (if reading from an empty
file, return of "new char[0]" is never freed).
"receive_event" and "read_history" RPCs never use zero-size buffers and are not
affected by this bug.
mjson commit c798c1f0a835f6cea3e505a87bbb4a12b701196c
midas commit 576f2216ba2575b8857070ce7397210555f864e5
rootana commit a0d9bb4d8459f1528f0882bced9f2ab778580295
Please post bug reports as plain text so I can quote from them.
K.O. |
15 Jun 2025, Mark Grimes, Bug Report, Memory leak in mhttpd binary RPC code
|
Many thanks for the fix. We've applied it and see better memory performance. We still have to kill and restart
mhttpd after a few days, however. I think the official fix is missing this part:
diff --git a/src/mjsonrpc.cxx b/src/mjsonrpc.cxx
index 2201d228..38f0b99b 100644
--- a/src/mjsonrpc.cxx
+++ b/src/mjsonrpc.cxx
@@ -3454,6 +3454,7 @@ static MJsonNode* brpc(const MJsonNode* params)
status = cm_connect_client(name.c_str(), &hconn);
if (status != RPC_SUCCESS) {
+ free(buf);
return mjsonrpc_make_result("status", MJsonNode::MakeInt(status));
}
When the other process returns a failure, the memory block is also currently leaked. I originally stated "...although I
don't think it contributed to the leak we were seeing" but it seems this was false.
Thanks,
Mark.
> I confirm that MJSON_ARRAYBUFFER does not work correctly for zero-size buffers,
> buffer is leaked in the destructor and copied as NULL in MJsonNode::Copy().
>
> I also confirm memory leak in mjsonrpc "brpc" error path (already fixed).
>
> Affected by the MJSON_ARRAYBUFFER memory leak are "brpc" (where user code returns
> a zero-size data buffer) and "js_read_binary_file" (if reading from an empty
> file, return of "new char[0]" is never freed).
>
> "receive_event" and "read_history" RPCs never use zero-size buffers and are not
> affected by this bug.
>
> mjson commit c798c1f0a835f6cea3e505a87bbb4a12b701196c
> midas commit 576f2216ba2575b8857070ce7397210555f864e5
> rootana commit a0d9bb4d8459f1528f0882bced9f2ab778580295
>
> Please post bug reports as plain text so I can quote from them.
>
> K.O. |
23 Jun 2025, Stefan Ritt, Bug Report, Memory leak in mhttpd binary RPC code
|
Since this memory leak is quite obvious, I pushed the fix to develop.
Stefan |
10 Jun 2025, Nik Berger, Bug Report, History variables with leading spaces
|
By accident we had history variables with leading spaces. The history schema check then decides that this is a new variable (the leading space is not read from the history file) and starts a new file. We found this because the run start became slow due to the many, many history files created. It would be nice to just get an error if one has a malformed variable name like this.
How to reproduce: Try to put a variable with a leading space in the name into the history, repeatedly start runs.
Suggested fix: Produce an error if a history variable has a leading space. |
19 Jun 2025, Stefan Ritt, Bug Report, History variables with leading spaces
|
I have now added code to the logger so it properly complains if there is a leading space in a variable name.
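Something along these lines (an illustration of the kind of check only, not the actual mlogger code):
#include <string>
#include <cstdio>
/* reject history variable names that start with a space */
static bool history_name_ok(const std::string& name)
{
   if (!name.empty() && name[0] == ' ') {
      printf("Error: history variable name \"%s\" has a leading space\n",
             name.c_str());
      return false;
   }
   return true;
}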
Stefan
> By accident we had history variables with leading spaces. The history schema check then decides that this is a new variable (the leading space is not read from the history file) and starts a new file. We found this because the run start became slow due to the many, many history files created. It would be nice to just get an error if one has a malformed variable name like this.
>
> How to reproduce: Try to put a variable with a leading space in the name into the history, repeatedly start runs.
> Suggested fix: Produce an error if a history variable has a leading space. |
19 Jun 2025, Frederik Wauters, Bug Report, add history variables
|
I have encountered this a few times:
* Make a new history panel
* Use the web GUI to add history variables
* When I am at the "add history variables" panel, there is no scroll option. So
depending on the size and zoom of my screen, some variables further down the list
cannot be selected.
Tried Chrome and Firefox. |