| ID | Date | Author | Topic | Subject |
| 3204 | 06 Feb 2026 | Stefan Ritt | Bug Report | omnibus bugs from running DarkLight |
> 5) ODB editor "create link" link target name is limited to 32 bytes, links cannot be created (dl-server-2), ok
> on daq17 with current MIDAS.
Works for me with the current version.
> 6) MIDAS on dl-server-2 is "installed" in such a way that there is no connection to the git repository, no way
> to tell what git checkout it corresponds to. Help page just says "branch master", git-revision.h is empty. We
> should discourage such use of MIDAS and promote our "normal way" where for all MIDAS binary programs we know
> what source code and what git commit was used to build them.
Not sure if you have seen it: I made an "install" script to clone, compile and install midas. Some people use it already, so maybe give it a shot. It might need
adjustment for different systems; I certainly haven't covered all corner cases. But on a Raspberry Pi it's then just one command to install midas, modify
the environment, install mhttpd as a service and load the ODB defaults. I know that some people want it "their way", and that's ok, but for the novice user
it might be a good starting point. It's documented here: https://daq00.triumf.ca/MidasWiki/index.php/Install_Script
The install script is plain shell, so it should be easy to understand.
> 6a) MIDAS on dl-server-2 had a pretty much non-functional history display, I reported it here, Stefan provided
> a fix, I manually retrofitted it into dl-server-2 MIDAS and we were able to run the experiment. (good)
>
> 6b) bug (5) suggests that there are more bugs being introduced and fixed without any notice to other midas
> users (via this forum or via the bitbucket bug tracker).
If I were to notify everybody about a new bug I introduced, I would know it was a bug and would not have introduced it ;-)
For all the fixes, I encourage people to check the commit log. Posting an elog entry for every bug fix would be considered spam by many people, because
that could mean many emails per week. The commit log is here: https://bitbucket.org/tmidas/midas/commits/branch/develop
If somebody volunteers to consolidate all commits into a monthly digest to be posted here, I'm all in favor, but I'm not that individual.
Stefan |
| 3205 | 12 Feb 2026 | Stefan Ritt | Bug Report | omnibus bugs from running DarkLight |
Now I had a similar case where the browser froze when showing 24h of data. Turned out that 80k points are a bit much. I changed the code so that it starts binning when showing 8h or more. This is not a perfect solution: the code should check at which interval the data is written, then
automatically start binning when approaching 4000 points or more. That would however require more complicated code, so I leave it as it is right now. Feedback welcome.
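(Illustrative arithmetic based on the numbers above: 80k points over 24 h is roughly one point per second, so an 8 h window holds about 28.8k points; binning such a window down to ~4000 points would mean one bin per roughly 7 s of data.)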
Stefan |
| 3210 | 23 Apr 2026 | Pavel Murat | Bug Report | increasing the max number of hot links in ODB |
Dear MIDAS experts,
when I attempted to increase the max number of hotlinks in ODB, defined as
#define MAX_OPEN_RECORDS 256 /**< number of open DB records */
I started running into an assertion in midas/src/odb.cxx
https://bitbucket.org/tmidas/midas/src/fa5457b5274a6b42c5ed8b6dea5e3cdd43de38fe/src/odb.cxx#lines-1525 :
assert(sizeof(DATABASE_CLIENT) == 2112);
is it possible that the size of the DATABASE_CLIENT structure should be checked against 64+sizeof(OPEN_RECORD)*MAX_OPEN_RECORDS?
- the constant 64 could clearly be expressed in a more maintainable form
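A minimal sketch of such a computed check (assuming, as the arithmetic 2112 = 64 + 8*256 suggests, that the fixed part of DATABASE_CLIENT occupies 64 bytes on this platform):

/* hypothetical computed form of the size check in db_validate_sizes() */
assert(sizeof(DATABASE_CLIENT) == 64 + sizeof(OPEN_RECORD) * MAX_OPEN_RECORDS);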
UPDATE: a similar consideration holds for the size of the DATABASE_HEADER structure, which is also checked against a constant:
https://bitbucket.org/tmidas/midas/src/fa5457b5274a6b42c5ed8b6dea5e3cdd43de38fe/src/odb.cxx#lines-1526
-- many thanks, regards, Pavel
| Draft | 23 Apr 2026 | Nick Hastings | Bug Report | increasing the max number of hot links in ODB |
> Dear MIDAS experts,
>
> when I attempted to increase the max number of hotlinks in ODB, defined as
>
> #define MAX_OPEN_RECORDS 256 /**< number of open DB records */
>
> I started running into an assertion in midas/src/odb.cxx
>
> https://bitbucket.org/tmidas/midas/src/fa5457b5274a6b42c5ed8b6dea5e3cdd43de38fe/src/odb.cxx#lines-1525 :
>
> assert(sizeof(DATABASE_CLIENT) == 2112);
>
> is it possible that the size of the DATABASE_CLIENT structure should be checked against 64+sizeof(OPEN_RECORD)*MAX_OPEN_RECORDS?
> - the constant 64 could clearly be expressed in a more maintainable form
Yes, this assert needs to be updated if you increase MAX_OPEN_RECORDS. See
https://daq00.triumf.ca/MidasWiki/index.php/FAQ#Increasing_Number_of_Hot-links
> UPDATE: a similar consideration holds for the size of the DATABASE_HEADER structure, which is also checked against a constant
>
> https://bitbucket.org/tmidas/midas/src/fa5457b5274a6b42c5ed8b6dea5e3cdd43de38fe/src/odb.cxx#lines-1526
Yes, DATABASE_HEADER can also be updated, but the associated assert()s need to be updated too.
FYI I have done both of these things and have attached patches.
Cheers,
Nick. |
| Attachment 1: 0001-Increase-MAX_OPEN_RECORDS-to-increase-the-max-number.patch |
From 6ac1ad23e1e0c8fcfdbb25844ace9de3ab7a993b Mon Sep 17 00:00:00 2001
From: Nick Hastings <hastings@post.kek.jp>
Date: Tue, 19 Sep 2023 08:57:11 +0900
Subject: [PATCH 1/2] Increase MAX_OPEN_RECORDS to increase the max number of
hot_links
In 2011 the number of hotlinks for the gsc was increased from the
default 256 to 2560. See https://elog.nd280.org/elog/GSC/425. Since we
are getting more equipment, increase to 4096. General instructions
can be found on the midas wiki at
https://daq00.triumf.ca/MidasWiki/index.php/FAQ#Increasing_Number_of_Hot-links
When increasing this number the size of two structs will also
increase. Midas checks that the sizes of these structs are correct, so
the check sizes are updated accordingly.
DATABASE_CLIENT = 64 + 8*MAX_OPEN_RECORDS
default: = 64 + 8*256
= 2112 -> Confirmed in odb.cxx
updated: = 64 + 8*4096
= 32832
DATABASE_HEADER = 64 + 64*DATABASE_CLIENT
default: = 64 + 64*2112
= 135232 -> Confirmed in odb.cxx
updated: = 64 + 64*32832
= 2101312
---
include/midas.h | 2 +-
src/odb.cxx | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/midas.h b/include/midas.h
index ec6b54a1..46d653f4 100644
--- a/include/midas.h
+++ b/include/midas.h
@@ -273,7 +273,7 @@ class MJsonNode; // forward declaration from mjson.h
#define HOST_NAME_LENGTH 256 /**< length of TCP/IP names */
#define MAX_CLIENTS 64 /**< client processes per buf/db */
#define MAX_EVENT_REQUESTS 10 /**< event requests per client */
-#define MAX_OPEN_RECORDS 256 /**< number of open DB records */
+#define MAX_OPEN_RECORDS 4096 /**< number of open DB records */
#define MAX_ODB_PATH 256 /**< length of path in ODB */
#define BANKLIST_MAX 4096 /**< max # of banks in event */
#define STRING_BANKLIST_MAX BANKLIST_MAX * 4 /**< for bk_list() */
diff --git a/src/odb.cxx b/src/odb.cxx
index e4e2bd35..bc084612 100644
--- a/src/odb.cxx
+++ b/src/odb.cxx
@@ -1477,8 +1477,8 @@ static void db_validate_sizes()
assert(sizeof(KEY) == 68);
assert(sizeof(KEYLIST) == 12);
assert(sizeof(OPEN_RECORD) == 8);
- assert(sizeof(DATABASE_CLIENT) == 2112);
- assert(sizeof(DATABASE_HEADER) == 135232);
+ assert(sizeof(DATABASE_CLIENT) == 32832);
+ assert(sizeof(DATABASE_HEADER) == 2101312);
assert(sizeof(EVENT_HEADER) == 16);
//assert(sizeof(EQUIPMENT_INFO) == 696); has been moved to dynamic checking inside mhttpd.c
assert(sizeof(EQUIPMENT_STATS) == 24);
--
2.47.3
| Attachment 2: 0002-Increase-MAX_CLIENTS-from-64-to-256.patch |
From d0b0c5f316944a3133a26cc5340ec293f719b500 Mon Sep 17 00:00:00 2001
From: Nick Hastings <hastings@post.kek.jp>
Date: Tue, 19 Sep 2023 09:43:04 +0900
Subject: [PATCH 2/2] Increase MAX_CLIENTS from 64 to 256
In 2012 MAX_CLIENTS was increased from 64 to 128 for the t2kgsc. This
allowed double the number of frontends or clients. See elog entry at
https://elog.nd280.org/elog/GSC/808
For the new gsc this is further increased to 256. This necessitates
updating the size checks of the BUFFER_HEADER and DATABASE_HEADER structs.
BUFFER_HEADER = 32 + 7*4 + 256*MAX_CLIENTS
current: = 32 + 7*4 + 256*64
= 16444 -> Confirmed in odb.cxx
updated: = 32 + 7*4 + 256*256
= 65596
DATABASE_HEADER = 32 + 8*4 + 32832*MAX_CLIENTS
current: = 32 + 8*4 + 32832*64
= 2101312 -> Confirmed in odb.cxx
updated: = 32 + 8*4 + 32832*256
= 8405056
N.B. When the corresponding change was made in 2012 the value of
MAX_RPC_CONNECTION was increased from 64 to 96. No equivalent change
is made now since it was removed from midas in 2021 commit 9c93bc7f
"RPC_SERVER_ACCEPTION cleanup".
---
include/midas.h | 2 +-
src/odb.cxx | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/midas.h b/include/midas.h
index 46d653f4..43ca476e 100644
--- a/include/midas.h
+++ b/include/midas.h
@@ -271,7 +271,7 @@ class MJsonNode; // forward declaration from mjson.h
#define NAME_LENGTH 32 /**< length of names, mult.of 8! */
#define HOST_NAME_LENGTH 256 /**< length of TCP/IP names */
-#define MAX_CLIENTS 64 /**< client processes per buf/db */
+#define MAX_CLIENTS 256 /**< client processes per buf/db */
#define MAX_EVENT_REQUESTS 10 /**< event requests per client */
#define MAX_OPEN_RECORDS 4096 /**< number of open DB records */
#define MAX_ODB_PATH 256 /**< length of path in ODB */
diff --git a/src/odb.cxx b/src/odb.cxx
index bc084612..b94cef29 100644
--- a/src/odb.cxx
+++ b/src/odb.cxx
@@ -1469,7 +1469,7 @@ static void db_validate_sizes()
#ifdef OS_LINUX
assert(sizeof(EVENT_REQUEST) == 16); // ODB v3
assert(sizeof(BUFFER_CLIENT) == 256);
- assert(sizeof(BUFFER_HEADER) == 16444);
+ assert(sizeof(BUFFER_HEADER) == 65596);
assert(sizeof(HIST_RECORD) == 20);
assert(sizeof(DEF_RECORD) == 40);
assert(sizeof(INDEX_RECORD) == 12);
@@ -1478,7 +1478,7 @@ static void db_validate_sizes()
assert(sizeof(KEYLIST) == 12);
assert(sizeof(OPEN_RECORD) == 8);
assert(sizeof(DATABASE_CLIENT) == 32832);
- assert(sizeof(DATABASE_HEADER) == 2101312);
+ assert(sizeof(DATABASE_HEADER) == 8405056);
assert(sizeof(EVENT_HEADER) == 16);
//assert(sizeof(EQUIPMENT_INFO) == 696); has been moved to dynamic checking inside mhttpd.c
assert(sizeof(EQUIPMENT_STATS) == 24);
--
2.47.3
| 3212 | 24 Apr 2026 | Konstantin Olchanski | Bug Report | increasing the max number of hot links in ODB |
> when I attempted to increase the max number of hotlinks in ODB, defined as
> #define MAX_OPEN_RECORDS 256 /**< number of open DB records */
> assert(sizeof(DATABASE_CLIENT) == 2112);
Yes, it is intended to work like this. If you change MAX_OPEN_RECORDS (and some other settings),
you break binary compatibility with standard MIDAS, and the asserts inform you about it.
It is not a light step to take: you have to recompile all MIDAS clients, and if you miss
one and run it against your non-standard MIDAS, everything will go kaboom;
there is no safety net against this.
In the ALPHA experiment at CERN, we have been running for years with MAX_OPEN_RECORDS set to 2560,
and it works. You have to change both MAX_OPEN_RECORDS in midas.h and the expected values
in the assert() statements.
You do not need to guess or compute the new correct values yourself; the code to print
them is right there and it is easy to enable.
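For example, something along these lines prints the values to plug into the asserts (a sketch; the actual printout code in db_validate_sizes() may look different):

printf("sizeof(DATABASE_CLIENT) = %d\n", (int) sizeof(DATABASE_CLIENT));
printf("sizeof(DATABASE_HEADER) = %d\n", (int) sizeof(DATABASE_HEADER));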
Replacing the numeric constants with computed values would of course completely defeat
the purpose of the tests, which is to catch the situation where, by mistake or by ignorance
(or by miscompilation), the sizes of critical data structures become different from those
normally expected.
K.O. |
| 3213 | 25 Apr 2026 | Pavel Murat | Bug Report | increasing the max number of hot links in ODB |
I see - thank you for the explanation!
Indeed, updating MIDAS clients on each and every RPi etc. in a running experiment may be a real challenge.
Thinking forward: would it help if the ODB clients, upon initial connection but before doing anything else,
read the ODB parameters from the ODB itself, so that the clients "learned" about the ODB structure
dynamically, at run time? Or does that knowledge have to be static?
-- thanks, regards, Pavel |
| 3214 | 26 Apr 2026 | Stefan Ritt | Bug Report | increasing the max number of hot links in ODB |
I wonder why one needs more than 256 hotlinks at all. Please note that with the odbxx "watch" API, you can hotlink a whole subdirectory, and get notified if ANY of the
underlying values or subdirectories change. In principle, one could have one hotlink to "/" and see all changes in the ODB (although that does not make sense and might slow
down ODB access a bit).
Try the odbxx_test.cpp example in MIDAS. In line 210 it puts a single hotlink on /Experiment. If you change anything under /Experiment, the program gets notified. By checking the
path of the changed ODB entry, it can figure out which of the subkeys have been changed:
// watch ODB key for any change with lambda function
midas::odb ow("/Experiment");
ow.watch([](midas::odb &o) {
   std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
});
Maybe that would solve your problem without having to change the maximum number of hotlinks.
Stefan |
| 3215 | 27 Apr 2026 | Konstantin Olchanski | Bug Report | increasing the max number of hot links in ODB |
> Indeed, updating MIDAS clients on each and every RPI etc in a running experiment may be a real challenge.
actually, only local clients must be rebuilt, remote clients connecting to the mserver do not care about ODB
internal structure.
> Thinking forward - would it help if the ODB clients, upon initial connection but before doing anything else
> were reading the ODB parameters from the ODB itself, so the clients were "learning" about the ODB structure
> dynamically, at run time? Or that knowledge has to be static ?
unfortunately, the "open records" structure is allocated at compile time inside the ODB header,
so any change to it would break binary compatibility.
I think it is possible to allocate "space for additional open records" in the ODB data area
and have the ODB open records code use it in addition to the compile-time allocated
space in the database header. This would also work for extending MAX_CLIENTS.
Of course in this approach, old midas clients would see only the clients and open records
in the database header, new midas clients would see the additional data.
It is not super hard to add this code...
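A rough sketch of the shape such an extension could take (all names here are invented for illustration; this is not existing MIDAS code):

/* hypothetical extension descriptor stored in the database header;
   old clients ignore it, new clients follow the offset into the ODB
   data area where the extra OPEN_RECORD slots live */
typedef struct {
   INT extra_open_records;        /* capacity of the extension area    */
   INT extra_open_records_offset; /* offset into ODB data area, 0=none */
} OPEN_RECORD_EXT;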
K.O. |
| 3216 | 27 Apr 2026 | Konstantin Olchanski | Bug Report | increasing the max number of hot links in ODB |
> I wonder why one needs more than 256 hotlinks at all.
I confirm that ALPHA is running with MAX_OPEN_RECORDS changed from 256 to 2048;
this is the only experiment I know of that had to increase any MIDAS ODB defaults.
The reason for this is mlogger: it opens an open record for each variable in each equipment.
This should be changed to one db_watch per equipment. We talked about it, but I guess we never did it.
I think this task just went almost to the top of my MIDAS to-do list.
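A minimal sketch of that change, assuming the standard db_watch() API (the path and callback here are illustrative, not mlogger's actual code):

/* one watch on the whole Variables subtree of an equipment,
   instead of one open record per variable */
void variables_changed(INT hDB, INT hKey, INT index, void *info)
{
   /* re-read and log this equipment's variables here */
}

HNDLE hKey;
db_find_key(hDB, 0, "/Equipment/Trigger/Variables", &hKey);
db_watch(hDB, hKey, variables_changed, NULL);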
K.O. |
| 3217 | 27 Apr 2026 | Nick Hastings | Bug Report | increasing the max number of hot links in ODB |
For the record, the ND280 (T2K near detector) MIDAS GSC was initially set up
with MAX_OPEN_RECORDS = 2560 and MAX_CLIENTS = 128.
In 2023 one fairly simple part of the detector was replaced with several
other more complex systems (many more midas frontends, equipments, and
variables being logged) so we updated MAX_OPEN_RECORDS = 4096 and
MAX_CLIENTS = 256.
Nick. |
| 3218 | 27 Apr 2026 | Pavel Murat | Bug Report | increasing the max number of hot links in ODB |
> I wonder why one needs more than 256 hotlinks at all. Please note that with the odbxx "watch" API, you can hotlink a whole subdirectory, and get notified if ANY of the
> underlying values or subdirectories change. In principle, one could have one hotlink to "/" and see all changes in the ODB (although that does not make sense and might slow
> down ODB access a bit).
Thanks! - I didn't know that. I did run into the hotlink limit via mlogger, which complained about not being able to create a hotlink
to yet another event. Doubling the default value of MAX_OPEN_RECORDS solved the problem.
I don't know the exact arithmetic defining the number of hotlinks in the system, but today's case is:
- 36 (Linux server) + 18 (RPi) monitoring frontends, each managing one or several different equipment items.
- Each equipment item sends at least one monitoring event to the ODB.
- In addition, each frontend creates an individual hotlink for handling interactive commands.
- For MAX_OPEN_RECORDS=256, 4 equipment items per frontend easily make it into the dangerous zone.
"Equipment items" also include the online processes running on the distributed computing farm processing the data
(we are not using MIDAS event-building capabilities).
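(Putting the numbers above together: 36 + 18 = 54 frontends, and 4 equipment records plus one command hotlink per frontend already gives 54 * 5 = 270 open records, past the default of 256.)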
>
> Try the odbxx_test.cpp example in MIDAS. In line 210 it puts a single hotlink to /Experiment. If you change anything under /Experiment, the program gets notified. By checking the
> path of the changed ODB entry, it can figure out which of the subkeys have been changed:
>
> // watch ODB key for any change with lambda function
> midas::odb ow("/Experiment");
> ow.watch([](midas::odb &o) {
>    std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
> });
>
>
> Maybe that would solve your problem without having to change the maximum number of hotlinks.
I'll see how much mileage one can get here, but so far it looks like it is the number of various monitoring events
handled by the mlogger which drives the number of hotlinks.
-- thanks, regards, Pavel |
| 3219 | 27 Apr 2026 | Pavel Murat | Bug Report | increasing the max number of hot links in ODB |
> > I wonder why one needs more than 256 hotlinks at all.
>
> I confirm that ALPHA is running with MAX_OPEN_RECORDS changed from 256 to 2048,
> this is the only experiment I know of that had to increase any MIDAS ODB defaults.
>
> The reason for this is mlogger, it opens an open record for each variable in each equipment.
>
> This should be changed to 1 db_watch per equipment. We talked about it, but I guess we never did it.
>
> I think this task just went almost to the top of my MIDAS to-do list.
I definitely had many more than 256 variables successfully monitored with MAX_OPEN_RECORDS=256.
Is it possible that mlogger creates a hotlink per monitoring event, not per variable?
I think that would make more sense in almost any scenario...
-- thanks, regards, Pavel |
| 3220 | 27 Apr 2026 | Pavel Murat | Bug Report | increasing the max number of hot links in ODB |
> > Indeed, updating MIDAS clients on each and every RPI etc in a running experiment may be a real challenge.
>
> actually, only local clients must be rebuilt, remote clients connecting to the mserver do not care about ODB
> internal structure.
thanks! I see - local clients know about the memory mapping, remote ones don't.
> unfortunately, the "open records" structure is allocated at compile time inside the ODB header,
> so any change to it would break binary compatibility.
right - I guess what I had in mind would require the very first fODB record to be a format descriptor,
and that would be a breaking change... Anyway, the practical part of the problem is addressed,
so I'll just add a link here that contains an answer to the original posting (I found it only after the fact):
https://daq00.triumf.ca/MidasWiki/index.php/FAQ#Increasing_Number_of_Hot-links
-- thanks again, regards, Pavel |
| 170 | 22 Oct 2004 | Konstantin Olchanski | Bug Fix | mhttpd message colouring |
I committed a fix to the mhttpd logic that decides which messages should be shown in
"red" colour. Before, any message with square brackets and colons would be
highlighted in red; now only messages matching the pattern [...:...] are
highlighted. The decision logic was moved into a function message_red().
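A sketch of the matching rule described above (an illustration, not mhttpd's actual code):

#include <string.h>

/* highlight only messages containing a "[...:...]" pattern */
static int message_red(const char *msg)
{
   const char *lb = strchr(msg, '[');
   const char *co = lb ? strchr(lb, ':') : NULL;
   const char *rb = co ? strchr(co, ']') : NULL;
   return rb != NULL; /* '[' ... ':' ... ']' all present, in this order */
}

K.O. |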
| 174 | 09 Nov 2004 | Pierre-Andre Amaudruz | Bug Fix | New transition scheme |
Problem:
If cm_set_transition_sequence() is used to change the sequence number, the
odbedit commands start/stop/resume/pause -v report the proper sequence, but the
action on the client side is actually not performed!
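For reference, the call in question looks like this (a sketch using the standard API):

/* ask midas to run this client's start transition at sequence number 300 */
cm_set_transition_sequence(TR_START, 300);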
Fix:
Local transition table updated in midas.c (1.226)
Note:
The transition number under /system/clients/<pid>/transition...
is used internally. Changing it won't have any effect on the client action
if the sequence number is not registered. |
| 200 | 25 Feb 2005 | Konstantin Olchanski | Bug Fix | fixed: double free in FORMAT_MIDAS ybos.c causing lazylogger crashes |
We stumbled upon and fixed a "double free" bug in src/ybos.c causing crashes in
lazylogger when writing .mid files in the FORMAT_MIDAS format (why does it use
ybos.c? Pierre says: for generic file I/O). Why this code had ever worked before
remains a mystery. K.O. |
| 211 | 05 May 2005 | Konstantin Olchanski | Bug Fix | fix: minor bit rot in the example experiment |
I fixed some minor bit rot in the example experiment: a few minor Makefile
problems, making the analyzer use the current histogram-creation macros, etc. I
also added startup and shutdown scripts. These will be documented as we work
through them with our summer student. K.O. |
| 212 | 02 Aug 2005 | Konstantin Olchanski | Bug Fix | fix odb corruption when running analyzer for the first time |
I have been plagued by ODB corruption when I run the analyzer for the first time
after setting up a new experiment. Some time ago, I traced this to
mana.c::book_ttree(), and now I found and fixed the bug; the fix is committed to
midas cvs. In book_ttree(), db_find("/Analyzer/Bank switches") was returning an
error and setting hkey to zero. Then we called db_open_record() with hkey==0,
which caused ODB corruption later on. The normal db_validate_hkey() did not catch
this because it considers hkey==0 to be valid (when most likely it is not).
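A sketch of the failure mode and the fix (illustrative, not mana.c verbatim):

/* before the fix, hkey was handed to db_open_record() even when the lookup failed */
HNDLE hkey = 0;
INT status = db_find_key(hDB, 0, "/Analyzer/Bank switches", &hkey);
if (status != DB_SUCCESS || hkey == 0) {
   cm_msg(MERROR, "book_ttree", "Cannot find /Analyzer/Bank switches");
   return;
}
/* only now is hkey safe to pass to db_open_record() */

K.O. |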
| 216 | 18 Aug 2005 | Konstantin Olchanski | Bug Fix | fix race condition between clients on run start/stop, pause/resume |
It turns out that the new priority sequencing of run state transitions had a
flaw: the frontends, the analyzer and the logger all registered at priority 500
and were invoked in an essentially random order. For example, the frontend could
get a begin-run transition before the logger and so start sending data before
the logger opened the output file. Same for the analyzer, and same for the end of
run. Also, the sequencing for pause/resume and begin/end run was different,
when the two pairs ought to have identical sequencing.
I now committed changes to mana.c and mlogger.c changing their transition sequencing:
start and resume:
200 - logger (mlogger.c, no change)
300 - analyzer (mana.c, was 500)
500 - frontends (mfe.c, no change)
stop and pause:
500 - frontends (mfe.c, no change)
700 - analyzer (mana.c, was 500)
800 - mlogger (mlogger.c, was 500)
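For reference, this is the kind of registration involved (a sketch; the handler names are illustrative):

/* each client registers its transition handlers with a sequence number */
INT tr_start(INT run_number, char *error) { return CM_SUCCESS; }
INT tr_stop(INT run_number, char *error)  { return CM_SUCCESS; }

cm_register_transition(TR_START, tr_start, 300); /* analyzer: after logger (200), before frontends (500) */
cm_register_transition(TR_STOP,  tr_stop,  700); /* analyzer: after frontends (500), before logger (800) */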
P.S. However, even after this change, the TRIUMF ISAC/Dragon experiment still
sees an anomaly in the analyzer, where it receives data events after the
end-of-run transition.
K.O. |
| 219 | 01 Sep 2005 | Stefan Ritt | Bug Fix | fix race condition between clients on run start/stop, pause/resume |
> It turns out that the new priority sequencing of run state transitions had a
> flaw: the frontends, the analyzer and the logger all registered at priority 500
> and were invoked in essentially a random order. For example the frontend could
> get a begin-run transition before the logger and so start sending data before
> the logger opened the output file. Same for the analyzer and same for the end of
> run. Also the sequencing for pause/resume run and begin/end run was different
> when the two pairs ought to have identical sequencing.
>
> I now committed changes to mana.c and mlogger.c changing their transition sequencing:
>
> start and resume:
> 200 - logger (mlogger.c, no change)
> 300 - analyzer (mana.c, was 500)
> 500 - frontends (mfe.c, no change)
>
> stop and pause:
> 500 - frontends (mfe.c, no change)
> 700 - analyzer (mana.c, was 500)
> 800 - mlogger (mlogger.c, was 500)
>
> P.S. However, even after this change, the TRIUMF ISAC/Dragon experiment still
> see an anomaly in the analyzer, where it receives data events after the
> end-of-run transition.
>
> K.O.
Thanks for fixing that bug. It happened because during the implementation of the priority
sequencing we gave up the pre/post transition, which took care of the proper sequencing
between the logger, frontend and analyzer. The way you modified the sequence is
absolutely correct. It is important to have >10 numbers "around" the frontends (like
450...550) in case one has an experiment with >10 frontends which need to make a
transition in a certain sequence (like the DANCE experiment in Los Alamos). |