Midas DAQ System, Page 2 of 135
ID      Date          Author                  Topic        Subject
  2706   11 Feb 2024   Pavel Murat   Forum   number of entries in a given ODB subdirectory ?
> For ODB keys of type TID_KEY, the value num_values IS the number of subkeys. 

this logic makes sense; however, it doesn't seem to be consistent with the printout of the test example
at the end of https://daq00.triumf.ca/elog-midas/Midas/240203_095803/a.cc . The printout reports

key.num_values = 1, while the actual number of subkeys is 6, and all subkeys are of type TID_KEY.

I'm certain that the ODB subtree in question was not accessed concurrently during the test.

-- regards, Pasha
  2705   08 Feb 2024   Stefan Ritt   Forum   number of entries in a given ODB subdirectory ?
> Konstantin is right: KEY.num_values is not the same as the number of subkeys (should it be ?)

For ODB keys of type TID_KEY, the value num_values IS the number of subkeys. The only issue here is 
what KO mentioned already. If you obtain num_values and start iterating, someone else might 
change the number of subkeys in the meantime, and then your (old) num_values is off. Therefore it's always good to 
check the return status of all subkey accesses. To do a truly atomic access to a subtree, you need 
db_copy(), but then you have to parse the JSON yourself, and again you have no guarantee that the 
ODB hasn't changed in the meantime.
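
A minimal sketch of that "check every return status" pattern (not from the original post; it only uses
the db_enum_key()/db_get_key() calls shown in the code attached elsewhere in this thread, and assumes
hDB/hKey are a valid database handle and the key of the subtree being iterated):

   HNDLE hSubkey;
   KEY   subkey;
   for (int i=0; ; i++) {
      int status = db_enum_key(hDB, hKey, i, &hSubkey);
      if (status != DB_SUCCESS)
         break;          // DB_NO_MORE_SUBKEYS or an error: stop iterating
      if (db_get_key(hDB, hSubkey, &subkey) != DB_SUCCESS)
         continue;       // the entry vanished between db_enum_key() and db_get_key(): skip it
      /* ... use subkey.name, subkey.type ... */
   }

Any subkey that disappears between the two calls is simply skipped rather than treated as an error.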

Stefan
  2704   05 Feb 2024   Pavel Murat   Forum   forbidden equipment names ?
Dear MIDAS experts,

I have multiple DAQ nodes, each with two data-receiving FPGAs on the PCIe bus. 
The FPGAs come under the names DTC0 and DTC1. Both FPGAs are managed by the same slow control frontend. 
To distinguish the FPGAs of different nodes from each other, I included the hostname in the equipment name, 
so for node=mu2edaq09 the FPGA names are 'mu2edaq09:DTC0' and 'mu2edaq09:DTC1'. 

The history system didn't like the names, complaining that 

21:26:06.334 2024/02/05 [Logger,ERROR] [mlogger.cxx:5142:open_history,ERROR] Equipment name 'mu2edaq09:DTC1' 
contains characters ':', this may break the history system

So the question is: what are the safe equipment/driver naming rules, and what characters 
are not allowed in them? I think this is worth documenting, and the current MIDAS docs at 
 
https://daq00.triumf.ca/MidasWiki/index.php/Equipment_List_Parameters#Equipment_Name 

don't say much about it.

-- many thanks, regards, Pasha
  2703   05 Feb 2024   Ben Smith   Bug Fix   string --> int64 conversion in the python interface ?
> The symptoms are consistent with a string --> int64 conversion not happening 
> where it is needed. 

Thanks for the report Pasha. Indeed I was missing a conversion in one place. Fixed now!

Ben
  2702   03 Feb 2024   Pavel Murat   Bug Report   string --> int64 conversion in the python interface ?
Dear MIDAS experts,

I gave the MIDAS python interface a try and ran all tests available in midas/python/tests.
Two Int64 tests from test_odb.py failed (see below); everything else succeeded.

I'm using a ~2.5-week-old commit and python 3.9 on the SL7 Linux platform.

commit c19b4e696400ee437d8790b7d3819051f66da62d (HEAD -> develop, origin/develop, origin/HEAD)
Author: Zaher Salman <zaher.salman@gmail.com>
Date:   Sun Jan 14 13:18:48 2024 +0100

The symptoms are consistent with a string --> int64 conversion not happening 
where it is needed. 

Perhaps the issue has already been fixed? 

-- many thanks, regards, Pasha
-------------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 178, in testInt64
    self.set_and_readback_from_parent_dir("/pytest", "int64_2", [123, 40000000000000000], midas.TID_INT64, True)
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 130, in set_and_readback_from_parent_dir
    self.validate_readback(value, retval[key_name], expected_key_type)
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 87, in validate_readback
    self.assert_equal(val, retval[i], expected_key_type)
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 60, in assert_equal
    self.assertEqual(val1, val2)
AssertionError: 123 != '123'

with the test on line 178 commented out, the test on the next line fails in a similar way:
        
Traceback (most recent call last):
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 179, in testInt64
    self.set_and_readback_from_parent_dir("/pytest", "int64_2", 37, midas.TID_INT64, True)
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 130, in set_and_readback_from_parent_dir
    self.validate_readback(value, retval[key_name], expected_key_type)
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 102, in validate_readback
    self.assert_equal(value, retval, expected_key_type)
  File "/home/mu2etrk/test_stand/pasha_020/midas/python/tests/test_odb.py", line 60, in assert_equal
    self.assertEqual(val1, val2)
AssertionError: 37 != '37'
---------------------------------------------------------------------------
  2701   03 Feb 2024   Pavel Murat   Forum   number of entries in a given ODB subdirectory ?
Konstantin is right: KEY.num_values is not the same as the number of subkeys (should it be ?)
For those looking for an example in the future, I attach a working piece of code converted 
from the ChatGPT example, together with its printout.

-- regards, Pasha
Attachment 1: a.cc
#include <stdio.h>
#include <stdlib.h>
#include "midas.h"

int main(int argc, char **argv) {
  HNDLE hDB, hKey;
  INT   status, num_subkeys;
  KEY   key;

  cm_connect_experiment     (NULL, NULL, "Example", NULL);
  cm_get_experiment_database(&hDB, NULL);

  char dir[] = "/ArtdaqConfigurations/demo/mu2edaq09.fnal.gov";

  status = db_find_key(hDB, 0, dir , &hKey);
  if (status != DB_SUCCESS) {
    printf("Error: Cannot find the ODB directory\n");
    return 1;
  }
//-----------------------------------------------------------------------------
// Iterate over all subkeys in the directory
// note: key.num_values is NOT the number of subkeys in the directory
//-----------------------------------------------------------------------------
  db_get_key(hDB, hKey, &key);
  printf("key.num_values: %d\n",key.num_values);

  HNDLE hSubkey;
  KEY   subkey;
  num_subkeys = 0;
  for (int i=0; db_enum_key(hDB, hKey, i, &hSubkey) != DB_NO_MORE_SUBKEYS; ++i) {
    db_get_key(hDB, hSubkey, &subkey);
    printf("Subkey %d: %s, Type: %d\n", i, subkey.name, subkey.type);
    num_subkeys++;
  }

  printf("number of subkeys: %d\n",num_subkeys);

  // Disconnect from MIDAS
  cm_disconnect_experiment();
  return 0;
}
------------------------------------------------------ output:
mu2etrk@mu2edaq09:~/test_stand>test_001
key.num_values: 1
Subkey 0: BoardReader_01, Type: 15
Subkey 1: BoardReader_02, Type: 15
Subkey 2: EventBuilder_01, Type: 15
Subkey 3: EventBuilder_02, Type: 15
Subkey 4: DataLogger_01, Type: 15
Subkey 5: Dispatcher_01, Type: 15
number of subkeys: 6
---------------------------------------------------------------
  2699   29 Jan 2024   Konstantin Olchanski   Forum   a scroll option for "add history variables" window?
familiar situation, "too much data", you dice it or slice it, still too much. BTW, you can try to generate history 
plot ODB entries from your program instead of from the history plot editor (see the sketch below). K.O.
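
A hedged sketch of what such programmatic creation could look like: history plot panels normally live
under /History/Display/<group>/<panel> in ODB, and the safest way to get the exact set of keys right is
to create one panel in the history plot editor and copy what it writes there. The panel name and the
"HV:MEASURED[0]" variable string below are made-up placeholders:

   // create a "Test/HV" history panel with one plotted variable;
   // the "EventName:TagName" string format follows what the history plot editor writes
   char var[64] = "HV:MEASURED[0]";
   db_set_value(hDB, 0, "/History/Display/Test/HV/Variables",
                var, sizeof(var), 1, TID_STRING);
   char timescale[32] = "1h";
   db_set_value(hDB, 0, "/History/Display/Test/HV/Timescale",
                timescale, sizeof(timescale), 1, TID_STRING);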
  2698   29 Jan 2024   Konstantin Olchanski   Forum   number of entries in a given ODB subdirectory ?
> https://chat.openai.com/share/d927c78d-9914-4413-ab5e-3b0e5d173132
> 
> Please note that you can never be 100% sure that the code from an LLM is correct

yup, it's wrong all right. it should be looping until db_enum_key() returns "no more keys",
not from 0 to N. this is the same as iterating over unix filesystem directory entries: opendir(),
loop readdir() until it returns NULL (no more entries), closedir().
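
A sketch of that filesystem analogy (plain POSIX, nothing MIDAS-specific):

   #include <dirent.h>
   #include <stdio.h>

   DIR *dir = opendir(".");
   struct dirent *de;
   while ((de = readdir(dir)) != NULL)   // loop until readdir() says "no more entries"
      printf("%s\n", de->d_name);
   closedir(dir);

The db_enum_key() loop in the a.cc attachment above follows exactly the same pattern, with
DB_NO_MORE_SUBKEYS playing the role of the NULL return.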

K.O.
  2697   29 Jan 2024   Pavel Murat   Forum   a scroll option for "add history variables" window?
> If you have some ideas on how to better present 100500 history variables, please shout out!

let me share some thoughts. In the particular case which led to the original posting, 
I was using a multi-threaded driver and monitoring several pieces of equipment with different device drivers. 
In fact, it was not even hardware, but processes running on different nodes of a distributed computer farm.
To reduce the number of frontends, I was combining the output of what could've been implemented 
as multiple slow control drivers, and got 100+ variables in the list - hence the scrolling experience.

At the same time, a list of control variables per driver could've been kept relatively short.

So if the list of control variables of a slow control frontend were split in the History GUI not only by the 
piece of equipment, but, within the equipment "folder", also by the driver, that might help improve 
the scalability of the graphical interface. 

Maybe that is already implemented and it is just a matter of me not finding the right base class / example 
in the MIDAS code.

-- regards, Pasha
  2696   29 Jan 2024   Pavel Murat   Forum   number of entries in a given ODB subdirectory ?
Hi Stefan, Konstantin, 

thanks a lot for your responses - they are very instructive, and it is good to have them archived in the forum.
 
Konstantin, as Stefan already noticed, in this particular case the race condition is not really a concern.

Stefan, the ChatGPT-generated code snippet is awesome! (teach a man how to fish ...)

-- regards, Pasha
  2695   28 Jan 2024   Stefan Ritt   Bug Report   Warnings about ODB keys that haven't been touched for 10+ years
> Please run "git blame" to find out who added that check.

OK ok, it was me. But actually in 2003. I hope that this being more than 20 years ago excuses me for not remembering it ;-)

> I think this age check should be removed, but there must be *some* check for invalid/bogus timestamps. Or 
> not - we should check whether MIDAS cares about timestamps at all; if ODB functions never use/look at the timestamp, 
> maybe we are okay with bogus timestamps. They may look funny in the odb editor, but that's it.

I changed the code to complain only about timestamps more than 1h in the future. This should
avoid glitches when switching to/from daylight saving time.

Stefan
  2694   28 Jan 2024   Stefan Ritt   Forum   number of entries in a given ODB subdirectory ?
I guess you won't change your FPGA configuration just when you start a run, so I don't consider the race
condition very crucial (although KO is correct, it is there).

I guess rather than any pseudocode you want to see real working code (db_get_num_entries() does not exist!), right?

The easiest these days is to ask ChatGPT. MIDAS has been open source for a long time, so it has been used
to train modern Large Language Models. Attached is the result. Here is the direct link from where you can
copy the code:

https://chat.openai.com/share/d927c78d-9914-4413-ab5e-3b0e5d173132

Please note that you can never be 100% sure that the code from an LLM is correct, so always compile and debug it.
Nevertheless, it's always easier to start from some existing code, even if there is a danger that it's not perfect.

Best,
Stefan
Attachment 1: Screenshot_2024-01-29_at_07.20.50.png
  2693   28 Jan 2024   Konstantin Olchanski   Forum   the logic of handling history variables ?
MIDAS history is very simple:

from your frontend, you write your history data to ODB /eq/xxx/variables (see below)
mlogger has a hotlink to all /eq/*/variables and it will "see" the new data, write it to history file (see below)
you should see the history file grow using "ls"

history web page in your browser sends a "give me more data" JSON-RPC request to mhttpd
mhttpd looks at the history file, if there is new data (file got bigger) it sends it to the web page
web page shows the new data.

where things usually go wrong:

- mlogger only looks for new history variables on startup and on begin-of-run. if you add new stuff in your frontend, you 
will not see it until you restart mlogger or start a new run.
- mlogger only looks at history data if corresponding "/eq/xxx/common/log history" is non-zero. for best effect, set it to 
"1". (or "0" to turn history off).
- history file is not growing, likely mlogger does not "see" your new data
- timestamps of stuff in /eq/xxx/variables are not getting updated, likely frontend is not writing them, and there is no 
new data for mlogger to "see" and write to file.

Frontend has several ways of writing to /eq/xxx/variables:

- write to ODB directly using ODB API db_set_data(), mvodb->Wx(), etc. this is the most foolproof method. use in 
conjunction with a printf() statement to make sure you actually do write to ODB. Sometimes your frontend event loop fails 
to run, a bug/failure that has nothing to do with midas history.
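
A minimal sketch of this first, direct-write method, assuming a hypothetical equipment "HV" with a single
float variable "Demand" (the printf() is the sanity check mentioned above):

   float demand = 1500.0f;   // hypothetical value just read from the hardware
   int status = db_set_value(hDB, 0, "/Equipment/HV/Variables/Demand",
                             &demand, sizeof(demand), 1, TID_FLOAT);
   printf("wrote Demand = %f, status = %d\n", demand, status);  // confirm the write actually happened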

- generate a midas event and set the per-equipment "write event to ODB" flag (RO_ODB for mfe.c frontends); the mfe/tmfe 
framework will then write the event data to ODB: each data bank will be written to /eq/xxx/variables/BANKNAME, and the data type 
is taken from the event data bank definition.

This second method sometimes malfunctions; typical problems are a missing RO_ODB in the equipment table, or the equipment table in 
ODB overwriting the value in the source code (this is confusing in mfe.c frontends).

The least likely failure is "/eq/xxx/common/log history" set to a bogus value. Normal values are 0=history disabled, 1=history 
enabled; other values are only needed if you do not want mlogger to record history as often as you generate it, e.g. you 
update /eq/xxx/variables once per second, but you want mlogger to record it only once per minute.

I hope this helps.

P.S. I notice your equipment tables do not have RO_ODB, so if you use the 2nd method (write history via event data banks), 
it will not work.


K.O.


> Dear MIDAS developers,
> 
> I'm trying to understand handling of the history (slow control) variables in MIDAS,
> and it seems that the behavior I'm observing is somewhat counterintuitive. 
> Most likely, I just do not understand the implemented logic.
> 
> As it is rather difficult to report on the behavior of the interactive program,
> I'll describe what I'm doing and illustrate the report with the series of attached 
> screenshots showing the history plots and the status of the run control at different 
> consecutive points in time.
> 
> Starting with the landscape:
> 
> - I'm running MIDAS, git commit=30a03c4c (the latest, as of today).
> 
> - I have built the midas/examples/slowcont frontend with the following modifications.
>   (the diffs are enclosed below):
>   
>   1) the frequency of the history updates is increased from 60sec/10sec to 6sec/1sec
>      and, in the hope of having continuous updates, I replaced (RO_RUNNING | RO_TRANSITIONS)
>      with RO_ALWAYS.
> 
>   2) for convenience of debugging, midas/drivers/nulldrv.cxx is replaced with its clone,
>      which instead of returning zeroes in each channel, generates a sine curve:
> 
>                   V(t) = 100*sin(t/60)+10*channel
> 
> - an active channel in /Logger/History is chosen to be FILE
> 
> - /History/LoggerHistoryChannel is also set to FILE 
> 
> - I'm running mlogger and modified, as described, 'scfe' frontend from midas/examples/slowcont
> 
> - the attached history plots include three (0,4 and 7) HV:MEASURED channels
> 
> 
> Now, the observations:
> 
> 1) the history plots are updated only when a new run starts, no matter how hard
>    I'm trying to update them by clicking on various buttons.
> 
>    The attached screenshots show the timing sequence of the run control states
>    (with the times printed) and the corresponding history plots. 
> 
>    The "measured voltages" change only when the next run starts - the voltage graphs 
>    break only at the times corresponding to the vertical green lines.
> 
> 2) No matter for how long I wait within the run, the history updates are not happening.
> 
> 3) if the time difference between the two run starts gets too large,
>    the plotted time dependence starts getting discontinuities
> 
> 4) finally, if I switch the logging channel from FILE to MIDAS (activate the MIDAS
>    channel in /Logger/History and set /History/LoggerHistoryChannel to MIDAS),
>    the updates of the history plots simply stop.
> 
> MIDAS feels as a great DAQ framework, so I would appreciate any suggestion on 
> what I could be doing wrong. I'd also be happy to give a demo in real time 
> (via ZOOM/SKYPE etc).
> 
> -- much appreciate your time, thanks, regards, Pasha
>     
> ------------------------------------------------------------------------------
> diff --git a/examples/slowcont/scfe.cxx b/examples/slowcont/scfe.cxx
> index 11f09042..c98d37e8 100644
> --- a/examples/slowcont/scfe.cxx
> +++ b/examples/slowcont/scfe.cxx
> @@ -24,9 +24,10 @@
>  #include "mfe.h"
>  #include "class/hv.h"
>  #include "class/multi.h"
> -#include "device/nulldev.h"
>  #include "bus/null.h"
>  
> +#include "nulldev.h"
> +
>  /*-- Globals -------------------------------------------------------*/
>  
>  /* The frontend name (client name) as seen by other MIDAS clients   */
> @@ -74,11 +75,11 @@ EQUIPMENT equipment[] = {
>       0,                         /* event source */
>       "FIXED",                   /* format */
>       TRUE,                      /* enabled */
> -     RO_RUNNING | RO_TRANSITIONS,        /* read when running and on transitions */
> -     60000,                     /* read every 60 sec */
> +     RO_ALWAYS,        /* read when running and on transitions */
> +     6000,                     /* read every 6 sec */
>       0,                         /* stop run after this event limit */
>       0,                         /* number of sub events */
> -     10000,                     /* log history at most every ten seconds */
> +     1000,                     /* log history at most every one second */
>       "", "", ""} ,
>      cd_hv_read,                 /* readout routine */
>      cd_hv,                      /* class driver main routine */
> @@ -93,8 +94,8 @@ EQUIPMENT equipment[] = {
>       0,                         /* event source */
>       "FIXED",                   /* format */
>       TRUE,                      /* enabled */
> -     RO_RUNNING | RO_TRANSITIONS,        /* read when running and on transitions */
> -     60000,                     /* read every 60 sec */
> +     RO_ALWAYS,        /* read when running and on transitions */
> +     6000,                     /* read every 6 sec */
>       0,                         /* stop run after this event limit */
>       0,                         /* number of sub events */
>       1,                         /* log history every event as often as it changes (max 1 Hz) */
> ------------------------------------------------------------------------------
> [test_001]$ diff ../midas/examples/slowcont/nulldev.cxx ../midas/drivers/device/nulldev.cxx 
> 13d12
> < #include <math.h>
> 150,154c149,150
> <    if (channel < info->num_channels) {
> <      // *pvalue = info->array[channel];
> <      time_t t = time(NULL);;
> <      *pvalue = 100*sin(M_PI*t/60)+10*channel;
> <    }
> ---
> >    if (channel < info->num_channels)
> >       *pvalue = info->array[channel];
> ------------------------------------------------------------------------------
  2692   28 Jan 2024   Konstantin Olchanski   Forum   slow control frontends - how much do they sleep and how often their drivers are called?
> I have implemented a number of slow control frontends which are directed to update the 
> history once in every 10 sec, and they do just that. 

I suggest that you switch from the old mfe.c frontend framework to the new tmfe framework, which was 
designed to solve exactly this type of problem.

Look at .../midas/progs/tmfe_example*.cxx

You have a choice of:
- single-threaded frontend: most robust, no race conditions, but readout is interrupted during 
begin/end of run.
- two-threaded frontend: your periodic equipments run in one thread, the midas loop and RPC run in a 
different thread, and you have to handle locking yourself.
- you can run each of your equipments in its own thread without help from the framework; it is 
obvious how to do it if you can program C++ threads: "new std::thread" to create/start a thread, 
stop threads using a binary flag, thread->join() to reap them at the end (or the thread sanitizer will 
complain). A minimal sketch follows below.
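
A minimal sketch of that last, do-it-yourself option, using nothing but plain C++ threads
(read_my_equipment() is a placeholder for your periodic readout):

   #include <atomic>
   #include <chrono>
   #include <thread>

   std::atomic<bool> run_flag{true};

   std::thread periodic_thread([&]() {
      while (run_flag) {                      // the binary flag mentioned above
         read_my_equipment();                 // placeholder: your periodic readout
         std::this_thread::sleep_for(std::chrono::seconds(10));
      }
   });

   // ... frontend runs ...

   run_flag = false;          // ask the thread to stop
   periodic_thread.join();    // reap it, or the thread sanitizer will complain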

K.O.
  2691   28 Jan 2024   Konstantin Olchanski   Forum   dump history FILE files
$ cat mhf_1697445335_20231016_run_transitions.dat
event name: [Run transitions], time [1697445335]
tag: tag: /DWORD 1 4 /timestamp
tag: tag: UINT32 1 4 State
tag: tag: UINT32 1 4 Run number
record size: 12, data offset: 1024
...

data is in fixed-length record format. from the file header, you read "record size" is 12 and data starts at offset 1024.

the 12 bytes of the data record are described by the tags:
4 bytes of timestamp (DWORD, unix time)
4 bytes of State (UINT32)
4 bytes of "Run number" (UINT32)

endianness is "local endian", which means "little endian", as we have no big-endian hardware anymore to test endian conversions.

file format is designed for reading using read() or mmap().

and you are right, mhdump does not work on these files. I guess I can write another utility that does what I just described and spews the numbers to stdout.
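
A sketch of such a utility for this particular file, hard-coding the record size of 12 bytes and the
data offset of 1024 reported in the header above (a real tool would parse those numbers from the header
instead):

   #include <stdio.h>
   #include <stdint.h>

   int main() {
      FILE *f = fopen("mhf_1697445335_20231016_run_transitions.dat", "rb");
      if (!f) return 1;
      fseek(f, 1024, SEEK_SET);               // "data offset: 1024" from the file header
      struct { uint32_t timestamp, state, run_number; } rec;   // one 12-byte record, local endian
      while (fread(&rec, sizeof(rec), 1, f) == 1)
         printf("%u %u %u\n", rec.timestamp, rec.state, rec.run_number);
      fclose(f);
      return 0;
   }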

K.O.
  2690   28 Jan 2024   Konstantin Olchanski   Forum   a scroll option for "add history variables" window?
> > Have you updated to the current midas version? This issue has been fixed a while ago. 
> 
> Hi Stefan, thanks a lot! I pulled from the head, and the scrolling works now. -- regards, Pasha

Right, I remember running into this problem, too.

If you have some ideas on how to better present 100500 history variables, please shout out!

K.O.
  2689   28 Jan 2024   Konstantin Olchanski   Forum   History tags
> This part of the system has been designed by KO, so he should reply here.

That's right. Some of this stuff is historical gibberish that is no longer needed 
for FILE and SQL histories.

/History/Events is needed to create a persistent mapping between history event names 
and history event id's (at some point the history event id was the same as the equipment event 
id, with the obvious problems when equipment event ids are duplicated, reused, 
renamed, or deleted).

/History/Tags was used by the history editor to speed up "give me all tag names 
for this history event name". With the "MIDAS" history storage this required 
reading a lot of data from disk. With the "FILE" history and cached ZFS SSD, disk 
access is much cheaper and caching history event names and tags in odb is no 
longer necessary.

/History/Tags should probably be removed (but check that nobody uses it first).

/History/Events has to remain as long as "MIDAS" history storage is still used.

K.O.
  2688   28 Jan 2024   Konstantin Olchanski   Bug Report   Warnings about ODB keys that haven't been touched for 10+ years
> I don't immediately see a reason for saying that if a DB key is older than 10 yrs, it may not be valid.
> 
> However, it would be worth learning what was the logic behind choosing 10 yrs as a threshold. 
> If 10 is just a more or less arbitrary number, changing 10 --> 100 seems to be the way to go.

Please run "git blame" to find out who added that check.

If I remember right, it was added to complain/correct dates in the future.

I think the oldest experiment at TRIUMF where we can still load an odb into current MIDAS is TWIST,
now about 25 years old. The purpose of loading the odb would be to test the history function,
to see if we can look at 10-15 year old histories. (TWIST history is in the latest FILE format,
so it will load.)

I think this age check should be removed, but there must be *some* check for invalid/bogus timestamps. Or 
not - we should check whether MIDAS cares about timestamps at all; if ODB functions never use/look at the timestamp, 
maybe we are okay with bogus timestamps. They may look funny in the odb editor, but that's it.

K.O.
  2687   28 Jan 2024   Konstantin Olchanski   Forum   number of entries in a given ODB subdirectory ?
Very good question. It exposes a very nasty problem: the race condition between "ls" and "rm". While you are 
looping over directory entries, somebody else is completely permitted to remove one of the files (or add more 
files), making the output of "ls" incorrect (it contains non-existent/removed files, and does not contain newly added 
files). Even the simple count of the number of files can be wrong.

Exactly the same problem exists in ODB. As you loop over directory entries, some other ODB client can remove or 
add new entries.

To help with this, I considered adding a db_ls() function that would take the odb lock, atomically iterate over 
a directory and return an std::vector<std::string> with the names of all entries (the current odb iterator returns ODB 
handles that may be invalid if the corresponding entry was removed while we were iterating). Unfortunately the 
delete/add race condition remains: some returned entries may be invalid or missing.

For your specific application, you can swear that you will never add/delete files "at the wrong time", and you 
will not see this problem until one of your users writes a script that uses odbedit to add/remove subdirectory 
entries exactly at the wrong time (you run your "ls" in the BeginRun() handler of your frontend, they run their 
"rm" from theirs, so both run at the same time - a race condition).

Closer to your question, I think it is simplest to always iterate over the subdirectory, collect names of all 
entries, then work with them:

std::vector<std::string> names;
HNDLE hSubkey;
KEY subkey;
for (int i=0; db_enum_key(hDB, hKey, i, &hSubkey) != DB_NO_MORE_SUBKEYS; i++) {
  if (db_get_key(hDB, hSubkey, &subkey) == DB_SUCCESS)
    names.push_back(subkey.name);
}
for (const auto& name : names)
  work_on(name);

instead of:

size_t n = db_get_num_entries();
for (size_t i=0; i<n; i++) {
   std::string name = sprintf("FPGA%d", i);
   work_on(name);
}

K.O.


> Dear MIDAS experts, 
> 
> - I have a detector configuration with a variable number of hardware components - FPGA's receiving data 
>   from the detector. They are described in ODB using a set of keys ranging 
>   from "/Detector/FPGAs/FPGA00" .... to "/Detector/FPGAs/FPGA68".
>   Each of "FPGAxx" corresponds to an ODB subdirectory containing parameters of a given FPGA. 
> 
>   The number of FPGAs in the detector configuration is variable - [independent] commissioning 
>   of different detector subsystems involves different number of FPGAs.
> 
>   In the beginning of the data taking one needs to loop over all of "FPGAxx", 
>   parse the information there and initialize the corresponding FPGAs.
> 
> The actual question sounds rather trivial - what is the best way to implement a loop over them? 
> 
> - it is certainly possible to have the number of FPGAs introduced as an additional configuration parameter, 
>   say, "/Detector/Number_of_FPGAs", and this is what I have resorted to right now.
> 
> However, not only does that look ugly, but it also opens a way to make a mistake 
>   and have the Number_of_FPGAs, introduced separately, different from the actual number 
>   of FPGA's in the detector configuration.
>  
> I therefore wonder if there could be a function, smth like 
> 
>     int db_get_n_keys(HNDLE hdb, HNDLE hKeyParent)
> 
> returning the number of ODB keys with a common parent, or, to put it simpler, 
> a number of ODB entries in a given subdirectory.
> 
> And if there were a better solution to the problem I'm dealing with, knowing it might be helpful 
> for more than one person - configuring detector readout may require to deal with a variable number 
> of very different parameters.
> 
> -- many thanks, regards, Pasha
  2686   28 Jan 2024   Pavel Murat   Forum   number of entries in a given ODB subdirectory ?
Dear MIDAS experts, 

- I have a detector configuration with a variable number of hardware components - FPGA's receiving data 
  from the detector. They are described in ODB using a set of keys ranging 
  from "/Detector/FPGAs/FPGA00" .... to "/Detector/FPGAs/FPGA68".
  Each of "FPGAxx" corresponds to an ODB subdirectory containing parameters of a given FPGA. 

  The number of FPGAs in the detector configuration is variable - [independent] commissioning 
  of different detector subsystems involves different number of FPGAs.

  In the beginning of the data taking one needs to loop over all of "FPGAxx", 
  parse the information there and initialize the corresponding FPGAs.

The actual question sounds rather trivial - what is the best way to implement a loop over them? 

- it is certainly possible to have the number of FPGAs introduced as an additional configuration parameter, 
  say, "/Detector/Number_of_FPGAs", and this is what I have resorted to right now.

However, not only does that look ugly, but it also opens a way to make a mistake 
  and have the Number_of_FPGAs, introduced separately, different from the actual number 
  of FPGA's in the detector configuration.
 
I therefore wonder if there could be a function, smth like 

    int db_get_n_keys(HNDLE hdb, HNDLE hKeyParent)

returning the number of ODB keys with a common parent, or, to put it simpler, 
a number of ODB entries in a given subdirectory.

And if there were a better solution to the problem I'm dealing with, knowing it might be helpful 
for more than one person - configuring detector readout may require to deal with a variable number 
of very different parameters.

-- many thanks, regards, Pasha