  3082 | 22 Sep 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline
> I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change of manalyzer.

That, and add a "clear()" method that resets odbxx state to "empty". I will call odbxx.clear() everywhere where I call "delete fOdb;" (TARunInfo::dtor and other places).
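
For illustration, a minimal sketch of what such a clear() method could look like, assuming the ODB dump is indeed held in static members of midas::odb as described later in this thread (the member name s_odb_dump and the exact mechanics are hypothetical):

#include <string>

// hypothetical sketch, not actual odbxx code:
class odb {
   static std::string s_odb_dump; // hypothetical static holding the offline ODB dump
public:
   // reset the offline state to "empty", so the next input file starts clean
   static void clear() { s_odb_dump.clear(); }
};

// in manalyzer, wherever "delete fOdb;" is called (e.g. TARunInfo::~TARunInfo):
//    delete fOdb;
//    midas::odb::clear();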

K.O.
  3083 | 22 Sep 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline
> > ....Before commit of this patch, can you confirm the RunInfo destructor
> > deletes this ODB stuff from odbxx? manalyzer takes object life times very seriously.
> 
> The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another 
> call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.

This is the behaviour we need to modify.

> > Of course with this patch extending manalyzer to process two or more runs at the same time becomes impossible.
> Yes, I hadn't realised that was an option.

It is an option I would like to keep open. There are not too many use cases, but imagine a "split brain" experiment
that has two MIDAS instances recording data into two separate midas files. (If LIGO were to use MIDAS,
consider LIGO Hanford and LIGO Livingston.)

Assuming data in these two data sets have common precision timestamps,
our task is to assemble data from two input files into single physics events. The analyzer will need
to read two input files, each file with its run number, its own ODB dump, etc., process the midas
events (unpack, calibrate, filter, etc.), look at the timestamps, and assemble the data into physics events.

This trivially generalizes into reading 2, 3, or more input files.
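
To make the assembly step concrete, here is a minimal sketch of the timestamp-driven merge for the two-file case (the SubEvent structure and names are placeholders; real code would also unpack, calibrate and apply a coincidence window):

#include <cstdint>
#include <vector>

struct SubEvent { uint64_t ts; int source; }; // placeholder sub-event with a precision timestamp

// merge two timestamp-ordered streams into one ordered stream (sketch only)
std::vector<SubEvent> merge(const std::vector<SubEvent> &a, const std::vector<SubEvent> &b)
{
   std::vector<SubEvent> out;
   size_t i = 0, j = 0;
   while (i < a.size() || j < b.size()) {
      // take from stream a when b is exhausted or a has the earlier timestamp
      bool take_a = (j == b.size()) || (i < a.size() && a[i].ts <= b[j].ts);
      out.push_back(take_a ? a[i++] : b[j++]);
   }
   return out;
}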

> For that to work I guess the aforementioned static members could be made thread local storage, and 
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a 
> class member or something.

manalyzer is already multithreaded; having to keep track of which thread should see which odbxx global object
seems like an abuse of the idea and intent of thread-local storage.

> Note that I missed doing the same for the end of run event, which should probably also be added.

Ideally, the memory sanitizer will flag this for us and complain about anything that odbxx.clear() fails to free.

K.O.
  3084 | 22 Sep 2025 | Konstantin Olchanski | Info | switch midas to next c++
As part of discussions with Stefan during the MIDAS workshop,
here is an update on the supported versions of c++ on different platforms.

- el6/SL6 is gone, gcc 4.4.7, no c++11
- el7/CentOS-7 is out the door, gcc 4.8.5, full c++11, no c++14, no c++17
- el8 was a stillborn child of RedHat
- el9  - gcc 11.5, with 12, 13 and 14 available; incomplete c++26 and c++23, experimental c++20, default c++17
- el10 - gcc 14.2; incomplete c++26 and c++23, experimental c++20, default c++17

- U-18 - gcc  7.5, experimental and incomplete c++17
- U-20 - gcc  9.4 default, 10.5 available, experimental and incomplete c++17
- U-22 - gcc 11.4 default, 12.3 available, default c++17
- U-24 - gcc 13.3 default, 14.2 available, default c++17

- D-11 - gcc 10.2, experimental c++17
- D-12 - gcc 12.2, default c++17

- MacOS 15.2 - llvm/clang 16, default c++17
- MacOS 15.7 - llvm/clang 17, default c++17

Information taken from the compiler C++ support pages:

GCC C++ support: https://gcc.gnu.org/projects/cxx-status.html
LLVM clang C++ support: https://clang.llvm.org/cxx_status.html
GCC libstdc++ C++ support: https://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html

This suggests that c++17 is available on all current platforms.
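
A quick way to check what a given compiler actually defaults to is to print the __cplusplus macro (201703 means c++17):

// probe.cxx: print the C++ standard the compiler defaults to
#include <cstdio>
int main()
{
   printf("%ld\n", (long) __cplusplus); // e.g. 201703 for c++17, 202002 for c++20
   return 0;
}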

K.O.

P.S. ROOT moved to C++17 as of release 6.30 on November 6, 2023.
  3087 | 22 Sep 2025 | Konstantin Olchanski | Info | switch midas to c++17
Following discussions at the MIDAS workshop, we propose to move MIDAS from c++11 to c++17. There are 
many new features and we want to start using some of them.

Per my previous message https://daq00.triumf.ca/elog-midas/Midas/3084,
c++17 is available on current MacOS, U-22 and newer, el9 and newer, D-12 and newer.

(ROOT moved to C++17 as of release 6.30 on November 6, 2023)

As I reported earlier, MIDAS already builds with c++23 on U-24, and this move does not require any 
actual code changes other than a bump of c++ version in CMakeLists.txt and Makefile.
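
For reference, the CMake side of such a bump is essentially a one-line change (a sketch; the exact location in MIDAS's CMakeLists.txt may differ):

# CMakeLists.txt: request c++17 instead of c++11
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON) # fail instead of silently falling back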

Please let us know if this change will cause problems or if you think that we should move to an older 
c++ (c++14) or newer c++ (c++20 or c++23 or c++26).

If we do not hear anything, we will implement this change in about 2-3 weeks.

K.O.
  3088 | 22 Sep 2025 | Konstantin Olchanski | Info | removal of ROOT support in mlogger
Historically, building MIDAS with ROOT caused us many problems - build failures because of c++ version 
mismatch, CFLAGS mismatch; run-time failures because of ROOT library mismatches; etc, etc.

Following discussions at the MIDAS Workshop, we think we should finally bite the bullet and remove ROOT 
support from MIDAS:

- remove support for writing data in ROOT TTree format in mlogger (rmlogger)
- remove support for ROOT in mana.c based analyzer (which itself we propose to remove)
- remove ROOT helper functions in rmidas.h
- remove ROOT support in cmake
- remove rmlogger and rmana

This change will not affect the rootana and manalyzer packages; they will continue to be built with 
ROOT support if ROOT is available.

Right now, we cannot remember any experiment that uses the ROOT TTree output function in mlogger. 

If you use this feature, we very much would like to hear from you. Please contact Stefan or myself or 
reply to this message.

As a replacement for rmlogger, we could implement identical or similar functionality using a manalyzer-
based custom logger, but we need at least one user who can test it for us and confirm that it works 
correctly.

K.O.
  3089 | 22 Sep 2025 | Konstantin Olchanski | Info | obsolete mana.c removal
Following discussions at the MIDAS workshop and the proposed removal of support for ROOT, the very obsolete mana.c 
analyzer framework has reached the end of the line.

Right now we cannot remember any experiment that uses a mana.c based analyzer. Most experiments use analyzers based 
on the rootana package (developed for ALPHA-1 at CERN) and on the manalyzer package (developed for ALPHA-2 and 
ALPHA-g at CERN, with multithreading support contributed by Joseph McKenna).

If you know of any experiment that uses a mana.c based analyzer, please let us know. We can help with building it 
using an outside-of-midas local copy of mana.c or help with migration to a newer framework (or migration to a 
framework-free standalone analyzer).

If we do not hear from anybody, we will remove mana.c (and rmana) at the same time as we remove ROOT support from 
mlogger (rmlogger).

K.O.
  3091 | 23 Sep 2025 | Konstantin Olchanski | Info | obsolete mana.c removal
> Hi, at the LEM Experiment at PSI, we still use mana.c and would like to keep it until the end of 2026, when we will enter a long shutdown.

Excellent, good to hear from you! Once we remove ROOT support, rmana.o will be gone; only mana.o (no ROOT) will remain. Will this break your builds?

One solution could be to copy mana.c from MIDAS into your source tree and compile/link it from there (not from MIDAS).

Perhaps the way to proceed is to create a test branch with ROOT and mana.c removed; you can try it,
report success/failure, and we go from there.

We should schedule this work for when both of us have a block of free time to work on it.

K.O.
  3092 | 23 Sep 2025 | Konstantin Olchanski | Info | 64-bit time_t
To record discussion with Stefan regarding 64-bit time_t

To remember:

signed   32-bit time_t will overflow in 2038 (soon enough)
unsigned 32-bit time_t will overflow in 2106 ("not my problem")

https://en.wikipedia.org/wiki/Year_2038_problem
https://wiki.debian.org/ReleaseGoals/64bit-time

64-bit Linux uses 64-bit time_t since as far back as el6.

MIDAS uses unsigned 32-bit (DWORD) time-in-seconds in many places (ODB, event 
headers, event buffers, etc)

MIDAS also uses unsigned 32-bit (DWORD) time-in-milliseconds in many places.

All time arithmetic is done using unsigned 32-bit math; these time calculations
are good until year 2106, at which point they will wrap around
(but unsigned time differences will still come out correctly).

So we do not need to do anything, but...

To reduce confusion between the different time types, we will probably
introduce a 32-bit-time-in-seconds data type (i.e. time32_t alias for uint32_t),
and a 32-bit-time-in-milliseconds data type (i.e. millitime32_t alias for 
uint32_t). Also rename ss_time() to ss_time32() and ss_millitime() to 
ss_millitime32().

This should help avoid accidental mixing of MIDAS 32-bit time, system 64-bit 
time and MIDAS 32-bit time-in-milliseconds.

(confusion between time-in-seconds and time-in-milliseconds happened several 
times in MIDAS code).
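
A minimal sketch of the proposed aliases and renamed accessors (the names are from the proposal above; the implementations shown here are illustrative stand-ins, not the actual ss_time()/ss_millitime() code):

#include <cstdint>
#include <chrono>

typedef uint32_t time32_t;      // MIDAS time-in-seconds, wraps around in 2106
typedef uint32_t millitime32_t; // MIDAS time-in-milliseconds

time32_t ss_time32()
{
   using namespace std::chrono;
   return (time32_t) duration_cast<seconds>(system_clock::now().time_since_epoch()).count();
}

millitime32_t ss_millitime32()
{
   using namespace std::chrono;
   return (millitime32_t) duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

// unsigned 32-bit math keeps time differences correct across the wraparound:
//    time32_t start = ss_time32();
//    ...
//    time32_t elapsed = ss_time32() - start; // valid even if the clock wrapped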

There will be additional discussion and announcements if we go ahead with these 
changes.

K.O.
  3094 | 23 Sep 2025 | Konstantin Olchanski | Info | long history variable names
To record discussion with Stefan about long history variable names.

We have several requests to remove the 32-byte limit on history variable names.

Presently, history variable names are formed from two 32-byte strings: history event name and 
history tag name:

* history event name is usually same as the equipment name (also a 32-byte string)
* history tag name is composed from /eq/xxx/variables/yyy name (also a 32-byte string) or from 
names in /eq/xxx/variables/names and "names zzz", which can have arbitrary length (and tag name 
would have to be truncated).

This worked well for "per-equipment" history: history events corresponded to equipment/variables 
and all data from equipment/variables were written together in one go. (This is very inefficient if 
the value of only one variable is updated.)

Then at some point we implemented "per-variable" history:

* history event name is the equipment name (32-byte string) plus the /eq/xxx/variables/vvv variable name 
(also a 32-byte string). (Obviously truncation is quite possible.)
* history tag name is unchanged (also can be truncated)

With "per-variable" history, history events correspond to individual variables (ODB entries) in 
/eq/xxx/variables. If value of one variable is updated, only that variable is written to ODB. This 
is much more efficient. (If variable is an array, the whole array is written, is variable is a 
subdirectory, the whole subdirectory is written).

We considered even finer granularity, writing to the history file only the one value that changed, but 
decided against slicing the data too fine. (For arrays, MIDAS frontends usually update all values 
of an array, as in "array of 10 temperatures" or "array of 4000 high voltages".)

Many years later, we have the SQL history and the FILE history, which do not have the 32-byte limit 
on history event names and history tag names (no limit in the MIDAS C++ code; MySQL and PgSQL have 
limits on table name and column name lengths, 64 bytes and 63 bytes respectively, best I can 
tell).

But the API still uses the MIDAS "struct TAG" with the 32-byte tag name length limit.

It is pretty easy to change the API to use a new "struct TAG_CXX" with std::string unlimited-
length tag names, but the old MIDAS history will be unable to deal with long names. Hence
the discussion about removing the old MIDAS history and keeping only the FILE and SQL history 
(plus the mhdump and mh2sql tools to convert old history files to the new formats).
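
For illustration, the change amounts to something like this (the TAG layout is from midas.h; TAG_CXX is the name from the discussion, and its exact fields are an assumption):

/* existing MIDAS tag (midas.h): name is limited to NAME_LENGTH (32) bytes */
typedef struct {
   char  name[NAME_LENGTH];
   DWORD type;    /* TID_xxx data type */
   DWORD n_data;  /* number of array elements */
} TAG;

/* hypothetical TAG_CXX as discussed above (a sketch, not committed code): */
struct TAG_CXX {
   std::string name;   // unlimited-length tag name
   uint32_t    type;   // TID_xxx data type
   uint32_t    n_data; // number of array elements
};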

(Some code in mhttpd may need to be corrected for long history names. The javascript code should be 
okay; the history plot code may need adjustment to display pathologically long names: use a small 
font, truncate, etc.)

K.O.
  3095 | 23 Sep 2025 | Konstantin Olchanski | Info | switch midas to c++17
> perhaps c++20? - std::format is an immediately useful feature. --regards, Pasha

confirmed. std::format is an improvement over K&R C printf().

but seems unavailable on U-20 and older, requires --std=c++20 on U-24 and MacOS.

but also available as a standalone library: https://github.com/fmtlib/fmt
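
For the record, a minimal side-by-side example (needs --std=c++20 for <format>, or the fmt library with fmt::format instead of std::format):

#include <format>
#include <cstdio>

int main()
{
   std::string s = std::format("run {} rate {:.1f} kB/s", 1234, 56.78);
   printf("%s\n", s.c_str());
   printf("run %d rate %.1f kB/s\n", 1234, 56.78); // the printf() equivalent
   return 0;
}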

myself, I use printf() and msprintf(), I think std::format is in the "too little, too late to save C++" 
department.

K.O.
  Draft | 26 Sep 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline
> > ...I talked to KO and he agreed that you then commit your proposed change of manalyzer 
> Merged and pushed.

negative. as I already 
  1833 | 14 Feb 2020 | Konrad Briggl | Forum | Writing Midas Events via FPGAs
Hello Stefan,
is there a difference for the later data processing (after writing the ring buffer blocks)
if we write single events or multiple events in one rb_get_wp - memcpy - rb_increment_wp cycle?
Both Marius and I have seen some inconsistencies in the number of events produced, as reported on the status page, when writing multiple events in one go,
so I was wondering whether this is due to us treating the buffer badly or due to the way midas handles the events after that.

Given that we produce the full event in our (FPGA) domain, an option would be to always copy one event at a time from the DMA buffer to the midas system buffer in a loop.
The question is whether there is a difference (for midas) between
[pseudo code, much simplified]

// option A: one event per rb_get_wp/rb_increment_wp cycle
while (dma_read_index < last_dma_write_index) {
  if (rb_get_wp(rbh, (void **)&pdata, 10) != DB_SUCCESS) {
    dma_read_index += event_size; // skip this event (no retry)
    continue;
  }
  copy_n(dma_buffer, pdata, event_size);
  rb_increment_wp(rbh, event_size);
  dma_read_index += event_size;
}

and

// option B: as many events as fit into one ring buffer block per cycle
while (dma_read_index < last_dma_write_index) {
  if (rb_get_wp(rbh, (void **)&pdata, 10) != DB_SUCCESS) {
     ...
  }
  total_size = max_n_events_that_fit_in_rb_block();
  copy_n(dma_buffer, pdata, total_size);
  rb_increment_wp(rbh, total_size);
  dma_read_index += total_size;
}

Cheers,
Konrad

> The rb_xxx functions are (thoroughly tested!) robust against high data rates, given that you use them as intended:
> 
> 1) Once you create the ring buffer via rb_create(), specify the maximum event size (overall event size, not bank size!). Later there is no protection any more, so if you obtain pdata from rb_get_wp, you can of course write 4GB to pdata, overwriting everything in your memory, causing a total crash. It's your responsibility to not write more bytes into pdata than 
> what you specified as max event size in rb_create()
> 
> 2) Once you obtain a write pointer to the ring buffer via rb_get_wp, this function might fail when the receiving side reads data slower than the producing side, simply because the buffer is full. In that case the producing side has to wait until space is freed up in the buffer by the receiving side. If your call to rb_get_wp returns DB_TIMEOUT, it means that the 
> function did not obtain enough free space for the next event. In that case you have to wait (like ss_sleep(10)) and try again, until you succeed. Only when rb_get_wp() returns DB_SUCCESS, you are allowed to write into pdata, up to the maximum event size specified in rb_create of course. I don't see this behaviour in your code. You would need something 
> like
> 
> do {
>    status = rb_get_wp(rbh, (void **)&pdata, 10);
>    if (status == DB_TIMEOUT)
>       ss_sleep(10);
> } while (status == DB_TIMEOUT);
> 
> Best,
> Stefan
> 
> 
> > Dear all,
> > 
> > we are creating Midas events directly inside an FPGA and send them off via DMA into the PC RAM. For reading out this RAM via Midas, the FPGA sends us a pointer to where it has written the last 4kB of data. We use this pointer to tell the ring buffer of midas where the new events are. The buffer looks something like:
> > 
> > // event 1
> > dma_buf[0] = 0x00000001; // Trigger and Event ID
> > dma_buf[1] = 0x00000001; // Serial number
> > dma_buf[2] = TIME; // time
> > dma_buf[3] = 18*4-4*4; // event size
> > dma_buf[4] = 18*4-6*4; // all bank size
> > dma_buf[5] = 0x11; // flags
> > // bank 0
> > dma_buf[6] = 0x46454230; // bank name
> > dma_buf[7] = 0x6; // bank type TID_DWORD
> > dma_buf[8] = 0x3*4; // data size
> > dma_buf[9] = 0xAFFEAFFE; // data
> > dma_buf[10] = 0xAFFEAFFE; // data
> > dma_buf[11] = 0xAFFEAFFE; // data
> > // bank 1
> > dma_buf[12] = 0x46454231; // bank name
> > dma_buf[13] = 0x6; // bank type TID_DWORD
> > dma_buf[14] = 0x3*4; // data size
> > dma_buf[15] = 0xAFFEAFFE; // data
> > dma_buf[16] = 0xAFFEAFFE; // data
> > dma_buf[17] = 0xAFFEAFFE; // data
> > 
> > // event 2
> > .....
> > 
> > dma_buf[fpga_pointer] = 0xXXXXXXXX;
> > 
> > 
> > And we do something like:
> > 
> > while (true) {
> >    // obtain buffer space
> >    status = rb_get_wp(rbh, (void **)&pdata, 10);
> >    fpga_pointer = fpga.read_last_data_add();
> > 
> >    wlen = fpga_pointer - last_fpga_pointer; // in 32-bit words
> >    copy_n(&dma_buf[last_fpga_pointer], wlen, pdata);
> >    rb_status = rb_increment_wp(rbh, wlen * 4); // in bytes
> > 
> >    last_fpga_pointer = fpga_pointer;
> > }
> > 
> > Leaving out the case where the dma_buf wraps around, this works fine for a small data rate. But if we increase the rate, the fpga_pointer also increases really fast and wlen gets quite big. Actually, it gets bigger than max_event_size, which is checked in rb_increment_wp, leading to an error. 
> > 
> > The problem is that a single event is actually not too big; rather, we have multiple events in the buffer which are read by midas in one step. So we think that in this case the function rb_increment_wp is actually comparing the wrong thing. Also, increasing max_event_size does not help.
> > 
> > Remark: dma_buf is volatile so memcpy is not possible here.
> > 
> > Cheers,
> > Marius
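
Putting Stefan's retry advice together with the one-event-per-iteration option from the top of this thread, a corrected producer loop might look like the sketch below (rbh, dma_buf, dma_read_index, last_dma_write_index and event_size are placeholders from the pseudocode above; event_size is in bytes, the DMA indices count 32-bit words):

// copy one event at a time from the DMA buffer into the midas ring buffer,
// retrying rb_get_wp() until the consumer has freed enough space
while (dma_read_index < last_dma_write_index) {
   char *pdata;
   int status;
   do {
      status = rb_get_wp(rbh, (void **) &pdata, 10);
      if (status == DB_TIMEOUT)
         ss_sleep(10); // buffer full: wait for the receiving side to catch up
   } while (status == DB_TIMEOUT);

   // dma_buf is volatile, so copy word by word instead of memcpy()
   volatile uint32_t *src = dma_buf + dma_read_index;
   uint32_t *dst = (uint32_t *) pdata;
   for (unsigned i = 0; i < event_size / 4; i++)
      dst[i] = src[i];

   rb_increment_wp(rbh, event_size); // in bytes
   dma_read_index += event_size / 4; // in 32-bit words
}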
  357 | 02 Mar 2007 | Kevin Lynch | Forum | event builder scalability
> Hi there:
> I have a question: is there anybody out there running MIDAS with an event builder
> that assembles events from more than just a few front ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
> 
> Cheers
>  Piotr

Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate.  If I'm remembering
correctly, the event builder handles about 30-40MB/s.  You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details.  Volodya solved a significant number of throughput-related
bottlenecks in the year leading up to our 2006 run.
  1225 | 15 Dec 2016 | Kevin Giovanetti | Bug Report | midas.h error
Creating a frontend on Mac OS X Sierra (10.12):
I include the midas.h file, and when compiling with Xcode I get an error based on
this entry in midas.h:

#if !defined(OS_IRIX) && !defined(OS_VMS) && !defined(OS_MSDOS) && \
    !defined(OS_UNIX) && !defined(OS_VXWORKS) && !defined(OS_WINNT)
#error MIDAS cannot be used on this operating system
#endif


Perhaps I should not use Xcode?
Perhaps I won't need midas.h?

The MIDAS system is running on my Mac, but I need to add a very simple frontend
for testing and I encountered this error.
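
The check fails because none of the OS_xxx macros is defined; the MIDAS build normally passes the target OS on the compiler command line. Assuming the usual MIDAS convention of defining OS_DARWIN on a Mac (which midas.h then maps to OS_UNIX), a likely fix is:

// in Xcode: add OS_DARWIN to the "Preprocessor Macros" build setting,
// or equivalently compile with:
//    cc -DOS_DARWIN -I$MIDASSYS/include -c frontend.c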
  1404 | 30 Oct 2018 | Joseph McKenna | Bug Report | Side panel auto-expands when history page updates

One can collapse the side panel when looking at history pages with the button in
the top left, great! We want to see many pages, so screen real estate is important.

The issue we face is that when the page refreshes, the side panel expands. Can
we make the panel state more 'sticky'?

Many thanks
Joseph (ALPHA)

Version: 	2.1
Revision: 	Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
on branch feature/midas-2017-10
  1406 | 31 Oct 2018 | Joseph McKenna | Bug Report | Side panel auto-expands when history page updates
> > 
> > 
> > One can collapse the side panel when looking at history pages with the button in
> > the top left, great! We want to see many pages so screen real estate is important
> > 
> > The issue we face is that when the page refreshes, the side panel expands. Can
> > we make the panel state more 'sticky'?
> > 
> > Many thanks
> > Joseph (ALPHA)
> > 
> > Version: 	2.1
> > Revision: 	Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
> > on branch feature/midas-2017-10
> 
> Hi Joseph,
> 
> In principle a page refresh should now not be necessary, since pages should automatically reload 
> the content which changes. If a custom page needs a reload, it is not well designed. If necessary, I 
> can explain the details. 
> 
> Anyhow I implemented your "stickyness" of the side panel in the last commit to the develop branch.
> 
> Best regards,
> Stefan

Hi Stefan,

I apologise for misusing the word refresh. The re-appearing sidebar was also seen with the automatic
reload. I have implemented your fix here and it now works great!

Thank you very much!
Joseph
  Draft | 14 Oct 2019 | Joseph McKenna | Forum | tmfe.cxx - Future frontend design
Hi,

I have been looking at the 2019 workshop slides; I am interested in the C++ future of MIDAS. 

I am quite interested in using the object oriented 


ALPHA will start data taking in 2021
  1727 | 18 Oct 2019 | Joseph McKenna | Info | sysmon: New system monitor and performance logging frontend added to MIDAS

I have written a system monitor tool for MIDAS that has been merged into the develop branch today: sysmon

https://bitbucket.org/tmidas/midas/pull-requests/8/system-monitoring-a-new-frontend-to-log/diff

To use it, simply run the new program
sysmon
on any host that you want to monitor; no configuration required.




The program is a frontend for MIDAS; there is no need for configuration, as upon initialisation it builds a history display for you. Simply run one instance per machine you want to monitor. By default, it only logs once per 10 seconds.

The equipment name is derived from the hostname, so multiple instances can be run across multiple machines without conflict. A new history display will be created for each host.

sysmon uses the /proc pseudo-filesystem, so unfortunately only linux is supported. It does however work with multiple architectures, so x86 and ARM processors are supported.

If the build machine has NVIDIA drivers installed, an additional version of sysmon gets built: sysmon-nvidia. This will log the GPU temperature and usage, as well as CPU, memory and swap. A host should only run either sysmon or sysmon-nvidia, not both.

elog:1727/1 shows the History Display generated by sysmon-nvidia. sysmon would only generate the first two displays (sysmon/localhost and sysmon/localhost-CPU)
Attachment 1: sysmon-gpu.png
  1746 | 03 Dec 2019 | Joseph McKenna | Info | mfe.c: MIDAS frontend's 'Equipment name' can embed hostname, determined at run-time
A little-advertised feature of the modifications needed to support the msysmon program is 
that MIDAS equipment names can embed the hostname of the system 
running the frontend, substituted at run-time (in register_equipment(void)).

https://midas.triumf.ca/MidasWiki/index.php/Equipment_List_Parameters#Equipment_Name

A special string ${HOSTNAME} can be put in any position in the equipment name. It will 
be replaced with the hostname of the computer running the frontend at run-time. Note, 
the frontend_name string will be trimmed down to 32 characters.
Example usage: msysmon


EQUIPMENT equipment[] = {

  { "${HOSTNAME}_msysmon",  /* equipment name */
    {
      EVID_MONITOR, 0,      /* event ID, trigger mask */
      "SYSTEM",             /* event buffer */
      EQ_PERIODIC,          /* equipment type */
      0,                    /* event source */
      "MIDAS",              /* format */
      TRUE,                 /* enabled */
      RO_ALWAYS,            /* read always */
      10000,                /* read period, milliseconds */
      0,                    /* stop run after this event limit */
      0,                    /* number of sub events */
      1,                    /* history period */
      "", "", ""
    },
    read_system_load,       /* readout routine */
  },
  { "" }
};
  1891 | 01 May 2020 | Joseph McKenna | Forum | Taking MIDAS beyond 64 clients

Hi all,

I have been experimenting with a frontend solution for my experiment 
(ALPHA). The intention is to replace how we log data from PCs running LabVIEW. 

I am at the proof-of-concept stage. So far I have some promising 
performance, able to handle 10-100x more data in my test setup (the current 
limitation is just network bandwidth; MIDAS is impressively efficient). 

==========================================================================

Our experiment has many PCs using LabVIEW which all log to MIDAS; the 
experiment has grown such that we need some sort of load balancing in our 
frontend.

The concept was to have a 'supervisor frontend' and an array of 'worker 
frontend' processes. 
-A LabVIEW client would connect to the supervisor, then be referred to a 
worker frontend for data logging. 
-The supervisor could start a 'worker frontend' process as the demand 
required.

To increase accountability within the experiment, I intend to have a 'worker 
frontend' per connecting PC. Then any rogue behavior would be clear from the 
MIDAS front page.

Presently there are around 20-30 of these LabVIEW PCs, but given how the group 
is growing, I want to be sure that my data logging solution will be viable 
for the next 5-10 years. With the increased use of single board computers, I 
chose the target of benchmarking up to 1000 worker frontends... but I quickly 
hit the '64 MAX CLIENTS' and '64 RPC CONNECTION' limit. Ok...

branching and updating these limits:
https://bitbucket.org/tmidas/midas/branch/experimental-beyond_64_clients

I have two commits. 
1. Update the memory layout assertions and use MAX_CLIENTS as a variable:
https://bitbucket.org/tmidas/midas/commits/302ce33c77860825730ce48849cb810cf366df96?at=experimental-beyond_64_clients
2. Change MAX_CLIENTS and MAX_RPC_CONNECTION:
https://bitbucket.org/tmidas/midas/commits/f15642eea16102636b4a15c84113309696ce3df1?at=experimental-beyond_64_clients
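
For context, the second commit amounts to bumping two constants in midas.h (64 is the current default; the new values below are illustrative):

/* midas.h, current defaults */
#define MAX_CLIENTS        64
#define MAX_RPC_CONNECTION 64

/* experimental branch: raise the limits; note this changes the ODB shared
   memory layout and therefore breaks existing ODB files (see below) */
#define MAX_CLIENTS        1024
#define MAX_RPC_CONNECTION 1024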

Unintended side effects:
I break compatibility of existing ODB files... the database layout has 
changed and I read my old ODB as corrupt. In my test setup I can start from 
scratch but this would be horrible for any existing experiment.

Edit: I noticed 'make testdiff' pipeline is failing... also fails locally... 
investigating

Early performance results:
In early tests, ~700 PCs logging 10 unique arrays of 10 doubles into 
Equipment variables in the ODB seems to perform well... All transactions 
from client PCs are finished within a couple of ms or less

==========================================================================

Questions:

Does the community here have strong opinions about increasing the 
MAX_CLIENTS and MAX_RPC_CONNECTION limits? 
Am I looking at this problem in a naive way?


Potential solutions other than increasing the MAX_CLIENTS limit:
-Make worker threads inside the supervisor (not a separate process). I am 
using TMFE, so I can dynamically create equipment. I have not yet taken a 
deep dive into how the multithreading is implemented.
-One could have a round-robin system to load balance between a limited pool 
of 'worker frontend' processes. I don't like this solution, as I want to be 
able to clearly see which client PCs have been set up to log too much data.

==========================================================================