    Reply  05 Dec 2025, Konstantin Olchanski, Info, MIDAS RPC add support for std::string and std::vector<char> 
> > This is moving slowly. I now have RPC caller side support for std::string and 
> > std::vector<char>. RPC server side is next. K.O.
> The RPC_CXX code is now merged into MIDAS branch feature/rpc_call_cxx.
> This code fully supports passing std::string and std::vector<char> through the MIDAS RPC in both directions.

The RPC_CXX code is now merged into MIDAS develop, commit 34cd969fbbfecc82c290e6c2dfc7c6d53b6e0121.

There is a new RPC parameter encoder and decoder. To avoid unexpected breakage, it is only used for newly added RPC_CXX 
calls, but I expect to eventually switch all RPC calls to use the new encoder and decoder.

As examples of the new code, see RPC_JRPC_CXX and RPC_BRPC_CXX; they return RPC data in an std::string and an std::vector<char> 
respectively. The amount of returned data is unlimited, and the mjsonrpc parameter "max_reply_length" is no longer needed/used.

Also included is RPC_BM_RECEIVE_EVENT_CXX; it receives event data as an std::vector<char>, so the maximum event size is no 
longer limited and ODB /Experiment/MAX_EVENT_SIZE is no longer needed/used. To avoid unexpected breakage, this new code is not 
enabled yet.
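
On the caller side, receiving an event into an std::vector<char> looks roughly like the sketch below. It uses 
bm_receive_event_vec() from midas.h; the 100 ms timeout value and the connection to the new RPC_BM_RECEIVE_EVENT_CXX 
path are my assumptions, since (as noted above) the new code is not enabled yet.

#include "midas.h"
#include <vector>

// Sketch: read one event into a self-sizing buffer. No MAX_EVENT_SIZE
// limit applies, because the vector grows to fit the event.
void read_one_event(INT buffer_handle)
{
   std::vector<char> event;
   int status = bm_receive_event_vec(buffer_handle, &event, 100); // timeout in ms (assumed)
   if (status == BM_SUCCESS) {
      // event.size() is the full event size, header included
   }
}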

K.O.
Entry  05 Dec 2025, Konstantin Olchanski, Bug Fix, update of JRPC and BRPC 
With the merge of the RPC_CXX code, MIDAS RPC can now return data of arbitrarily large size, and I am 
proceeding to update the corresponding mjsonrpc interface.

If you use JRPC and BRPC in the tmfe framework, you need to do nothing: the updated RPC handlers 
are already tested and merged, and the only effect is that large data returned by HandleRpc() and 
HandleBinaryRpc() will no longer be truncated.

If you use your own handlers for JRPC and BRPC, please add the RPC handlers as shown at the end 
of this message. There is no need to delete/remove the old RPC handlers.

To avoid unexpected breakage, the new code is not yet enabled by default, but you can start 
using it immediately by replacing the mjsonrpc call:

mjsonrpc_call("jrpc", ...

with

mjsonrpc_call("jrpc_cxx", ...

ditto for "brpc", see resources/example.html for complete code.

After migration is completed, if you have some old frontends where you cannot add the new RPC 
handlers, you can still call them using the "jrpc_old" and "brpc_old" mjsonrpc calls.

I will cut over the default "jrpc" and "brpc" calls to the new RPC_CXX in about a month or so.

If you need more time, please let me know.

K.O.

Register the new RPCs:

   cm_register_function(RPC_JRPC_CXX, rpc_cxx_callback);
   cm_register_function(RPC_BRPC_CXX, binary_rpc_cxx_callback);

and add the handler functions (see tmfe.cxx for a full example):

static INT rpc_cxx_callback(INT index, void *prpc_param[])
{
   const char* cmd  = CSTRING(0);       // RPC command
   const char* args = CSTRING(1);       // RPC arguments
   std::string* pstr = CPSTDSTRING(2);  // reply string, unlimited size

   *pstr = "my return data";

   return RPC_SUCCESS;
}

static INT binary_rpc_cxx_callback(INT index, void *prpc_param[])
{
   const char* cmd  = CSTRING(0);            // RPC command
   const char* args = CSTRING(1);            // RPC arguments
   std::vector<char>* pbuf = CPSTDVECTOR(2); // binary reply, unlimited size

   const char data[] = "my return data";
   pbuf->assign(data, data + sizeof(data));

   return RPC_SUCCESS;
}

K.O.
Entry  05 Dec 2025, Konstantin Olchanski, Info, address and thread sanitizers 
I added cmake support for the thread sanitizer (address sanitizer was already 
there). Use:

make cmake -j YES_THREAD_SANITIZER=1 # (or YES_ADDRESS_SANITIZER=1)

However, the thread sanitizer is broken on U-24: programs refuse to start ("FATAL: 
ThreadSanitizer: unexpected memory mapping") and it reports what look like bogus 
complaints about mutexes ("unlock of an unlocked mutex (or by a wrong thread)").

On macOS, the thread sanitizer does not report any errors or warnings at all.

P.S.

The Undefined Behaviour Sanitizer (UBSAN) complained about a few places where 
functions could have been called with NULL pointer arguments; I added some 
assert()s to make it happy.
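
As an illustration (not the actual midas code), this is the kind of pattern UBSAN flags and the kind 
of assert() that silences it; calling memcpy() with a NULL pointer is undefined behaviour even for a 
size of zero. The helper function here is hypothetical:

#include <cassert>
#include <cstring>

// UBSAN flags memcpy() called with NULL arguments, so the asserts
// document and enforce the non-NULL contract:
void copy_data(char* dst, const char* src, size_t n)
{
   assert(dst != NULL);
   assert(src != NULL);
   memcpy(dst, src, n);
}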

K.O.
    Reply  07 Dec 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline 
>  #include "manalyzer.h"
>  #include "midasio.h"
> +#include "odbxx.h"

This commit broke the standalone ("no MIDAS") build of manalyzer. Either odbxx has to be an independent package 
(like mvodb) or it has to be conditioned on HAVE_MIDAS.

(this was flagged by a failed bitbucket build of rootana)

K.O.
Entry  08 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump 
I was testing odbxx with manalyzer, decided to print an odb value in every event, 
and it worked fine in online mode, but bombed out when running from a data file 
(JSON ODB dump). The following code has a memory leak. No idea if XML ODB dump 
has the same problem.

int memory_leak()
{
   midas::odb::set_odb_source(midas::odb::STRING, std::string(run.fRunInfo->fBorOdbDump.data(), run.fRunInfo->fBorOdbDump.size()));

   while (1) {
      int time = midas::odb("/Runinfo/Start time binary");
      printf("time %d\n", time);
   }
}

K.O.
    Reply  08 Dec 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline 
> >  #include "manalyzer.h"
> >  #include "midasio.h"
> > +#include "odbxx.h"
> 
> This commit broke the standalone ("no MIDAS") build of manalyzer. Either odbxx has to be an independant package 
> (like mvodb) or it has to be conditioned on HAVE_MIDAS.
> 
> (this was flagged by failed bitbucket build of rootana)

Corrected. You can only use odbxx if manalyzer is built with HAVE_MIDAS. (mvodb is an independent package and is 
always available, no need to pull and build the full MIDAS.)

Also notice how I now initialize odbxx from fBorOdbDump and fEorOdbDump. I also tested it against multithreaded 
access, and it works (as Stefan promised).

K.O.
    Reply  08 Dec 2025, Konstantin Olchanski, Suggestion, manalyzer root output file with custom filename including run number 
I updated the root helper constructor to give the user more control over ROOT output file names.

You can now change it to anything you want in the module run constructor, see manalyzer_example_esoteric.cxx

Is this good enough?

struct ExampleE1: public TARunObject
{
   ExampleE1(TARunInfo* runinfo)
      : TARunObject(runinfo)
   {
#ifdef HAVE_ROOT
      if (runinfo->fRoot)
         runinfo->fRoot->fOutputFileName = "my_custom_file_name.root";
#endif
   }
};

K.O.
    Reply  09 Dec 2025, Konstantin Olchanski, Bug Report, manalyzer fails to compile on some systems because of missing #include <cmath> 
> /code/midas/manalyzer/manalyzer.cxx:799:27: error: ‘pow’ was not declared in this scope
>   799 |       bins[i] = TimeRange*pow(1.1,i)/pow(1.1,Nbins);

math.h added, pushed. Nice catch.

The implicit include of math.h came through TFile.h (ROOT v6.34.02); perhaps you have a newer ROOT
and they jiggled the include files somehow.

TFile.h -> TDirectoryFile.h -> TDirectory.h -> TNamed.h -> TString.h -> TMathBase.h -> cmath -> math.h

K.O.
    Reply  09 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump 
> Thanks for reporting this. It was caused by a
> 
>   MJsonNode* node = MJsonNode::Parse(str.c_str());
> 
> not followed by a 
> 
>   delete node;
> 
> I added that now in odb::odb_from_json_string(). Can you try again?
> 
> Stefan

Close, but no cigar: the node you delete is not the node you got from Parse(), see "node = subnode;".

If I delete the node returned by Parse(), I confirm the memory leak is gone.

BTW, it looks like we are parsing the whole JSON ODB dump (200k+) on every odbxx access. Can you parse it just once?
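
A minimal sketch of the "parse once" idea, as a hypothetical cache wrapper (this is not the actual 
odbxx code; MJsonNode::Parse() and the single delete are the real mjson API, though the header name 
may differ in your tree):

#include <string>
#include "mjson.h"

// Hypothetical cache: parse the ODB dump on first access, reuse the tree after.
class OdbDumpCache {
   std::string fJson;
   MJsonNode*  fTree = nullptr;
public:
   explicit OdbDumpCache(const std::string& json) : fJson(json) {}
   ~OdbDumpCache() { delete fTree; } // the single delete of the Parse() result
   const MJsonNode* Tree() {
      if (!fTree)
         fTree = MJsonNode::Parse(fJson.c_str()); // parse only once
      return fTree;
   }
};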

K.O.
    Reply  11 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump 
> > BTW, it looks like we are parsing the whole JSON ODB dump (200k+) on every odbxx access. Can you parse it just once?
> You are right. I changed the code so that the dump is only parsed once. Please give it a try.

Confirmed fixed, thanks! There are 2 small changes I made in odbxx.h, please pull.

K.O.
    Reply  11 Dec 2025, Konstantin Olchanski, Bug Report, manalyzer fails to compile on some systems because of missing #include <cmath> 
> TFile.h -> TDirectoryFile.h -> TDirectory.h -> TNamed.h -> TString.h -> TMathBase.h -> cmath -> math.h

Reading the ROOT release notes: 6.38 removed TMathBase.h from TString.h, with a warning "This change may cause 
errors during compilation of ROOT-based code". Upright citizens, nice guys!

> Thanks. Are you happy for me to update the submodule commit in Midas to use this fix? I should have sufficient permission if you agree.

I am doing some last minute tests, will pull it into midas and rootana later today.

K.O.
    Reply  12 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump 
> > Confirmed fixed, thanks! There are 2 small changes I made in odbxx.h, please pull.
> There was one missing enable_jsroot in manalyzer, please pull yourself.

pulled, pushed to rootana, thanks for fixing it!

K.O.
Entry  05 Feb 2026, Konstantin Olchanski, Bug Report, omnibus bugs from running DarkLight 
We finished running the DarkLight experiment and I am reporting accumulated bugs that we have run into.

1) history plots at 12 hrs and 24 hrs tend to hang with "page not responsive". Most plots have 16-20 variables, 
which are recorded at a 1/sec interval. (Yes, we must see all the variables at the same time, and yes, we want 
to record them with fine granularity.)

2) starting runs gets into a funny mode if a GEM frontend aborts (hardware problems): the transition page 
reports "wwrrr, timeout 0" and stays stuck forever, and "cancel transition" does nothing. Observe that it goes 
from "w" (waiting) to "r" (RPC running) without a "c" (connecting...), and the timeout should never be zero 
(it is 120 sec in ODB).

3) ODB editor: clicking on a hex number versus a decimal number no longer allows editing in hex. Stefan 
implemented this useful feature and it worked for a while, but it now seems broken.

4) ODB editor: "right click" to "delete" or "rename" a key does not work. The right-click menu disappears 
immediately before I can use it (dl-server-2); click on an item (it is now blue), and the right-click menu 
disappears before I can use it (daq17). It looks like a timing or race condition.

5) ODB editor "create link": the link target name is limited to 32 bytes, so links cannot be created 
(dl-server-2); ok on daq17 with current MIDAS.

6) MIDAS on dl-server-2 is "installed" in such a way that there is no connection to the git repository and no 
way to tell what git checkout it corresponds to. The Help page just says "branch master", and git-revision.h is 
empty. We should discourage such use of MIDAS and promote our "normal way", where for all MIDAS binary programs 
we know what source code and what git commit was used to build them.

6a) MIDAS on dl-server-2 had a pretty much non-functional history display. I reported it here, Stefan provided 
a fix, I manually retrofitted it into the dl-server-2 MIDAS, and we were able to run the experiment. (good)

6b) bug (5) suggests that there are more bugs being introduced and fixed without any notice to other midas 
users (via this forum or via the bitbucket bug tracker).

K.O.
    Reply  16 Apr 2026, Konstantin Olchanski, Forum, Migrate Legacy code to current Midas version 
> I am migrating the full CRIPT muon tomography detector from MIDAS (SVN Rev. 
> 5238, circa 2012) to a more modern release.
> The current system runs on Scientific Linux 6 and very old hardware.

Right, good vintage midas and linux. But in the current security environment,
we must run a currently supported OS (and MIDAS), and we must never fall off
the yearly/bi-yearly OS upgrade treadmill.

How old is your computer hardware, and do you plan to update it as well? If your OS
is installed on an HDD, moving to an SSD would definitely be good. If you are short
on money, a Raspberry Pi 5 with 16GB RAM may be good enough for what you have.

Anyhow, the new OS choice would be Ubuntu 24 or Debian 13. I do not recommend Red Hat based OSes (vanilla 
RHEL, Fedora, Alma, Rocky); they have become niche OSes with minimal vendor and community support.

> Due to substantial changes in the MIDAS codebase over the years ...

The big change in MIDAS land is the move to C++, then C++11, then C++17, and the move from vanilla make to 
cmake.

The MIDAS API has been reasonably stable since then, but very old MIDAS frontends would fail to build with the 
latest compilers because of changes in the C++ language and changes in the C and C++ libraries.

> I have encountered multiple compatibility issues during the migration. I have also attempted to 
> build and run the legacy MIDAS version and the front-end code using GCC 4.8 on a modern Linux system 
> (Ubuntu 24.04), but without success.

This is non-viable: the latest C/C++ compilers reject perfectly good SL6-era C/C++ code, old MIDAS would 
not compile, and old frontends would not compile.

> Could you please advise on the recommended approach ...

What you are doing, we have done several times with TRIUMF experiments,
updating SL6 and CentOS-7 MIDAS instances to current MIDAS, C++ and OS:

1) new computer with Ubuntu 24 (or Debian or Raspbian). (U-26 will come out roughly in August, for the 
purposes of this discussion.)
2) new MIDAS. We generally recommend the head of the develop branch, but older tagged versions are 
okay, too.
3) apache https proxy, etc, for secure browser connections, see
https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu#Install_apache_httpd_proxy_for_midas_and_elog
4) reload your old ODB into the new MIDAS
5) your old history, etc should work
6a) build your old frontends. This will be a chore, but if you look at the compile errors, you will 
see that most changes are very mechanical (i.e. const char*, etc, see the sketch after this list). The 
biggest hassle is to make your old C/C++ code build with a current C++17 compiler; a smaller hassle is 
to update for minor changes in the mfe.h API.
6b) bite the bullet and rewrite your frontends using the C++ tmfe API. Start with 
tmfe_example_everything.cxx, remove the unnecessary, add the required; it is pretty straightforward, 
and I can guide you through this (contact me directly by email).
7) minor tweaks to mlogger, mhttpd and history settings
8) rewrite all custom pages to the current mjsonrpc API
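
As an illustration of the mechanical fixes in step 6a, here is the most common one (a sketch; the 
variable is just an example): modern C++ no longer allows binding a string literal to a non-const 
pointer.

// Old SL6-era code, rejected by current C++17 compilers:
//   char *frontend_name = "my_frontend";
//
// Mechanical fix, accepted:
const char *frontend_name = "my_frontend";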

Best of luck, if you have more questions, please ask here or by direct email.

K.O.
Entry  16 Apr 2026, Konstantin Olchanski, Suggestion, mhttpd user permissions 
We had our periodic discussion on MIDAS web page user permissions. (I cannot 
find the link to the previous discussions, ouch!)

Currently any logged in user can do anything - start stop runs, start/stop 
programs, edit odb, etc.

Regularly, we have experiments that ask about "read-only" access to MIDAS and 
about more granular user permissions.

In the past, I suggested a permissions scheme that is easy to implement
with the current code base. The permission level for each user can
be stored in ODB and allow:

level 0 - root user, as now
level 1 - experiment user: any restrictions are implemented in javascript, i.e. 
all custom pages work as they do now, but (for example) the odb editor is read-only
level 2 - experiment operator: restrictions are implemented in the mjsonrpc 
code, i.e. can start/stop runs and start/stop programs, but cannot make any 
changes, i.e. cannot write to ODB
level 3 - read-only user: only mjsonrpc calls that do not change anything are 
permitted.

(To implement level 2, obviously, the "start run" mjsonrpc call has to be changed to accept the run 
comments; the current code writes them to odb directly and that would fail.)
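
A minimal sketch of what the level check could look like, with a hypothetical enum mirroring the list 
above (this is not existing mhttpd code):

// Hypothetical permission levels, mirroring the proposed scheme:
enum PermissionLevel {
   kRoot     = 0, // root user, full access, as now
   kUser     = 1, // restrictions enforced in javascript only
   kOperator = 2, // may start/stop runs and programs, no ODB writes
   kReadOnly = 3  // only non-modifying mjsonrpc calls permitted
};

// Check that a modifying mjsonrpc call (e.g. an ODB write) would perform:
bool can_write_odb(PermissionLevel level)
{
   return level <= kUser; // levels 0 and 1 keep write access; level 1
                          // restrictions live in javascript, not here
}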

The first step towards implementing this was made today. Ben and Derek figured out 
the apache incantation to pass the logged-in user name to MIDAS, and I added 
decoding of this user name in mhttpd. I do not do anything with it yet.

In apache config, one change is needed:

> For Apache, add this line in your VirtualHost section (tested as working):
> RequestHeader set X-Remote-User %{REMOTE_USER}s

https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu#Install_apache_httpd_proxy_for_midas_and_elog

K.O.
    Reply  24 Apr 2026, Konstantin Olchanski, Bug Report, increasing the max number of hot links in ODB 
> when I attempted to increase the max number of hotlinks in ODB, defined as 
> #define MAX_OPEN_RECORDS       256           /**< number of open DB records   */
> assert(sizeof(DATABASE_CLIENT) == 2112);

Yes, it is intended to work like this. If you change MAX_OPEN_RECORDS (and some other settings),
you break binary compatibility with standard MIDAS, and the asserts inform you about it.

It is not a light step to take: you have to recompile all MIDAS clients, and if you miss
one and run it against your non-standard MIDAS, everything will go kaboom;
there is no safety net against this.

In the ALPHA experiment at CERN, we have been running for years with MAX_OPEN_RECORDS set to 2560,
and it works. You have to change both MAX_OPEN_RECORDS in midas.h and the expected values
in the assert() statements.

You do not need to guess or compute the new correct values yourself; the code to print
them is right there and it is easy to enable.

Replacing the numeric constants with computed values would of course completely defeat
the purpose of the tests, which is to catch the situation where, by mistake or by ignorance
(or by miscompilation), the sizes of critical data structures become different from those
normally expected.
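
A standalone illustration of this pattern (EXAMPLE_STRUCT is a stand-in, not the real 
DATABASE_CLIENT; the sizes are made up):

#include <cassert>
#include <cstdio>

struct EXAMPLE_STRUCT { char name[32]; int open_records[256]; };

int main()
{
   // The "code to print them": enable this to learn the new constant
   // after changing a #define that affects the layout.
   printf("sizeof(EXAMPLE_STRUCT) = %d\n", (int)sizeof(EXAMPLE_STRUCT));
   // Hard-coded on purpose: a computed value would defeat the test.
   assert(sizeof(EXAMPLE_STRUCT) == 1056);
   return 0;
}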

K.O.
    Reply  27 Apr 2026, Konstantin Olchanski, Bug Report, increasing the max number of hot links in ODB 
> Indeed, updating MIDAS clients on each and every RPI etc in a running experiment may be a real challenge.

actually, only local clients must be rebuilt; remote clients connecting through the mserver do not care about 
the ODB internal structure.

> Thinking forward - would it help if the ODB clients, upon initial connection but before doing anything else 
> were reading the ODB parameters from the ODB itself, so the clients were "learning" about the ODB structure 
> dynamically, at run time? Or that knowledge has to be static ?

unfortunately, the "open records" structure is allocated at compile time inside the ODB header;
any change to this would break binary compatibility.

I think it is possible to allocate "space for additional open records" in the ODB data area
and have the ODB open records code use it in addition to the compile-time allocated
space in the database header. This would also work for extending MAX_CLIENTS.

Of course in this approach, old midas clients would see only the clients and open records
in the database header, while new midas clients would see the additional data.

It is not super hard to add this code...

K.O.
    Reply  27 Apr 2026, Konstantin Olchanski, Bug Report, increasing the max number of hot links in ODB 
> I wonder why one needs more than 256 hotlinks at all.

I confirm that ALPHA is running with MAX_OPEN_RECORDS changed from 256 to 2048;
this is the only experiment I know of that had to increase any MIDAS ODB defaults.

The reason for this is mlogger: it opens an open record for each variable in each equipment.

This should be changed to one db_watch per equipment. We talked about it, but I guess we never did it.
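
A minimal sketch of the "one db_watch per equipment" idea (the handler name and ODB path are made up 
for illustration; db_watch() and db_find_key() are the real midas.h calls):

#include "midas.h"

// One callback for the whole Variables subtree of one equipment,
// instead of one open record per variable:
static void variables_changed(INT hDB, INT hKey, INT index, void* info)
{
   // mlogger would mark this equipment's history as needing an update
}

void watch_one_equipment(HNDLE hDB)
{
   HNDLE hKey;
   db_find_key(hDB, 0, "/Equipment/example/Variables", &hKey);
   db_watch(hDB, hKey, variables_changed, NULL);
}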

I think this task just went almost to the top of my MIDAS to-do list.

K.O.
    Reply  14 Feb 2020, Konrad Briggl, Forum, Writting Midas Events via FPGAs 
Hello Stefan,
is there a difference for the later data processing (after writing the ring buffer blocks)
if we write single events or multiple in one rb_get_wp - memcopy - rb_increment_wp cycle?
Both Marius and me have seen some inconsistencies in the number of events produced that is reported in the status page when writing multiple events in one go,
so I was wondering if this is due to us treating the buffer badly or the way midas handles the events after that.

Given that we produce the full event in our (FPGA) domain, an option would be to always copy one event at a 
time from the DMA buffer to the midas system buffer in a loop.
The question is if there is a difference (for midas) between
[pseudo code, much simplified]

while (dma_read_index < last_dma_write_index) {
  if (rb_get_wp(rbh, (void **)&pdata, 10) != DB_SUCCESS) {
    dma_read_index += event_size;
    continue;
  }
  copy_n(dma_buffer, pdata, event_size);
  rb_increment_wp(rbh, event_size);
  dma_read_index += event_size;
}

and

while (dma_read_index < last_dma_write_index) {
  if (rb_get_wp(rbh, (void **)&pdata, 10) != DB_SUCCESS) {
     ...
  }
  total_size = max_n_events_that_fit_in_rb_block();
  copy_n(dma_buffer, pdata, total_size);
  rb_increment_wp(rbh, total_size);
  dma_read_index += total_size;
}

Cheers,
Konrad

> The rb_xxx function are (thoroughly tested!) robust against high data rate given that you use them as intended:
> 
> 1) Once you create the ring buffer via rb_create(), specify the maximum event size (overall event size, not bank size!). Later there is no protection any more, so if you obtain pdata from rb_get_wp, you can of course write 4GB to pdata, overwriting everything in your memory, causing a total crash. It's your responsibility to not write more bytes into pdata than 
> what you specified as max event size in rb_create()
> 
> 2) Once you obtain a write pointer to the ring buffer via rb_get_wp, this function might fail when the receiving side reads data slower than the producing side, simply because the buffer is full. In that case the producing side has to wait until space is freed up in the buffer by the receiving side. If your call to rb_get_wp returns DB_TIMEOUT, it means that the 
> function did not obtain enough free space for the next event. In that case you have to wait (like ss_sleep(10)) and try again, until you succeed. Only when rb_get_wp() returns DB_SUCCESS, you are allowed to write into pdata, up to the maximum event size specified in rb_create of course. I don't see this behaviour in your code. You would need something 
> like
> 
> do {
>    status = rb_get_wp(rbh, (void **)&pdata, 10);
>    if (status == DB_TIMEOUT)
>       ss_sleep(10);
>    } while (status == DB_TIMEOUT);
> 
> Best,
> Stefan
> 
> 
> > Dear all,
> > 
> > we creating Midas events directly inside a FPGA and send them off via DMA into the PC RAM. For reading out this RAM via Midas the FPGA sends as a pointer where it has written the last 4kB of data. We use this pointer for telling the ring buffer of midas where the new events are. The buffer looks something like:
> > 
> > // event 1
> > dma_buf[0] = 0x00000001; // Trigger and Event ID
> > dma_buf[1] = 0x00000001; // Serial number
> > dma_buf[2] = TIME; // time
> > dma_buf[3] = 18*4-4*4; // event size
> > dma_buf[4] = 18*4-6*4; // all bank size
> > dma_buf[5] = 0x11; // flags
> > // bank 0
> > dma_buf[6] = 0x46454230; // bank name
> > dma_buf[7] = 0x6; // bank type TID_DWORD
> > dma_buf[8] = 0x3*4; // data size
> > dma_buf[9] = 0xAFFEAFFE; // data
> > dma_buf[10] = 0xAFFEAFFE; // data
> > dma_buf[11] = 0xAFFEAFFE; // data
> > // bank 1
> > dma_buf[12] = 0x46454231; // bank name
> > dma_buf[13] = 0x6; // bank type TID_DWORD
> > dma_buf[14] = 0x3*4; // data size
> > dma_buf[15] = 0xAFFEAFFE; // data
> > dma_buf[16] = 0xAFFEAFFE; // data
> > dma_buf[17] = 0xAFFEAFFE; // data
> > 
> > // event 2
> > .....
> > 
> > dma_buf[fpga_pointer] = 0xXXXXXXXX;
> > 
> > 
> > And we do something like:
> > 
> > while (true) {
> >    // obtain buffer space
> >    status = rb_get_wp(rbh, (void **)&pdata, 10);
> >    fpga_pointer = fpga.read_last_data_add();
> > 
> >    wlen = last_fpga_pointer - fpga_pointer; // in 32 bit words
> >    copy_n(&dma_buf[last_fpga_pointer], wlen, pdata);
> >    rb_status = rb_increment_wp(rbh, wlen * 4); // in bytes
> > 
> >    last_fpga_pointer = fpga_pointer;
> > 
> > Leaving out the case where the dma_buf wraps around, this works fine for a small data rate. But if we increase the rate, the fpga_pointer also increases really fast and wlen gets quite big. Actually it gets bigger than max_event_size, which is checked in rb_increment_wp, leading to an error. 
> > 
> > The problem is that the event size is actually not too big, but we have multiple events in the buffer which are read by midas in one step. So we think that in this case the function rb_increment_wp is comparing the wrong thing. Also increasing the max_event_size does not help.
> > 
> > Remark: dma_buf is volatile so memcpy is not possible here.
> > 
> > Cheers,
> > Marius
    Reply  02 Mar 2007, Kevin Lynch, Forum, event builder scalability 
> Hi there:
> I have a question: is there anybody out there running MIDAS with an event builder
> that assembles events from more than just a few front ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
> 
> Cheers
>  Piotr

Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate. If I'm remembering
correctly, the event builder handles about 30-40MB/s. You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details. Volodya solved a significant number of throughput-related
bottlenecks in the year leading up to our 2006 run.