03 Dec 2025, Konstantin Olchanski, Bug Fix, no more breakage in history display when panning
|
In the DL experiment (unknown version of midas, likely mid-summer 2025), we see artefacts in the
history display where pieces of the data seem to be missing, there are gaps in the graphs. Reloading the
page restores the correct display, confirming that in fact there are no gaps in the data. This made history
plots very painful to use.
This problem does not exist anymore in the latest midas; it was most likely fixed around September 4,
2025, and had probably been broken since at least February 2025 (the previous changes to this file).
If you see this problem, updating mhistory.js to the latest version is probably enough to fix it.
K.O. |
03 Dec 2025, Konstantin Olchanski, Suggestion, Improve process for adding new variables that can be shown in history plots
|
> 3b) mlogger used to rescan ODB each time a new run is started, this code was removed
One more kink turned up.
One of the computers ran out of disk space; mlogger dutifully recorded the "disk full" errors to midas.log and
disabled writing to history (all history variables).
This was only noticed about one week later (it is not a busy computer).
In the past, when mlogger reopened the history at each begin of run, the "disk full" errors would have shown
up in midas.log and somebody would have noticed. Or the problem would have gone away if disk space was cleared
up.
Now, mlogger just silently continues not writing to history. There is no ongoing error message and no
ongoing alarm; the only sign of trouble is empty history plots and history files that do not grow.
Perhaps we should add an mlogger action that asks the history, "are you ok?", and reports in midas.log or
raises an alarm if the history is not happy.
Or have mlogger automatically re-enable all disabled history variables at the begin of run. If these variables
are still unhappy (error writing to disk or to mysql), there will be an error message in midas.log (and an
automatic self-disable).
All these solutions should be okay as long as they do not touch disk storage and so do not cause any long
delay in the run start.
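As a rough illustration of the "are you ok?" option, a minimal sketch (hs_is_healthy() is a hypothetical
query that does not exist yet; cm_msg() and al_trigger_alarm() are the existing MIDAS calls):

#include "midas.h"   // cm_msg(), al_trigger_alarm(), AT_INTERNAL
#include <string>

// minimal sketch, not actual mlogger code
static void check_history_health()
{
   std::string reason;
   if (!hs_is_healthy(&reason)) {   // hypothetical "are you ok?" query to the history writer
      cm_msg(MERROR, "check_history_health", "history is not being written: %s", reason.c_str());
      al_trigger_alarm("History", "history is not being written", "Alarm", "", AT_INTERNAL);
   }
}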
K.O. |
05 Dec 2025, Konstantin Olchanski, Info, MIDAS RPC add support for std::string and std::vector<char>
|
> > This is moving slowly. I now have RPC caller side support for std::string and
> > std::vector<char>. RPC server side is next. K.O.
> The RPC_CXX code is now merged into MIDAS branch feature/rpc_call_cxx.
> This code fully supports passing std::string and std::vector<char> through the MIDAS RPC in both directions.
The RPC_CXX code is now merged into MIDAS develop, commit 34cd969fbbfecc82c290e6c2dfc7c6d53b6e0121.
There is a new RPC parameter encoder and decoder. To avoid unexpected breakage, it is only used for the newly added RPC_CXX
calls, but I expect to eventually switch all RPC calls to the new encoder and decoder.
As examples of the new code, see RPC_JRPC_CXX and RPC_BRPC_CXX: they return RPC data in an std::string and an std::vector<char>
respectively, the amount of returned data is unlimited, and the mjsonrpc parameter "max_reply_length" is no longer needed/used.
Also included is RPC_BM_RECEIVE_EVENT_CXX: it receives event data as an std::vector<char>, the maximum event size is no
longer limited, and ODB /Experiment/MAX_EVENT_SIZE is no longer needed/used. To avoid unexpected breakage, this new code is not
enabled yet.
K.O. |
05 Dec 2025, Konstantin Olchanski, Bug Fix, update of JRPC and BRPC
|
With the merge of the RPC_CXX code, MIDAS RPC can now return data of arbitrarily large size, and I am
proceeding to update the corresponding mjsonrpc interface.
If you use JRPC and BRPC in the tmfe framework, you need to do nothing: the updated RPC handlers
are already tested and merged, and the only effect is that large data returned by HandleRpc() and
HandleBinaryRpc() will no longer be truncated.
If you use your own handlers for JRPC and BRPC, please add the RPC handlers as shown at the end
of this message. There is no need to delete/remove the old RPC handlers.
To avoid unexpected breakage, the new code is not yet enabled by default, but you can start
using it immediately by replacing the mjsonrpc call:
mjsonrpc_call("jrpc", ...
with
mjsonrpc_call("jrpc_cxx", ...
ditto for "brpc", see resources/example.html for complete code.
After migration is completed, if you have some old frontends where you cannot add the new RPC
handlers, you can still call them using the "jrpc_old" and "brpc_old" mjsonrpc calls.
I will cut over the default "jrpc" and "brpc" calls to the new RPC_CXX in about a month or so.
If you need more time, please let me know.
K.O.
Register the new RPCs:

cm_register_function(RPC_JRPC_CXX, rpc_cxx_callback);
cm_register_function(RPC_BRPC_CXX, binary_rpc_cxx_callback);

and add the handler functions (see tmfe.cxx for the full example):

static INT rpc_cxx_callback(INT index, void *prpc_param[])
{
   const char* cmd  = CSTRING(0);
   const char* args = CSTRING(1);
   std::string* pstr = CPSTDSTRING(2);
   *pstr = "my return data";
   return RPC_SUCCESS;
}

static INT binary_rpc_cxx_callback(INT index, void *prpc_param[])
{
   const char* cmd  = CSTRING(0);
   const char* args = CSTRING(1);
   std::vector<char>* pbuf = CPSTDVECTOR(2);
   pbuf->clear();
   std::string reply = "my return data";
   pbuf->assign(reply.begin(), reply.end());
   return RPC_SUCCESS;
}
K.O. |
05 Dec 2025, Konstantin Olchanski, Info, address and thread sanitizers
|
I added cmake support for the thread sanitizer (address sanitizer was already
there). Use:
make cmake -j YES_THREAD_SANITIZER=1 # (or YES_ADDRESS_SANITIZER=1)
However, the thread sanitizer is broken on U-24: programs refuse to start ("FATAL:
ThreadSanitizer: unexpected memory mapping") and report what look like bogus
complaints about mutexes ("unlock of an unlocked mutex (or by a wrong thread)").
On macOS, the thread sanitizer does not report any errors or warnings at all.
P.S.
The Undefined Behaviour Sanitizer (UBSAN) complained about a few places where
functions could have been called with NULL pointer arguments; I added some
assert()s to make it happy.
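As an illustration of the kind of guard added (the function below is made up for illustration, it is not the
actual code):

#include <cassert>
#include <cstring>

// UBSAN flags calls like memcpy(dst, NULL, 0): the arguments are declared non-NULL,
// so passing NULL is undefined behaviour even when the size is zero
static void copy_block(char* dst, const char* src, size_t n)
{
   assert(dst != NULL);
   assert(src != NULL);
   memcpy(dst, src, n);
}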
K.O. |
07 Dec 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> #include "manalyzer.h"
> #include "midasio.h"
> +#include "odbxx.h"
This commit broke the standalone ("no MIDAS") build of manalyzer. Either odbxx has to be an independent package
(like mvodb) or it has to be conditioned on HAVE_MIDAS.
(this was flagged by failed bitbucket build of rootana)
K.O. |
08 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump
|
I was testing odbxx with manalyzer, decided to print an odb value in every event,
and it worked fine in online mode, but bombed out when running from a data file
(JSON ODB dump). The following code has a memory leak. No idea if XML ODB dump
has the same problem.
int memory_leak()
{
   midas::odb::set_odb_source(midas::odb::STRING,
      std::string(run.fRunInfo->fBorOdbDump.data(), run.fRunInfo->fBorOdbDump.size()));
   while (1) {
      int time = midas::odb("/Runinfo/Start time binary");
      printf("time %d\n", time);
   }
}
K.O. |
08 Dec 2025, Konstantin Olchanski, Suggestion, Get manalyzer to configure midas::odb when running offline
|
> > #include "manalyzer.h"
> > #include "midasio.h"
> > +#include "odbxx.h"
>
> This commit broke the standalone ("no MIDAS") build of manalyzer. Either odbxx has to be an independent package
> (like mvodb) or it has to be conditioned on HAVE_MIDAS.
>
> (this was flagged by failed bitbucket build of rootana)
Corrected. You can only use odbxx if manalyzer is built with HAVE_MIDAS. (mvodb is an independent package and is
always available, so there is no need to pull and build the full MIDAS.)
Also notice how I now initialize odbxx from fBorOdbDump and fEorOdbDump. I also tested it against multithreaded access; it
works (as Stefan promised).
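For reference, a minimal sketch of what this looks like from a user module, following the set_odb_source()
pattern shown earlier in this thread (the module name and the key being read are only illustrative):

#include "manalyzer.h"
#include <cstdio>
#ifdef HAVE_MIDAS
#include "odbxx.h"
#endif

struct MyModule: public TARunObject
{
   MyModule(TARunInfo* runinfo)
      : TARunObject(runinfo)
   {
#ifdef HAVE_MIDAS
      // point odbxx at the begin-of-run ODB dump, then read it like the online ODB
      midas::odb::set_odb_source(midas::odb::STRING,
         std::string(runinfo->fBorOdbDump.data(), runinfo->fBorOdbDump.size()));
      int run_number = midas::odb("/Runinfo/Run number");
      printf("run number from the ODB dump: %d\n", run_number);
#endif
   }
};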
K.O. |
08 Dec 2025, Konstantin Olchanski, Suggestion, manalyzer root output file with custom filename including run number
|
I updated the ROOT helper constructor to give the user more control over the ROOT output file name.
You can now change it to anything you want in the module run constructor, see manalyzer_example_esoteric.cxx.
Is this good enough?
struct ExampleE1: public TARunObject
{
   ExampleE1(TARunInfo* runinfo)
      : TARunObject(runinfo)
   {
#ifdef HAVE_ROOT
      if (runinfo->fRoot) {
         // any name works here; to include the run number one could use runinfo->fRunNo
         runinfo->fRoot->fOutputFileName = "my_custom_file_name.root";
      }
#endif
   }
};
K.O. |
09 Dec 2025, Konstantin Olchanski, Bug Report, manalyzer fails to compile on some systems because of missing #include <cmath>
|
> /code/midas/manalyzer/manalyzer.cxx:799:27: error: ‘pow’ was not declared in this scope
> 799 | bins[i] = TimeRange*pow(1.1,i)/pow(1.1,Nbins);
math.h added, pushed. Nice catch.
The implicit include of math.h came through TFile.h (ROOT v6.34.02); perhaps you have a newer ROOT
and they jiggled the include files somehow:
TFile.h -> TDirectoryFile.h -> TDirectory.h -> TNamed.h -> TString.h -> TMathBase.h -> cmath -> math.h
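The robust pattern is for the code to include what it uses directly instead of relying on the ROOT header
chain; roughly (the helper function is only illustrative, the pow() expression is the one from the error
message above):

#include <cmath>   // provides pow(); do not rely on it arriving transitively via TFile.h

static double log_bin_edge(double TimeRange, int i, int Nbins)
{
   return TimeRange * pow(1.1, i) / pow(1.1, Nbins);
}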
K.O. |
09 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump
|
> Thanks for reporting this. It was caused by a
>
> MJsonNode* node = MJsonNode::Parse(str.c_str());
>
> not followed by a
>
> delete node;
>
> I added that now in odb::odb_from_json_string(). Can you try again?
>
> Stefan
Close, but no cigar: the node you delete is not the node you got from Parse(), see "node = subnode;".
If I delete the node returned by Parse(), I confirm that the memory leak is gone.
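To illustrate the ownership rule, a minimal sketch (this assumes mjson's FindObjectNode() accessor; the key
name is only an example):

#include "mjson.h"
#include <string>

void parse_without_leak(const std::string& str)
{
   MJsonNode* root = MJsonNode::Parse(str.c_str());
   const MJsonNode* subnode = root->FindObjectNode("Runinfo"); // pointer into the tree, not owned
   if (subnode) {
      // ... read values from subnode ...
   }
   delete root; // delete the node returned by Parse(), not the subnode
}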
BTW, it looks like we are parsing the whole JSON ODB dump (200k+) on every odbxx access. Can you parse it just once?
K.O. |
11 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump
|
> > BTW, it looks like we are parsing the whole JSON ODB dump (200k+) on every odbxx access. Can you parse it just once?
> You are right. I changed the code so that the dump is only parsed once. Please give it a try.
Confirmed fixed, thanks! There are 2 small changes I made in odbxx.h, please pull.
K.O. |
11 Dec 2025, Konstantin Olchanski, Bug Report, manalyzer fails to compile on some systems because of missing #include <cmath>
|
> TFile.h -> TDirectoryFile.h -> TDirectory.h -> TNamed.h -> TString.h -> TMathBase.h -> cmath -> math.h
Reading the ROOT release notes: 6.38 removed TMathBase.h from TString.h, with a warning "This change may cause errors during compilation of
ROOT-based code". Upright citizens, nice guys!
> Thanks. Are you happy for me to update the submodule commit in Midas to use this fix? I should have sufficient permission if you agree.
I am doing some last minute tests, will pull it into midas and rootana later today.
K.O. |
12 Dec 2025, Konstantin Olchanski, Bug Report, odbxx memory leak with JSON ODB dump
|
> > Confirmed fixed, thanks! There are 2 small changes I made in odbxx.h, please pull.
> There was one missing enable_jsroot in manalyzer, please pull yourself.
pulled, pushed to rootana, thanks for fixing it!
K.O. |
14 Feb 2020, Konrad Briggl, Forum, Writing Midas Events via FPGAs
|
Hello Stefan,
Is there a difference for the later data processing (after writing the ring buffer blocks)
if we write single events or multiple events in one rb_get_wp / memcpy / rb_increment_wp cycle?
Both Marius and I have seen some inconsistencies in the number of produced events reported on the status page when writing multiple events in one go,
so I was wondering whether this is due to us treating the buffer badly or due to the way midas handles the events after that.
Given that we produce the full event in our (FPGA) domain, an option would be to always copy one event at a time from the DMA buffer to the midas system buffer in a loop.
The question is whether there is a difference (for midas) between
[pseudo code, much simplified]
while (dma_read_index < last_dma_write_index) {
   if (rb_get_wp(pdata) != SUCCESS) {
      dma_read_index += event_size;
      continue;
   }
   copy_n(dma_buffer, pdata, event_size);
   rb_increment_wp(event_size);
   dma_read_index += event_size;
}

and

while (dma_read_index < last_dma_write_index) {
   if (rb_get_wp(pdata) != SUCCESS) {
      ...
   }
   total_size = max_n_events_that_fit_in_rb_block();
   copy_n(dma_buffer, pdata, total_size);
   rb_increment_wp(total_size);
   dma_read_index += total_size;
}
Cheers,
Konrad
> The rb_xxx function are (thoroughly tested!) robust against high data rate given that you use them as intended:
>
> 1) Once you create the ring buffer via rb_create(), specify the maximum event size (overall event size, not bank size!). Later there is no protection any more, so if you obtain pdata from rb_get_wp, you can of course write 4GB to pdata, overwriting everything in your memory, causing a total crash. It's your responsibility to not write more bytes into pdata than
> what you specified as max event size in rb_create()
>
> 2) Once you obtain a write pointer to the ring buffer via rb_get_wp, this function might fail when the receiving side reads data slower than the producing side, simply because the buffer is full. In that case the producing side has to wait until space is freed up in the buffer by the receiving side. If your call to rb_get_wp returns DB_TIMEOUT, it means that the
> function did not obtain enough free space for the next event. In that case you have to wait (like ss_sleep(10)) and try again, until you succeed. Only when rb_get_wp() returns DB_SUCCESS, you are allowed to write into pdata, up to the maximum event size specified in rb_create of course. I don't see this behaviour in your code. You would need something
> like
>
> do {
> status = rb_get_wp(rbh, (void **)&pdata, 10);
> if (status == DB_TIMEOUT)
> ss_sleep(10);
> } while (status == DB_TIMEOUT);
>
> Best,
> Stefan
>
>
> > Dear all,
> >
> > we creating Midas events directly inside a FPGA and send them off via DMA into the PC RAM. For reading out this RAM via Midas the FPGA sends as a pointer where it has written the last 4kB of data. We use this pointer for telling the ring buffer of midas where the new events are. The buffer looks something like:
> >
> > // event 1
> > dma_buf[0] = 0x00000001; // Trigger and Event ID
> > dma_buf[1] = 0x00000001; // Serial number
> > dma_buf[2] = TIME; // time
> > dma_buf[3] = 18*4-4*4; // event size
> > dma_buf[4] = 18*4-6*4; // all bank size
> > dma_buf[5] = 0x11; // flags
> > // bank 0
> > dma_buf[6] = 0x46454230; // bank name
> > dma_buf[7] = 0x6; // bank type TID_DWORD
> > dma_buf[8] = 0x3*4; // data size
> > dma_buf[9] = 0xAFFEAFFE; // data
> > dma_buf[10] = 0xAFFEAFFE; // data
> > dma_buf[11] = 0xAFFEAFFE; // data
> > // bank 1
> > dma_buf[12] = 0x1; // bank name
> > dma_buf[12] = 0x46454231; // bank name
> > dma_buf[13] = 0x6; // bank type TID_DWORD
> > dma_buf[14] = 0x3*4; // data size
> > dma_buf[15] = 0xAFFEAFFE; // data
> > dma_buf[16] = 0xAFFEAFFE; // data
> > dma_buf[17] = 0xAFFEAFFE; // data
> >
> > // event 2
> > .....
> >
> > dma_buf[fpga_pointer] = 0xXXXXXXXX;
> >
> >
> > And we do something like:
> >
> > while{true}
> > // obtain buffer space
> > status = rb_get_wp(rbh, (void **)&pdata, 10);
> > fpga_pointer = fpga.read_last_data_add();
> >
> > wlen = last_fpga_pointer - fpga_pointer; \\ in 32 bit words
> > copy_n(&dma_buf[last_fpga_pointer], wlen, pdata);
> > rb_status = rb_increment_wp(rbh, wlen * 4); \\ in byte
> >
> > last_fpga_pointer = fpga_pointer;
> >
> > Leaving the case out where the dma_buf wrap around this works fine for a small data rate. But if we increase the rate the fpga_pointer also increases really fast and wlen gets quite big. Actually it gets bigger then max_event_size which is checked in rb_increment_wp leading to an error.
> >
> > The problem now is that the event size is actually not to big but since we have multi events in the buffer which are read by midas in one step. So we think in this case the function rb_increment_wp is comparing actually the wrong thing. Also increasing the max_event_size does not help.
> >
> > Remark: dma_buf is volatile so memcpy is not possible here.
> >
> > Cheers,
> > Marius |
02 Mar 2007, Kevin Lynch, Forum, event builder scalability
|
> Hi there:
> I have a question if there's anybody out there running MIDAS with event builder
> that assembles events from more than just a few front ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
>
> Cheers
> Piotr
Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate. If I'm remembering
correctly, the event builder handles about 30-40MB/s. You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details. Volodya solved a significant number of throughput related
bottlenecks in the year leading up to our 2006 run. |
15 Dec 2016, Kevin Giovanetti, Bug Report, midas.h error
|
Creating a frontend on Mac Sierra (OSX 10):
I include the midas.h file, and when compiling with Xcode I get an error based on
this entry in midas.h:
#if !defined(OS_IRIX) && !defined(OS_VMS) && !defined(OS_MSDOS) && \
    !defined(OS_UNIX) && !defined(OS_VXWORKS) && !defined(OS_WINNT)
#error MIDAS cannot be used on this operating system
#endif
Perhaps I should not use Xcode?
Perhaps I won't need midas.h?
The MIDAS system is running on my Mac, but I need to add a very simple frontend
for testing and I encountered this error.
30 Oct 2018, Joseph McKenna, Bug Report, Side panel auto-expands when history page updates
|
One can collapse the side panel when looking at history pages with the button in
the top left, great! We want to see many pages, so screen real estate is important.
The issue we face is that when the page refreshes, the side panel expands. Can
we make the panel state more 'sticky'?
Many thanks
Joseph (ALPHA)
Version: 2.1
Revision: Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
on branch feature/midas-2017-10 |
31 Oct 2018, Joseph McKenna, Bug Report, Side panel auto-expands when history page updates
|
> >
> >
> > One can collapse the side panel when looking at history pages with the button in
> > the top left, great! We want to see many pages so screen real estate is important
> >
> > The issue we face is that when the page refreshes, the side panel expands. Can
> > we make the panel state more 'sticky'?
> >
> > Many thanks
> > Joseph (ALPHA)
> >
> > Version: 2.1
> > Revision: Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
> > on branch feature/midas-2017-10
>
> Hi Joseph,
>
> In principle a page refresh should now not be necessary, since pages should reload automatically
> the contents which changes. If a custom page needs a reload, it is not well designed. If necessary, I
> can explain the details.
>
> Anyhow I implemented your "stickyness" of the side panel in the last commit to the develop branch.
>
> Best regards,
> Stefan
Hi Stefan,
I apologise for misusing the word "refresh". The re-appearing sidebar was also seen with the automatic
reload. I have implemented your fix here and it now works great!
Thank you very much!
Joseph |
14 Oct 2019, Joseph McKenna, Forum, tmfe.cxx - Future frontend design
|
Hi,
I have been looking at the 2019 workshop slides; I am interested in the C++ future of MIDAS.
I am quite interested in using the object-oriented approach of tmfe.cxx.
ALPHA will start data taking in 2021 |
|