ID | Date | Author | Topic | Subject

3154 | 27 Nov 2025 | Konstantin Olchanski | Info | switch midas to c++17
> set(CMAKE_CXX_STANDARD 17)
> set(CMAKE_CXX_STANDARD_REQUIRED ON)
> set(CMAKE_CXX_EXTENSIONS OFF) # optional: disables GNU extensions
>
Looks like it works, I see -std=c++17 everywhere. Added the same to manalyzer and mscb (mscb was still c++11).
Build on U-20 works (g++ accepts -std=c++17); build on CentOS-7 bombs, cmake 3.17.5 does not know CXX17.
K.O. |

3155 | 27 Nov 2025 | Konstantin Olchanski | Forum | Control external process from inside MIDAS

> Rather than investing time to re-invent the wheel here, better try to modify your EPICS driver process to
> become a midas process.
I am with Stefan on this. Quite a bit of work went into the tmfe c++ framework to make it easy/easier to do
this - take an existing standalone c/c++ program and midas-ize it: in main(), "just add" calls to connect to
midas and to start the midas threads - rpc handler, watchdog, etc.
Alternatively, one can write a midas "stdout+stderr bridge", and start your standalone program
from the programs page like this:
myprogram |& cm_msg_bridge --name "myprogram" (redirect both stdout and stderr to cm_msg_bridge stdin)
cm_msg_bridge would read stdin and feed each line to cm_msg(). It will connect to midas using the name "myprogram"
to make it show "green" on the status page, and it will be stoppable from the programs page.
Care will need to be taken for myprogram to die cleanly when stdout and stderr are closed after cm_msg_bridge
exits.
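A rough sketch of what such a bridge could look like (hypothetical program, not part of MIDAS; the exact
cm_msg() argument list should be checked against midas.h):

#include <cstdio>
#include <cstring>
#include <string>
#include "midas.h"

int main(int argc, char* argv[])
{
   // client name to show on the status/programs pages ("--name myprogram")
   std::string name = "cm_msg_bridge";
   for (int i = 1; i + 1 < argc; i++)
      if (strcmp(argv[i], "--name") == 0)
         name = argv[i + 1];

   // connect to the experiment so the bridge (and, by proxy, myprogram) shows "green"
   int status = cm_connect_experiment("", "", name.c_str(), NULL);
   if (status != CM_SUCCESS)
      return 1;

   char line[1024];
   while (fgets(line, sizeof(line), stdin)) {
      // stdin is the piped stdout+stderr of myprogram; forward each line to the midas message log
      line[strcspn(line, "\r\n")] = 0;
      cm_msg(MINFO, __FILE__, __LINE__, name.c_str(), "%s", line);
      cm_yield(0);   // let midas handle RPCs: watchdog, "stop program" requests, etc.
   }

   // EOF: myprogram exited or closed its output
   cm_disconnect_experiment();
   return 0;
}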
K.O. |

3159 | 28 Nov 2025 | Konstantin Olchanski | Suggestion | mvodb WS and family type matching

> > 2) "advanced" c++ code:
> >
> > void foo(const std::string& xxx) { ... };
> > int main() { foo("bar"); }
> >
> > copy-created 2nd string is avoided, but a string object to hold "bar" still must be
> > made, 1 malloc(), 1 memcpy().
>
> Are you sure about this? I always thought that foo only receives a pointer to xxx which it puts on the stack, so
> no additional malloc/free is involved.
Yes, "bar" is not an std::string, cannot be used to call foo(), and the c++ compiler has to automagically rewrite
the function call
from: int main() { foo("bar"); }
to: int main() { foo(std::string("bar"); }
the temporary std::string object may be on the stack, but storage for text "bar" is on the heap (unless std::string
is optimized to store short strings internally).
One can put a printf() inside foo() to print the address of xxx (should be on the stack) and of xxx.c_str() (should be
on the heap). One could also try to print the address of "bar" (should be in the read-only-constant-strings memory
area). (I am not sure if the compiler-linker combines all instances of "bar" into one; this is also easy to check).
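A minimal test along these lines (plain C++, nothing MIDAS-specific; note that "bar" is short enough that most
implementations keep it inside the std::string object via the short-string optimization, so a longer literal makes
the heap allocation visible):

#include <cstdio>
#include <string>

void foo(const std::string& xxx)
{
   printf("&xxx        = %p (the temporary std::string object)\n", (void*)&xxx);
   printf("xxx.c_str() = %p (heap, unless the short-string optimization applies)\n", (void*)xxx.c_str());
}

int main()
{
   printf("\"bar\" literal = %p (read-only constant data)\n", (void*)"bar");
   foo("bar");   // the compiler materializes a temporary std::string("bar") for this call
   return 0;
}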
K.O. |

3160 | 28 Nov 2025 | Konstantin Olchanski | Suggestion | mvodb WS and family type matching

Just in time, enter std::string_view.
https://stackoverflow.com/questions/40127965/how-exactly-is-stdstring-view-faster-than-const-stdstring
I was looking at https://root.cern/doc/v638/classROOT_1_1Experimental_1_1RFile.html and they use it everywhere instead of
std::string and const char*.
(so now we have 4 string types to deal with, counting ROOT's TString).
P.S. For extra safety, this code compiles, then explodes:
std::string_view get_temporary_string() {
   std::string s = "temporary";
   return s; // DANGER! 's' is destroyed, view dangles.
}
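For contrast, the pattern where std::string_view does help is as a function parameter, where the caller's string
outlives the call (a minimal sketch):

#include <cstddef>
#include <string>
#include <string_view>

// accepts std::string, const char* and string literals without any copy or malloc
std::size_t count_spaces(std::string_view sv)
{
   std::size_t n = 0;
   for (char c : sv)
      if (c == ' ')
         n++;
   return n;
}

// count_spaces("a b c") and count_spaces(std::string("a b c")) both work, no temporary std::string is needed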
K.O. |

3162 | 01 Dec 2025 | Konstantin Olchanski | Bug Fix | mvodb updated

I updated mvodb and test_mvodb. MIDAS ODB and JSON ODB now implement all API
functions. ReadKey, ReadDir and ReadKeyLastWritten were previously missing from
some implementations.
I do not remember any other bugs or problems in mvodb, if you want me to add, fix
or change something, please speak up!
K.O. |

3163 | 01 Dec 2025 | Konstantin Olchanski | Info | MIDAS RPC add support for std::string and std::vector<char>

> This is moving slowly. I now have RPC caller side support for std::string and
> std::vector<char>. RPC server side is next. K.O.
The RPC_CXX code is now merged into MIDAS branch feature/rpc_call_cxx.
This code fully supports passing std::string and std::vector<char> through the MIDAS RPC in both directions.
For data passed from client to mserver, memory for string and vector data is allocated automatically as needed.
For data returned from mserver to client, memory to hold returned string and vector data is allocated automatically as
needed.
This means that RPC calls can return data of arbitrary size, the rpc caller does not need to know maximum data size.
Removing this limitation was the main motivation for this development.
I completed this code in June 2024, but could not merge it because I broke my git repository (oops). Now I am doing
the merge manually. Changes are isolated to rpc_call_encode(), rpc_call_decode() and rpc_execute(). My intent right
now is to use the new RPC code only for RPCs that pass std::string and std::vector<char>, existing RPCs will use the
old code without any changes. This seems to be the safest way to move forward.
Included is test_rpc() which tests and probes most normal use cases and some corner cases. When writing the test
code, I found a few bugs in the old MIDAS RPC code. If I remember right, I committed fixes for those bugs to main
MIDAS right then and there.
K.O. |

3164 | 02 Dec 2025 | Konstantin Olchanski | Info | cm_expand_env()

Just to remember, MIDAS has cm_expand_env() to expand environment variables, in
file paths, etc. It is used in several places in mhttpd, msequencer and mjsonrpc.
std::string my_secret_file = cm_expand_env("$HOME/.ssh/authorised_keys");
One could add it everywhere we open files, etc, except for the security
consideration. We should not permit any/every web site to read any/every local
file (directly by injecting malicious js code or by cross-site mjsonrpc call).
Access should be limited to files in designated MIDAS experiment subdirectories.
Places like $HOME/.ssh, $HOME/.cache/google-chrome, etc must be protected.
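A sketch of the kind of check implied here (hypothetical helper, not an existing MIDAS function): resolve the
expanded path and accept it only if it stays inside a designated experiment directory.

#include <limits.h>
#include <stdlib.h>
#include <string.h>
#include <string>

// return true if "path" resolves to a location inside "allowed_dir"
// (allowed_dir should end with '/' to avoid the "/foo/bar" vs "/foo/barbaz" prefix pitfall)
static bool path_is_inside(const std::string& path, const std::string& allowed_dir)
{
   char resolved[PATH_MAX];
   if (realpath(path.c_str(), resolved) == NULL)   // also rejects files that do not exist
      return false;
   return strncmp(resolved, allowed_dir.c_str(), allowed_dir.length()) == 0;
}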
K.O. |

3165 | 03 Dec 2025 | Konstantin Olchanski | Bug Fix | no more breakage in history display when panning

In the DL experiment (unknown version of midas, likely mid-summer 2025), we see artefacts in the
history display where pieces of the data seem to be missing: there are gaps in the graphs. Reloading the
page restores the correct display, confirming that in fact there are no gaps in the data. This made history
plots very painful to use.
This problem does not exist anymore in the latest midas, most likely it was fixed around September 4,
2025. Most likely it was broken since at least February 2025 (previous changes to this file).
If you see this problem, updating mhistory.js to latest version is probably enough to fix it.
K.O. |

3166 | 03 Dec 2025 | Konstantin Olchanski | Suggestion | Improve process for adding new variables that can be shown in history plots

> 3b) mlogger used to rescan ODB each time a new run is started, this code was removed
One more kink turned up.
One of the computers ran out of disk space; mlogger dutifully recorded the "disk full" errors to midas.log and
disabled writing to history (all history variables).
This was only noticed about one week later (it is not a busy computer).
In the past, when mlogger reopened the history at each begin of run, the "disk full" errors would have shown
up in midas.log and somebody would have noticed. Or the problem would have gone away if disk space was cleared
up.
Now, mlogger just silently continues not writing to history. There is no ongoing error message, there is no
ongoing alarm; the only sign of trouble is empty history plots and history files not growing.
Perhaps we should add an mlogger action to ask the history "are you ok?" and report in midas.log or raise an alarm if
the history is not happy.
Or have mlogger automatically reenable all disabled history variables at the begin of run. If these variables
are still unhappy (error writing to disk or to mysql), there will be an error message in midas.log (and an
automatic self-disable).
All these solutions should be okay as long as they do not touch disk storage and so do not cause any long
delay in the run start.
K.O. |

3168 | 05 Dec 2025 | Konstantin Olchanski | Info | MIDAS RPC add support for std::string and std::vector<char>

> > This is moving slowly. I now have RPC caller side support for std::string and
> > std::vector<char>. RPC server side is next. K.O.
> The RPC_CXX code is now merged into MIDAS branch feature/rpc_call_cxx.
> This code fully supports passing std::string and std::vector<char> through the MIDAS RPC in both directions.
The RPC_CXX code is now merged into MIDAS develop, commit 34cd969fbbfecc82c290e6c2dfc7c6d53b6e0121.
There is a new RPC parameter encoder and decoder. To avoid unexpected breakage, it is only used for newly added RPC_CXX
calls, but I expect to eventually switch all RPC calls to use the new encoder and decoder.
As examples of new code, see RPC_JRPC_CXX and RPC_BRPC_CXX. They return RPC data in an std::string and std::vector<char>
respectively, the amount of returned data is unlimited, and the mjsonrpc parameter "max_reply_length" is no longer needed/used.
Also included is RPC_BM_RECEIVE_EVENT_CXX, which receives event data as an std::vector<char>; the maximum event size is no
longer limited and ODB /Experiment/MAX_EVENT_SIZE is no longer needed/used. To avoid unexpected breakage, this new code is not
enabled yet.
K.O. |

3169 | 05 Dec 2025 | Konstantin Olchanski | Bug Fix | update of JRPC and BRPC

With the merge of the RPC_CXX code, MIDAS RPC can now return data of arbitrarily large size and I am
proceeding to update the corresponding mjsonrpc interface.
If you use JRPC and BRPC in the tmfe framework, you need to do nothing, the updated RPC handlers
are already tested and merged, the only effect is that large data returned by HandleRpc() and
HandleBinaryRpc() will no longer be truncated.
If you use your own handlers for JRPC and BRPC, please add the RPC handlers as shown at the end
of this message. There is no need to delete/remove the old RPC handlers.
To avoid unexpected breakage, the new code is not yet enabled by default, but you can start
using it immediately by replacing the mjsonrpc call:
mjsonrpc_call("jrpc", ...
with
mjsonrpc_call("jrpc_cxx", ...
ditto for "brpc", see resources/example.html for complete code.
After migration is completed, if you have some old frontends where you cannot add the new RPC
handlers, you can still call them using the "jrpc_old" and "brpc_old" mjsonrpc calls.
I will cut-over the default "jrpc" and "brpc" calls to the new RPC_CXX in about a month or so.
If you need more time, please let me know.
K.O.
Register the new RPCs:
cm_register_function(RPC_JRPC_CXX, rpc_cxx_callback);
cm_register_function(RPC_BRPC_CXX, binary_rpc_cxx_callback);
and add the handler functions: (see tmfe.cxx for full example)
static INT rpc_cxx_callback(INT index, void *prpc_param[])
{
   const char* cmd  = CSTRING(0);
   const char* args = CSTRING(1);
   std::string* pstr = CPSTDSTRING(2);
   *pstr = "my return data";
   return RPC_SUCCESS;
}
static INT binary_rpc_cxx_callback(INT index, void *prpc_param[])
{
   const char* cmd  = CSTRING(0);
   const char* args = CSTRING(1);
   std::vector<char>* pbuf = CPSTDVECTOR(2);
   pbuf->clear();
   const char reply[] = "my return data";
   pbuf->insert(pbuf->end(), reply, reply + sizeof(reply) - 1);
   return RPC_SUCCESS;
}
K.O. |

3170 | 05 Dec 2025 | Konstantin Olchanski | Info | address and thread sanitizers

I added cmake support for the thread sanitizer (address sanitizer was already
there). Use:
make cmake -j YES_THREAD_SANITIZER=1 # (or YES_ADDRESS_SANITIZER=1)
However, thread sanitizer is broken on U-24, programs refuse to start ("FATAL:
ThreadSanitizer: unexpected memory mapping") and report what looks like bogus
complaints about mutexes ("unlock of an unlocked mutex (or by a wrong thread)").
On macos, thread sanitizer does not report any errors or warnings or ...
P.S.
The Undefined Behaviour Sanitizer (UBSAN) complained about a few places where
functions could have been called with NULL pointer arguments, I added some
assert()s to make it happy.
K.O. |

3171 | 07 Dec 2025 | Konstantin Olchanski | Suggestion | Get manalyzer to configure midas::odb when running offline

> #include "manalyzer.h"
> #include "midasio.h"
> +#include "odbxx.h"
This commit broke the standalone ("no MIDAS") build of manalyzer. Either odbxx has to be an independent package
(like mvodb) or it has to be conditioned on HAVE_MIDAS.
(this was flagged by failed bitbucket build of rootana)
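For the second option, the shape of the fix would be something like this (sketch; assuming HAVE_MIDAS is the flag
the manalyzer build already uses to mark a MIDAS-enabled build):

#include "manalyzer.h"
#include "midasio.h"
#ifdef HAVE_MIDAS
#include "odbxx.h"   // odbxx comes from MIDAS, so only pull it in when MIDAS is available
#endif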
K.O. |

1833 | 14 Feb 2020 | Konrad Briggl | Forum | Writing Midas Events via FPGAs

Hello Stefan,
is there a difference for the later data processing (after writing the ring buffer blocks)
if we write single events or multiple events in one rb_get_wp - memcpy - rb_increment_wp cycle?
Both Marius and I have seen some inconsistencies in the number of produced events reported on the status page when writing multiple events in one go,
so I was wondering if this is due to us treating the buffer badly or due to the way midas handles the events after that.
Given that we produce the full event in our (FPGA) domain, an option would be to always copy one event from the dma to the midas-system buffer in a loop.
The question is if there is a difference (for midas) between
[pseudo code, much simplified]
while(dma_read_index < last_dma_write_index){
   if(rb_get_wp(pdata)!=SUCCESS){
      dma_read_index+=event_size;
      continue;
   }
   copy_n(dma_buffer, pdata, event_size);
   rb_increment_wp(event_size);
   dma_read_index+=event_size;
}
and
while(dma_read_index < last_dma_write_index){
   if(rb_get_wp(pdata)!=SUCCESS){
      ...
   };
   total_size=max_n_events_that_fit_in_rb_block();
   copy_n(dma_buffer, pdata, total_size);
   rb_increment_wp(total_size);
   dma_read_index+=total_size;
}
Cheers,
Konrad
> The rb_xxx functions are (thoroughly tested!) robust against high data rates, given that you use them as intended:
>
> 1) Once you create the ring buffer via rb_create(), specify the maximum event size (overall event size, not bank size!). Later there is no protection any more, so if you obtain pdata from rb_get_wp, you can of course write 4GB to pdata, overwriting everything in your memory, causing a total crash. It's your responsibility to not write more bytes into pdata than
> what you specified as max event size in rb_create()
>
> 2) Once you obtain a write pointer to the ring buffer via rb_get_wp, this function might fail when the receiving side reads data slower than the producing side, simply because the buffer is full. In that case the producing side has to wait until space is freed up in the buffer by the receiving side. If your call to rb_get_wp returns DB_TIMEOUT, it means that the
> function did not obtain enough free space for the next event. In that case you have to wait (like ss_sleep(10)) and try again, until you succeed. Only when rb_get_wp() returns DB_SUCCESS, you are allowed to write into pdata, up to the maximum event size specified in rb_create of course. I don't see this behaviour in your code. You would need something
> like
>
> do {
>    status = rb_get_wp(rbh, (void **)&pdata, 10);
>    if (status == DB_TIMEOUT)
>       ss_sleep(10);
> } while (status == DB_TIMEOUT);
>
> Best,
> Stefan
>
>
> > Dear all,
> >
> > we are creating Midas events directly inside an FPGA and send them off via DMA into the PC RAM. For reading out this RAM via Midas, the FPGA sends a pointer to where it has written the last 4kB of data. We use this pointer for telling the ring buffer of midas where the new events are. The buffer looks something like:
> >
> > // event 1
> > dma_buf[0] = 0x00000001; // Trigger and Event ID
> > dma_buf[1] = 0x00000001; // Serial number
> > dma_buf[2] = TIME; // time
> > dma_buf[3] = 18*4-4*4; // event size
> > dma_buf[4] = 18*4-6*4; // all bank size
> > dma_buf[5] = 0x11; // flags
> > // bank 0
> > dma_buf[6] = 0x46454230; // bank name
> > dma_buf[7] = 0x6; // bank type TID_DWORD
> > dma_buf[8] = 0x3*4; // data size
> > dma_buf[9] = 0xAFFEAFFE; // data
> > dma_buf[10] = 0xAFFEAFFE; // data
> > dma_buf[11] = 0xAFFEAFFE; // data
> > // bank 1
> > dma_buf[12] = 0x46454231; // bank name
> > dma_buf[13] = 0x6; // bank type TID_DWORD
> > dma_buf[14] = 0x3*4; // data size
> > dma_buf[15] = 0xAFFEAFFE; // data
> > dma_buf[16] = 0xAFFEAFFE; // data
> > dma_buf[17] = 0xAFFEAFFE; // data
> >
> > // event 2
> > .....
> >
> > dma_buf[fpga_pointer] = 0xXXXXXXXX;
> >
> >
> > And we do something like:
> >
> > while (true) {
> >    // obtain buffer space
> >    status = rb_get_wp(rbh, (void **)&pdata, 10);
> >    fpga_pointer = fpga.read_last_data_add();
> >
> >    wlen = last_fpga_pointer - fpga_pointer; // in 32 bit words
> >    copy_n(&dma_buf[last_fpga_pointer], wlen, pdata);
> >    rb_status = rb_increment_wp(rbh, wlen * 4); // in bytes
> >
> >    last_fpga_pointer = fpga_pointer;
> > }
> >
> > Leaving out the case where the dma_buf wraps around, this works fine for a small data rate. But if we increase the rate, the fpga_pointer also increases really fast and wlen gets quite big. Actually it gets bigger than max_event_size, which is checked in rb_increment_wp, leading to an error.
> >
> > The problem now is that the event size is actually not too big, but we have multiple events in the buffer which are read by midas in one step. So we think that in this case the function rb_increment_wp is actually comparing the wrong thing. Also increasing the max_event_size does not help.
> >
> > Remark: dma_buf is volatile so memcpy is not possible here.
> >
> > Cheers,
> > Marius |

357 | 02 Mar 2007 | Kevin Lynch | Forum | event builder scalability

> Hi there:
> I have a question if there's anybody out there running MIDAS with event builder
> that assembles events from more that just a few front ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
>
> Cheers
> Piotr
Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate. If I'm remembering
correctly, the event builder handles about 30-40MB/s. You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details. Volodya solved a significant number of throughput related
bottlenecks in the year leading up to our 2006 run. |

1225 | 15 Dec 2016 | Kevin Giovanetti | Bug Report | midas.h error

Creating a frontend on Mac Sierra OSX 10:
I include the midas.h file, and when compiling with Xcode I get an error based on
this entry in the midas.h include:
#if !defined(OS_IRIX) && !defined(OS_VMS) && !defined(OS_MSDOS) && \
    !defined(OS_UNIX) && !defined(OS_VXWORKS) && !defined(OS_WINNT)
#error MIDAS cannot be used on this operating system
#endif
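This check only passes if one of the listed OS_* macros is defined at compile time; the MIDAS Makefile/CMake build
normally supplies them, while a hand-rolled Xcode project does not. As an assumed example (flags not taken from
the original post), a standalone test compile on macOS would need something like:

   clang++ -DOS_UNIX -DOS_DARWIN -I$MIDASSYS/include -c frontend.cxx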
Perhaps I should not use Xcode?
Perhaps I won't need Midas.h?
The MIDAS system is running on my Mac but I need to add a very simple front end
for testing and I encountered this error. |

1404 | 30 Oct 2018 | Joseph McKenna | Bug Report | Side panel auto-expands when history page updates
One can collapse the side panel when looking at history pages with the button in
the top left, great! We want to see many pages so screen real estate is important
The issue we face is that when the page refreshes, the side panel expands. Can
we make the panel state more 'sticky'?
Many thanks
Joseph (ALPHA)
Version: 2.1
Revision: Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
on branch feature/midas-2017-10 |

1406 | 31 Oct 2018 | Joseph McKenna | Bug Report | Side panel auto-expands when history page updates
> >
> > One can collapse the side panel when looking at history pages with the button in
> > the top left, great! We want to see many pages so screen real estate is important
> >
> > The issue we face is that when the page refreshes, the side panel expands. Can
> > we make the panel state more 'sticky'?
> >
> > Many thanks
> > Joseph (ALPHA)
> >
> > Version: 2.1
> > Revision: Mon Mar 19 18:15:51 2018 -0700 - midas-2017-07-c-197-g61fbcd43-dirty
> > on branch feature/midas-2017-10
>
> Hi Joseph,
>
> In principle a page refresh should now not be necessary, since pages should reload automatically
> the contents which changes. If a custom page needs a reload, it is not well designed. If necessary, I
> can explain the details.
>
> Anyhow I implemented your "stickyness" of the side panel in the last commit to the develop branch.
>
> Best regards,
> Stefan
Hi Stefan,
I apologise for misusing the word refresh. The re-appearing sidebar was also seen with the automatic
reload. I have implemented your fix here and it now works great!
Thank you very much!
Joseph |

Draft | 14 Oct 2019 | Joseph McKenna | Forum | tmfe.cxx - Future frontend design

Hi,
I have been looking at the 2019 workshop slides, I am interested in the C++ future of MIDAS.
I am quite interested in using the object oriented
ALPHA will start data taking in 2021 |

1727 | 18 Oct 2019 | Joseph McKenna | Info | sysmon: New system monitor and performance logging frontend added to MIDAS
I have written a system monitor tool for MIDAS, that has been merged in the develop branch today: sysmon
https://bitbucket.org/tmidas/midas/pull-requests/8/system-monitoring-a-new-frontend-to-log/diff
To use it, simply run the new program
sysmon
on any host that you want to monitor, no configuring required.
The program is a frontend for MIDAS, there is no need for configuration, as upon initialisation it builds a history display for you. Simply run one instance per machine you want to monitor. By default, it only logs once per 10 seconds.
The equipment name is derived from the hostname, so multiple instances can be run across multiple machines without conflict. A new history display will be created for each host.
sysmon uses the /proc pseudo-filesystem, so unfortunately only linux is supported. It does however work with multiple architectures, so x86 and ARM processors are supported.
If the build machine has NVIDIA drivers installed, there is an additional version of sysmon that gets built: sysmon-nvidia. This will log the GPU temperature and usage, as well as CPU, memory and swap. A host should only run either sysmon or sysmon-nvidia
elog:1727/1 shows the History Display generated by sysmon-nvidia. sysmon would only generate the first two displays (sysmon/localhost and sysmon/localhost-CPU) |
Attachment 1: sysmon-gpu.png