ID | Date | Author | Topic | Subject
  1134 | 11 Nov 2015 | Konstantin Olchanski | Info | merged: midas JSON-RPC interface
The JSON RPC branch has been merged into main MIDAS. Other than adding new functions, there are no changes to existing MIDAS functionality.

This is the current JSON RPC schema: (from the MIDAS Help page)

------------------------------------------------------------------------
Autogenerated schema for all MIDAS JSON-RPC methods
------------------------------------------------------------------------
cm_exist?      | calls MIDAS cm_exist() to check if given MIDAS program is running
               | -------------------------------------------------------
               | params   | name           | string         | name of the program, corresponding to ODB /Programs/name
               |          | unique?        | bool           | bUnique argument to cm_exist()
               | -------------------------------------------------------
               | result   | status         | integer        | return status of cm_exist()
------------------------------------------------------------------------
cm_shutdown?   | calls MIDAS cm_shutdown() to stop given MIDAS program
               | -------------------------------------------------------
               | params   | name           | string         | name of the program, corresponding to ODB /Programs/name
               |          | unique?        | bool           | bUnique argument to cm_shutdown()
               | -------------------------------------------------------
               | result   | status         | integer        | return status of cm_shutdown()
------------------------------------------------------------------------
db_copy?       | get copies of given ODB subtrees in the "save" json encoding
               | -------------------------------------------------------
               | params   | paths[]        | array of ODB subtree paths, see note on array indices
               |          |                | array of       | string 
               | -------------------------------------------------------
               | result   | data[]         | copy of ODB data for each path
               |          |                | array of       | object 
               |          | status[]       | return status of db_copy_json() for each path
               |          |                | array of       | integer
               |          | last_written[] | last_written value of the ODB subtree for each path
               |          |                | array of       | number 
------------------------------------------------------------------------
db_create?     | create new ODB entries at the given paths
               | -------------------------------------------------------
               | params[] | array of ODB paths to be created
               |          | array of       | arguments to db_create() and db_resize()
               |          |                | path           | string  | ODB path
               |          |                | type           | integer | MIDAS TID_xxx type
               |          |                | array_length?  | integer | optional array length, default is 1
               |          |                | string_length? | integer | for TID_STRING, optional string length, default is NAME_LENGTH
               | -------------------------------------------------------
               | result   | status[]       | return status of db_create() for each path
               |          |                | array of       | integer
------------------------------------------------------------------------
db_get_values? | get values of ODB data from given subtrees
               | -------------------------------------------------------
               | params   | paths[]        | array of ODB subtree paths, see note on array indices
               |          |                | array of       | string 
               | -------------------------------------------------------
               | result   | data[]         | values of ODB data for each path, all key names are in lower case, all symlinks are followed
               |          |                | array of       | any    
               |          | status[]       | return status of db_copy_json() for each path
               |          |                | array of       | integer
               |          | last_written[] | last_written value of the ODB subtree for each path
               |          |                | array of       | number 
------------------------------------------------------------------------
db_paste?      | write data into ODB
               | -------------------------------------------------------
               | params   | paths[]        | array of ODB subtree paths, see note on array indices
               |          |                | array of       | string 
               |          | values[]       | data to be written using db_paste_json()
               |          |                | array of       | any    
               | -------------------------------------------------------
               | result   | status[]       | return status of db_paste_json() for each path
               |          |                | array of       | integer
------------------------------------------------------------------------
get_debug?     | get current value of mjsonrpc_debug
               | -------------------------------------------------------
               | params   | any            | there are no input parameters
               | -------------------------------------------------------
               | result   | integer        | current value of mjsonrpc_debug
------------------------------------------------------------------------
get_schema?    | Get the MIDAS JSON-RPC schema JSON object
               | -------------------------------------------------------
               | params   | any            | there are no input parameters
               | -------------------------------------------------------
               | result   | object         | returns the MIDAS JSON-RPC schema JSON object
------------------------------------------------------------------------
null?          | RPC method always returns null
               | -------------------------------------------------------
               | params   | any            | method parameters are ignored
               | -------------------------------------------------------
               | result   | null           | always returns null
------------------------------------------------------------------------
set_debug?     | set new value of mjsonrpc_debug
               | -------------------------------------------------------
               | params   | integer        | new value of mjsonrpc_debug
               | -------------------------------------------------------
               | result   | integer        | new value of mjsonrpc_debug
------------------------------------------------------------------------
start_program? | start MIDAS program defined in ODB /Programs/name
               | -------------------------------------------------------
               | params   | name           | string         | name of the program, corresponding to ODB /Programs/name
               | -------------------------------------------------------
               | result   | status         | integer        | return status of ss_system()
------------------------------------------------------------------------
user_example1? | example of user defined RPC method that returns up to 3 results
               | -------------------------------------------------------
               | params   | arg            | string         | example string argument
               |          | optional_arg?  | integer        | optional example integer argument
               | -------------------------------------------------------
               | result   | string         | string         | returns the value of "arg" parameter
               |          | integer        | integer        | returns the value of "optional_arg" parameter
------------------------------------------------------------------------
user_example2? | example of user defined RPC method that returns more than 3 results
               | -------------------------------------------------------
               | params   | arg            | string         | example string argument
               |          | optional_arg?  | integer        | optional example integer argument
               | -------------------------------------------------------
               | result   | string1        | string         | returns the value of "arg" parameter
               |          | string2        | string         | returns "hello"
               |          | string3        | string         | returns "world!"
               |          | value1         | integer        | returns the value of "optional_arg" parameter
               |          | value2         | number         | returns 3.14
------------------------------------------------------------------------
user_example3? | example of user defined RPC method that returns an error
               | -------------------------------------------------------
               | params   | arg            | integer        | integer value, if zero, throws a JSON-RPC error
               | -------------------------------------------------------
               | result   | status         | integer        | returns the value of "arg" parameter
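
For illustration, a db_get_values request and reply following this schema look roughly like this (a sketch only: the ODB path, the id and the returned values are made up, and the HTTP transport to mhttpd is not shown):

request:  {"jsonrpc":"2.0", "method":"db_get_values", "params":{"paths":["/runinfo/run number"]}, "id":1}
reply:    {"jsonrpc":"2.0", "result":{"data":[1234], "status":[1], "last_written":[1447285500]}, "id":1}

(status 1 is the MIDAS success code; data[], status[] and last_written[] have one entry per requested path.)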

K.O.
  677 | 26 Nov 2009 | Konstantin Olchanski | Bug Fix | mdump max number of banks and dump of 32-bit banks
By request from Renee, I increased the MIDAS BANKLIST_MAX from 64 to 1024 and,
after fixing a few buglets where YB_BANKLIST_MAX was used instead of the (now bigger)
BANKLIST_MAX, I can do a full dump of ND280 FGD events (96 banks).

I also noticed that "mdump -b BANK" did not work; it turned out that it could not
handle 32-bit banks at all. This is now fixed, too.

svn rev 4624
K.O.
  2377 | 29 Mar 2022 | Konstantin Olchanski | Bug Fix | mdump can read lz4 and bz2 files now
I converted mdump file i/o from the older mdsupport library to the newer midasio library 
and it can now read .mid, .mid.gz, .mid.lz4 and .mid.bz2 files. Output should be 
identical to what it printed before; if you see any differences, please report 
them here or on bitbucket. K.O.
  1858 | 17 Mar 2020 | Konstantin Olchanski | Info | mbedtls, mhttpd mongoose 6.16 update
> > > the update of mhttpd to mongoose version 6.16 was committed to the develop branch of midas.

current code looks for the mbedtls library in ../mbedtls (next to midas)

if cmake misdetects it, turn it off by setting NO_MBEDTLS (same as NO_ROOT & co)
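
for example (assuming the usual cmake -D mechanism for these NO_xxx switches):

cd .../midas/build
cmake .. -DNO_MBEDTLS=1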

if you do want to build mhttpd with mbedtls, do this:

cd .../midas
cd ../
git clone https://github.com/ARMmbed/mbedtls.git
cd mbedtls
git submodule update --init ### this will populate the "crypto" directory
make ### if "python2" is missing, building of test suite programs will fail, but the libraries needed for midas will be built

cd ../midas
make cmake...

K.O.
  1870 | 30 Mar 2020 | Stefan Ritt | Info | mbedtls, mhttpd mongoose 6.16 update
I had a quick look at the new mongoose code and didn't find anything I dislike. Did a quick test of the proxy, which worked and is nice to have. 
Agree with all KO said about authentication.

So if there are no complaints, I would suggest that we move the summary of this thread into the official documentation.

Stefan
  2351 | 03 Mar 2022 | Konstantin Olchanski | Info | manalyzer updated
manalyzer was updated to the latest version, mostly multi-threading improvements from 
Joseph and myself. K.O.
  2783 | 22 Jun 2024 | Joseph McKenna | Suggestion | manalyzer thread safety and custom http IP binding
Hi all, I hope this is the right place to post two pull requests; if not, please let me know where I should be submitting them.

Both are fairly small changes, please see them listed below (more details written on the PRs themselves)


- Enable ROOT's thread safety when running in multithreaded mode

This helps users avoid having to take a global thread lock when calling ->Fill() on ROOT histograms and Trees (see the short sketch after the second item below).
https://bitbucket.org/tmidas/manalyzer/pull-requests/5


- Add a command-line argument to specify the IP address for the ROOT HTTP server to bind to

This was a problem I worked around when at ALPHA (by quickly hardcoding the right external IP address into the local build; obviously a bad habit).
https://bitbucket.org/tmidas/manalyzer/pull-requests/6
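
For the first item, a minimal sketch of what this looks like from the user side (not code from the pull request; the histogram names, thread count and fill values here are made up): once ROOT::EnableThreadSafety() has been called at startup, each worker thread can book and fill its own histograms without wrapping every Fill() in a global lock.

#include <TROOT.h>
#include <TH1D.h>
#include <string>
#include <thread>
#include <vector>

int main()
{
   ROOT::EnableThreadSafety();   // protect ROOT's global state for concurrent use

   std::vector<std::thread> workers;
   for (int i = 0; i < 4; i++) {
      workers.emplace_back([i]() {
         // each thread books and fills its own histogram
         TH1D h(("h_thread_" + std::to_string(i)).c_str(), "per-thread histogram", 100, 0, 1);
         for (int k = 0; k < 100000; k++)
            h.Fill(0.5);   // no user-level global lock around Fill()
      });
   }
   for (auto &t : workers)
      t.join();
   return 0;
}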
  2843 | 13 Sep 2024 | Konstantin Olchanski | Suggestion | manalyzer thread safety and custom http IP binding
> - Enable ROOT's thread safety when running in multithreaded mode
> This helps avoid users having to write their call to a global thread lock when calling ->Fill() on ROOT histograms and Trees
> https://bitbucket.org/tmidas/manalyzer/pull-requests/5

merged by hand (the pull request shows as "rejected"; bitbucket has no "merged manually" button).

also noted this change in the documentation: README.md

K.O.
  2977 | 20 Mar 2025 | Konstantin Olchanski | Bug Report | manalyzer module init order problem
Andrea Capra reported a problem with manalyzer module initialization order.

Original manalyzer design relies on the fact that module static constructors run in the same order as they 
appear on the linker command line. This has been true for a very long time, but now we have evidence that on 
Rocky Linux (unknown gcc version), this is no longer true.

The c++ standard does not define any specific order for static constructors in different source files. 
(unlike C, C++ tends to treat the linker as something magical and asks the linker to do magical things).

The tmfe c++ modular frontend was designed after manalyzer and one design change is to explicitly construct 
all objects from main() (at the acceptable cost of extra boilerplate code).

One solution is to use GCC attribute "init_priority (priority)", see
https://stackoverflow.com/questions/211237/static-variables-initialisation-order
https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html

To use this, manalyzer module registration should be modified to read:

static TARegister tar __attribute__((init_priority(200))) (new ExampleCxxFactory);

A less magical solution is to change the manalyzer design to be similar to tmfe, where modules are constructed and 
registered explicitly in the order specified by the source code.

K.O.

P.S. I have now run out of time to test this and commit it to the documentation. It also needs to be tested with 
LLVM C++ compilers.
  2999 | 25 Mar 2025 | Konstantin Olchanski | Bug Report | manalyzer module init order problem
> Andrea Capra reported a problem with manalyzer module initialization order.

Permanent solution is now implemented.

In the analyzer module constructor (TARunObject), please set the value of fModuleOrder (default value is 0).

Modules with smaller fModuleOrder will run first, modules with bigger fModuleOrder will run last.

Please use the value range between -1 (built-in EventDump module) and 9999 (built-in InteractiveModule).

Modules with equal fModuleOrder will run in the same order as they were registered (std::stable_sort).

Example manalyzer modules and documentation README.md have been updated.
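
For illustration only (the class name and order value below are made up; the real examples are in the manalyzer repository), setting the order in a module constructor looks roughly like this:

#include "manalyzer.h"

class MyModule: public TARunObject
{
public:
   MyModule(TARunInfo* runinfo) : TARunObject(runinfo)
   {
      fModuleName  = "MyModule";   // assumption: the usual per-module name member
      fModuleOrder = 100;          // larger than the default 0, so this module runs later
   }
};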

Linker command line and GCC attribute methods should also work, but I thought it better to provide explicit 
programmatic control of module ordering.

P.S. the c++ tmfe frontend module design is newer and the problem of module (equipment) ordering is solved by 
constructing them explicitly in main().

manalyzer commit:

commit 6ca130808cd05ead734450391bf6defc8335c04a (HEAD -> master, origin/master, origin/HEAD)
Author: Konstantin Olchanski <olchansk@daq00.triumf.ca>
Date:   Tue Mar 25 15:53:13 2025 -0700

    implement TARunObject::fModuleOrder to specify required ordering of analyzer modules

K.O.
  3011 | 30 Mar 2025 | Konstantin Olchanski | Bug Fix | manalyzer improvements
updated manalyzer:

- similar to the --jsroot switch, in online mode the ROOT output file now remains open after the run is stopped. Previously, after the run was 
stopped, all histograms etc. would disappear from JSROOT, making it hard to look at the full collected and analyzed data.

- there was a buglet in the multithreading code: if some module cannot analyze flow events as fast as we can read data from disk, 
the flow event queue of the first module thread would grow and grow infinitely, potentially consuming lots of RAM. This is 
because the queue size check for the first module thread was disabled to avoid a deadlock. I now added the queue size check to the 
main event loop (both offline mode and online mode) and this problem should now be fixed.

- also adjusted the default queue size from 100 to 1000 and queue-full wait sleep time from 100 us to 10 us.

- another buglet was in the flow event processing. Per the README, module EndRun() should not generate flow events (instead, they 
should be generated in PreEndRun()). Previously this was not enforced; now there is an error message about this and the offending 
flow events are deleted (they were not being processed anyway).

K.O.
  3010 | 28 Mar 2025 | Konstantin Olchanski | Bug Fix | manalyzer -R8082 --jsroot
When processing MIDAS files offline, JSROOT did not work: -Rxxx worked and the http 
connection would open, but it would not serve any histograms. This should now be 
fixed.

In addition, normally, after processing all input MIDAS files, manalyzer would 
exit and JSROOT would abruptly stop. To look at the final results one had to open the 
ROOT files using some other method (roody, TBrowser, mjsroot, etc).

I now added a command line switch "--jsroot"; if supplied, after processing all 
input MIDAS files, manalyzer will keep running in JSROOT server mode (same as 
mjsroot).

"manalyzer -R8082 --jsroot run*.mid.lz4" now does something useful: open 
http://localhost:8082 (or ssh tunnel or mhttpd proxy per my mjsroot message) and 
watch histograms fill in real time, after analysis finishes, keep looking at the 
final results until bored. stop manalyzer using Ctrl-C. (we should add a "Stop 
JSROOT" botton to the JSROOT main page).

MIDAS commit 1d0d6448c3ec4ffd225b8d2030fe13e379fcd007

K.O.
  2803 | 08 Aug 2024 | Stefan Ritt | Info | mana.cxx
We are considering removing the analyzer framework mana.cxx from MIDAS. It 
currently has some compiler warnings, and we wonder if we should fix them (which 
would take some time) or just remove the file. We now have two much more modern 
analyzer frameworks, "manalyzer" and "ROOTANA", which should be used instead.

Is anybody still using the mana.cxx framework?

/Stefan
  2810 | 23 Aug 2024 | Stefan Ritt | Info | mana.cxx
Ok, no relevant complaints so far, so I removed mana and rmana from the CMake build 
process, but left the file mana.cxx in the repository for educational 
purposes ;-)

Stefan
  2837 | 11 Sep 2024 | Konstantin Olchanski | Info | mana.cxx
> Ok, no relevant complains so far, so I removed mana and rmana from the CMake build 
> process, but left the file mana.cxx still in the repository for educational 
> purposes ;-)

+1

K.O.
  114 | 31 Oct 2003 | Konstantin Olchanski | | mana.c without ROOT and HBOOK
Stephan, why did you prohibit building mana.c without ROOT and HBOOK
support? I think such a configuration is valid and should be allowed.

Also, this prohibition broke the Midas Makefile; it now bombs building
mana.c. The Makefile is set up for building hmana.c with HBOOK support,
rmana.c with ROOT support (if ROOTSYS is set) and mana.c without HBOOK and
ROOT support (currently bombs on the #error in mana.c).

K.O.
  115 | 01 Nov 2003 | Stefan Ritt | | mana.c without ROOT and HBOOK
> Stephan, why did you prohibit building mana.c without ROOT and HBOOK
> support? I think such a configuration is valid and should be allowed.

Oops, sorry, my fault. I forgot that people use mana.c without ROOT and 
HBOOK. The reason I made the change was that people forgot the -DHAVE_HBOOK 
in their makefile. In that case, no HBOOK init is done in mana.c and the 
first histogram booking in the user code crashes HBOOK.

So please take the #error statement out of mana.c (I'm away in two hours for 
one week), but think about preventing the above-mentioned problem. I don't 
know any way for the makefile or mana.c to figure out if there is any HF1 
call in the user code. Actually, HF1 should return a "proper" error message 
rather than just crashing.

One possibility is that we put an additional layer on top of the histogram 
booking/filling. These macros would be converted to their HBOOK or ROOT 
equivalents depending on HAVE_HBOOK/HAVE_ROOT. If neither is 
present, the histogram booking macro can produce a runtime error. This has 
the additional advantage that users can switch from HBOOK to ROOT without 
changing their user code.
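
Something along these lines, roughly (BOOK1/FILL1, root_book1 and root_fill1 are invented names for illustration, not MIDAS code):

#include <stdio.h>

#ifdef HAVE_HBOOK
  #define BOOK1(id, title, nb, lo, hi)  HBOOK1(id, title, nb, lo, hi, 0.f)
  #define FILL1(id, x, w)               HF1(id, x, w)
#elif defined(HAVE_ROOT)
  /* assumes a user-provided lookup from integer id to TH1* */
  #define BOOK1(id, title, nb, lo, hi)  root_book1(id, title, nb, lo, hi)
  #define FILL1(id, x, w)               root_fill1(id, x, w)
#else
  /* neither HBOOK nor ROOT compiled in: fail loudly at run time instead of crashing */
  #define BOOK1(id, title, nb, lo, hi)  fprintf(stderr, "mana: no histogram support compiled in\n")
  #define FILL1(id, x, w)               fprintf(stderr, "mana: no histogram support compiled in\n")
#endif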
  116 | 01 Nov 2003 | Konstantin Olchanski | | mana.c without ROOT and HBOOK
> > Stephan, why did you prohibit building mana.c without ROOT and HBOOK
> > support? I think such a configuration is valid and should be allowed.
> 
> Oops, sorry, my fault. I forgto that people use mana.c without ROOT and 
> HBOOK. The reason I made the change was that people forgot the -DHVAE_HBOOK 
> in their makefile. In that case, no HBOOK init is done in mana.c and the 
> first histogram booking in the user code crashes HBOOK.

Ahem. There is only so much rope we can give out to prevent people from shooting
themselves in the foot...

> So please take the #error statement out of mana.c

Done.

> One possibility is that we put an additional layer on top of the histogram 
> boooking/filling. These macros are converted to their HBOOK or ROOT 
> equivalents depending on the HAVE_HBOOK/HAVE_ROOT. If none of both is 
> present, the histogram booking macro can produce a runtime error. This has 
> the additional advantage that users can switch from HBOOK to ROOT without 
> change of their user code.

I can't think of anything other than wrapping every HBOOK call with "if
(!hbook_is_initialized) initialize_hbook();". But then, where is PAWC
coming from anyway?!?

We could also print a warning message "This mana.c has no HBOOK support. If you
see HBOOK crashes, please relink with hmana.c". Ugly, but informative, plus it
points anybody who knows how to read towards a solution.

K.O.
  1577 | 27 Jun 2019 | Konstantin Olchanski | Bug Report | make linux32 bombs on el7 in crc32c.c, ERROR INSTALLING 32BIT MIDAS LIBRARIES ON 64BIT HOST MACHINE
Reproduced on el7 (CentOS7). Same thing works on el6 (SL6).

The error is in the SSE4.2-assembly-accelerated library for computing crc32c checksums. I do 
not understand this assembly stuff enough to tell what goes wrong.

In any case, "make linux32" is intended for VME processors that cannot run 64-bit code. These 
processors also happen to not have the SSE4.2 instructions needed for this code to actually 
work, so one solution would be to always disable crc32c SSE4.2-assembly-acceleration for the 
linux32 target.

Note that the original reporter was running "make linux32" with the idea of generating code for 
a 32-bit ARM processor.

Here the situation is like this: the required CRC32C instructions are present on 64-bit-capable 
ARM processors (RPi3, etc.) and probably work in 32-bit mode, and I found an assembly-
language crc32c library that uses them. It needs to be added to MIDAS and tested.

For the older 32-bit-only ARM processors (Cyclone5 FPGA, RPi2 and older) I do not see any 
hardware-accelerated crc32c implementations, so software-computed crc32c is always used. 
(P.S.: I see the current linux kernels have a hardware-accelerated library for CRC32C, not sure if 
it can be imported into MIDAS easily).

K.O.


> $ make linux32 ...
> g++ -m32 -c -g -O2 -Wall -Wno-strict-aliasing -Wuninitialized -Iinclude -
> Idrivers -Imxml -Imscb/include -DHAVE_FTPLIB -D_LARGEFILE64_SOURCE -DHAVE_ZLIB -
> DHAVE_MSCB -DHAVE_MONGOOSE6 -DMG_ENABLE_THREADS -DMG_DISABLE_CGI -
> DMG_ENABLE_SSL 
> -DOS_LINUX -fPIC -Wno-unused-function -o lib/crc32c.o src/crc32c.cxx
> src/crc32c.cxx: In function ‘uint32_t crc32c_hw(uint32_t, const void*, size_t)’:
> src/crc32c.cxx:283:66: error: ‘asm’ operand has impossible constraints
>                      : "r"(next), "0"(crc0), "1"(crc1), "2"(crc2));
>                                                                   ^
> src/crc32c.cxx:303:66: error: ‘asm’ operand has impossible constraints
>                      : "r"(next), "0"(crc0), "1"(crc1), "2"(crc2));
>                                                                   ^
> src/crc32c.cxx: In function ‘uint32_t crc32c(uint32_t, const void*, size_t)’:
> src/crc32c.cxx:348:34: error: PIC register clobbered by ‘%ebx’ in ‘asm’
>                  : "%ebx", "%edx"); \
>                                   ^
> src/crc32c.cxx:362:5: note: in expansion of macro ‘SSE42’
>      SSE42(sse42);
>      ^
> make[1]: *** [lib/crc32c.o] Error 1
> make[1]: Leaving directory `/home/hh19285/packages/midas'
> make: *** [linux32] Error 2
> 
> Could you please help with getting past this? otherwise we may need to change 
> our whole experimental setup.
> 
> Thank you in advance
  1580 | 28 Jun 2019 | Konstantin Olchanski | Bug Report | make linux32 bombs on el7 in crc32c.c, ERROR INSTALLING 32BIT MIDAS LIBRARIES ON 64BIT HOST MACHINE
> Reproduced on el7 (CentOS7). Same thing works on el6 (SL6).

Fixed in commit dd937e6. Only enable SSE4.2 crc32c for 64-bit compilation. Still not sure why it worked for 32-bit 
compilation on el6 (SL6).
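
The idea, roughly (a hedged sketch, not the literal code of commit dd937e6):

#include <stdint.h>
#include <stddef.h>

/* sketch of the fix: crc32c_hw() is the SSE4.2 routine seen in the error
   messages above; "crc32c_sw" stands for the software fallback (exact name
   assumed); the real code presumably also checks at run time that the CPU
   supports SSE4.2 before taking the hardware path */
uint32_t crc32c(uint32_t crc, const void* buf, size_t len)
{
#if defined(__x86_64__)
   extern uint32_t crc32c_hw(uint32_t crc, const void* buf, size_t len);
   return crc32c_hw(crc, buf, len);   /* hardware-accelerated, 64-bit builds only */
#else
   extern uint32_t crc32c_sw(uint32_t crc, const void* buf, size_t len);
   return crc32c_sw(crc, buf, len);   /* portable table-driven fallback */
#endif
}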

K.O.


> 
> The error is in the SSE4.2-assembly-accelerated library for computing crc32c checksums. I do 
> not understand this assembly stuff enough to tell what goes wrong.
> 
> In any case, "make linux32" is intended for VME processors that cannot run 64-bit code. These 
> processors also happen to not have the SSE4.2 instructions needed for this code to actually 
> work, so one solution would be to always disable crc32c SSE4.2-assembly-acceleration for the 
> linux32 target.
> 
> Note that the original reported was running "make linux32" with the idea of generating code for 
> the 32-bit ARM processor.
> 
> Here the situation is like this: the required CRC32C instructions are present on 64-bit capable 
> ARM processors (RPi3, etc) and probably work in 32-bit mode, and I found the assembly-
> language crc32c library that uses them. It needs to be added to MIDAS and tested.
> 
> For the older 32-bit-only ARM processors (Cyclone5 FPGA, Rpi2 and older) I do not see any 
> hardware accelerated crc32c implementations, so it uses software computed CRC32 always. 
> (P.S.: I see the current linux kernels have a hardware accelerated library for CRC32C, not sure if 
> it can be imported into MIDAS easily).
> 
> K.O.
> 
> 
> > $ make linux32 ...
> > g++ -m32 -c -g -O2 -Wall -Wno-strict-aliasing -Wuninitialized -Iinclude -
> > Idrivers -Imxml -Imscb/include -DHAVE_FTPLIB -D_LARGEFILE64_SOURCE -DHAVE_ZLIB -
> > DHAVE_MSCB -DHAVE_MONGOOSE6 -DMG_ENABLE_THREADS -DMG_DISABLE_CGI -
> > DMG_ENABLE_SSL 
> > -DOS_LINUX -fPIC -Wno-unused-function -o lib/crc32c.o src/crc32c.cxx
> > src/crc32c.cxx: In function ‘uint32_t crc32c_hw(uint32_t, const void*, size_t)’:
> > src/crc32c.cxx:283:66: error: ‘asm’ operand has impossible constraints
> >                      : "r"(next), "0"(crc0), "1"(crc1), "2"(crc2));
> >                                                                   ^
> > src/crc32c.cxx:303:66: error: ‘asm’ operand has impossible constraints
> >                      : "r"(next), "0"(crc0), "1"(crc1), "2"(crc2));
> >                                                                   ^
> > src/crc32c.cxx: In function ‘uint32_t crc32c(uint32_t, const void*, size_t)’:
> > src/crc32c.cxx:348:34: error: PIC register clobbered by ‘%ebx’ in ‘asm’
> >                  : "%ebx", "%edx"); \
> >                                   ^
> > src/crc32c.cxx:362:5: note: in expansion of macro ‘SSE42’
> >      SSE42(sse42);
> >      ^
> > make[1]: *** [lib/crc32c.o] Error 1
> > make[1]: Leaving directory `/home/hh19285/packages/midas'
> > make: *** [linux32] Error 2
> > 
> > Could you please help with getting past this? otherwise we may need to change 
> > our whole experimental setup.
> > 
> > Thank you in advance