Midas DAQ System ELOG
ID     Date          Author                 Topic        Subject
  1972   10 Aug 2020   Ivo Schulthess   Bug Report   data missing in runXXXXXX.mid
> with "dump" I meant a true object dump like "hexdump -C run000001.mid". I produced a file with ADC0 and TDC0 
> banks (that's the example from the distribution under examples/experiment/frontend.cxx), and I get
> 
> ....
> 00024220  01 00 00 00 41 44 43 30  04 00 08 00 eb 06 35 04  |....ADC0......5.|
> 00024230  31 09 4f 06 54 44 43 30  04 00 08 00 93 04 fb 07  |1.O.TDC0........|
> 00024240  5c 09 88 0b 01 00 00 00  01 00 00 00 2a 0b 31 5f  |\...........*.1_|
> 00024250  28 00 00 00 20 00 00 00  01 00 00 00 41 44 43 30  |(... .......ADC0|
> 00024260  04 00 08 00 c3 09 24 05  85 05 f3 06 54 44 43 30  |......$.....TDC0|
> 00024270  04 00 08 00 88 08 2d 03  3b 0d d6 02 01 00 00 00  |......-.;.......|
> 00024280  02 00 00 00 2a 0b 31 5f  28 00 00 00 20 00 00 00  |....*.1_(... ...|
> 00024290  01 00 00 00 41 44 43 30  04 00 08 00 a5 0a 69 09  |....ADC0......i.|
> 
> where you clearly see the ADC0 and TDC0 banks.
> 
> Stefan

So at least I learned something new. I tried it with hexdump, and the banks are not present in the .mid file; I
only have the ODB inside the file. The 7K difference in size is just about what I expect for the data
(1792 x 4 bytes = 7168 bytes, i.e. about 7K).

Best, Ivo
  1971   10 Aug 2020   Stefan Ritt   Bug Report   data missing in runXXXXXX.mid
> So I did a quick check. The file size is about the same (322K and 329K). When I dump the .mid I don't see 
> the banks. It only prints two lines with "------ Event# 0 ------" and "------ Event# 1 ------" whereas for 
> the file with data I get the two banks with all the data. Our online analyzer also fails to see the banks. 
> Is there another way to check what is in the .mid file?

with "dump" I meant a true object dump like "hexdump -C run000001.mid". I produced a file with ADC0 and TDC0 
banks (that's the example from the distribution under examples/experiment/frontend.cxx), and I get

....
00024220  01 00 00 00 41 44 43 30  04 00 08 00 eb 06 35 04  |....ADC0......5.|
00024230  31 09 4f 06 54 44 43 30  04 00 08 00 93 04 fb 07  |1.O.TDC0........|
00024240  5c 09 88 0b 01 00 00 00  01 00 00 00 2a 0b 31 5f  |\...........*.1_|
00024250  28 00 00 00 20 00 00 00  01 00 00 00 41 44 43 30  |(... .......ADC0|
00024260  04 00 08 00 c3 09 24 05  85 05 f3 06 54 44 43 30  |......$.....TDC0|
00024270  04 00 08 00 88 08 2d 03  3b 0d d6 02 01 00 00 00  |......-.;.......|
00024280  02 00 00 00 2a 0b 31 5f  28 00 00 00 20 00 00 00  |....*.1_(... ...|
00024290  01 00 00 00 41 44 43 30  04 00 08 00 a5 0a 69 09  |....ADC0......i.|

where you clearly see the ADC0 and TDC0 banks.
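
If reading the raw hexdump is cumbersome, the same check can be scripted. Below is only a rough 
sketch (it is not a MIDAS tool and does not parse the bank structure, it merely counts occurrences 
of the 4-character ASCII bank names; the file name and bank names are just the ones from this example):

   #include <fstream>
   #include <iostream>
   #include <iterator>
   #include <string>

   int main() {
      std::ifstream f("run000001.mid", std::ios::binary);
      if (!f) {
         std::cerr << "cannot open file" << std::endl;
         return 1;
      }
      // slurp the whole file into memory (fine for a few-hundred-kB .mid file)
      std::string data((std::istreambuf_iterator<char>(f)),
                       std::istreambuf_iterator<char>());
      for (const std::string name : {"ADC0", "TDC0"}) {
         size_t n = 0;
         for (size_t pos = data.find(name); pos != std::string::npos;
              pos = data.find(name, pos + 1))
            n++;
         std::cout << name << " appears " << n << " time(s)" << std::endl;
      }
      return 0;
   }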

Stefan
  1970   10 Aug 2020   Ivo Schulthess   Bug Report   data missing in runXXXXXX.mid
> Then I'm running out of ideas. Things I would check:
> 
> - Are the file sizes about the same? 
> 
> - When you dump the .mid file, do you see your bank names? 
> 
> This would tell you if the events are really missing or if mdump would just not find them.
> 
> But I guess without being able to debug the system at ILL I cannot be of any more help. You are the 
> first one reporting such a problem, so it must have to do with your local setup.
> 
> Stefan

So I did a quick check. The file size is about the same (322K and 329K). When I dump the .mid I don't see 
the banks. It only prints two lines with "------ Event# 0 ------" and "------ Event# 1 ------" whereas for 
the file with data I get the two banks with all the data. Our online analyzer also fails to see the banks. 
Is there another way to check what is in the .mid file?

Best,
Ivo
  1969   10 Aug 2020   Stefan Ritt   Bug Report   data missing in runXXXXXX.mid
> Both set to -1. We only have one logging channel. If we run a sequence with a few runs and the 
> same settings, sometimes data is in the .mid file and sometimes it is not.

Then I'm running out of ideas. Things I would check:

- Are the file sizes about the same? 

- When you dump the .mid file, do you see your bank names? 

This would tell you if the events are really missing or if mdump would just not find them.

But I guess without being able to debug the system at ILL I cannot be of any more help. You are the 
first one reporting such a problem, so it must have to do with your local setup.

Stefan
  1968   10 Aug 2020   Ivo Schulthess   Bug Report   data missing in runXXXXXX.mid
> > Dear all
> > 
> > We just started our beam time at ILL and just found yesterday that for certain 
> > settings of our detector the data is not saved into the .mid files. Running "mdump 
> > -l 10" online we see the data coming in as they should. Nevertheless, if we run 
> > "mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are 
> > missing. Any ideas where the data could get lost?
> > 
> > Thanks in advance,
> > Ivo
> 
> Have you checked 
> 
> /Logger/Channels/0/Settings/Event ID = -1
> /Logger/Channels/0/Settings/Trigger mask = -1
> 
> If these settings are not -1, they filter the data stream for certain events and trigger 
> masks.
> 
> Stefan

Good morning Stefan

Both set to -1. We only have one logging channel. If we run a sequence with a few runs and the 
same settings, sometimes data is in the .mid file and sometimes it is not.

Best,
Ivo 
  1967   10 Aug 2020   Stefan Ritt   Bug Report   data missing in runXXXXXX.mid
> Dear all
> 
> We just started our beam time at ILL and just found yesterday that for certain 
> settings of our detector the data is not saved into the .mid files. Running "mdump 
> -l 10" online we see the data coming in as they should. Nevertheless, if we run 
> "mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are 
> missing. Any ideas where the data could get lost?
> 
> Thanks in advance,
> Ivo

Have you checked 

/Logger/Channels/0/Settings/Event ID = -1
/Logger/Channels/0/Settings/Trigger mask = -1

If these settings are not -1, they filter the data stream for certain events and trigger 
masks.
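
For a scripted check, something along these lines should work. This is only a sketch using the 
odbxx C++ API that appears further down this page, assuming the usual MIDAS environment; the 
client name "check_logger" is made up, and host/experiment names are taken from the environment:

   #include "midas.h"
   #include "odbxx.h"
   #include <iostream>

   int main() {
      // "" arguments: take host and experiment name from the environment
      if (cm_connect_experiment("", "", "check_logger", nullptr) != CM_SUCCESS)
         return 1;
      midas::odb ch("/Logger/Channels/0/Settings");
      int event_id     = ch["Event ID"];     // -1 means "no filtering"
      int trigger_mask = ch["Trigger mask"]; // -1 means "no filtering"
      std::cout << "Event ID = " << event_id
                << ", Trigger mask = " << trigger_mask << std::endl;
      cm_disconnect_experiment();
      return 0;
   }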

Stefan
  1966   10 Aug 2020   Ivo Schulthess   Bug Report   data missing in runXXXXXX.mid
Dear all

We just started our beam time at ILL and just found yesterday that for certain 
settings of our detector the data is not saved into the .mid files. Running "mdump 
-l 10" online we see the data coming in as they should. Nevertheless, if we run 
"mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are 
missing. Any ideas where the data could get lost?

Thanks in advance,
Ivo
  1965   07 Aug 2020   Konstantin Olchanski   Info   update of MYSQL history documentation
I updated the documentation for setting up a MYSQL (MariaDB) database for 
recording MIDAS history: https://midas.triumf.ca/MidasWiki/index.php/History_System#Write_MYSQL-history_events

One thing to note: the "writer" user must have the "INDEX" permission, otherwise 
many things will not work correctly.

Included are the instructions for importing existing *.hst history files into the 
SQL database: mh2sql --mysql mysql_writer.txt *.hst

Let me know if there is interest in adding support for writing into a PostgreSQL 
database. We used to support both MySQL and Postgres through the ODBC library, 
but in the new code, each database has to be supported through its native API. 
There is code for SQLITE and MYSQL, but no code for Postgres, although it is not too 
hard to add.

K.O.
  1964   15 Jul 2020   Stefan Ritt   Info   Minimal CMakeLists.txt for your midas front-end
Since a few people asked me, here is a "minimal" CMakeLists.txt file for a user-written front-end 
program "myfe":

---------------------------

cmake_minimum_required(VERSION 3.0)
project(myfe)

# Check for MIDASSYS environment variable
if (NOT DEFINED ENV{MIDASSYS})
   message(SEND_ERROR "MIDASSYS environment variable not defined.")
endif()

set(CMAKE_CXX_STANDARD 11)
set(MIDASSYS $ENV{MIDASSYS})

if (${CMAKE_SYSTEM_NAME} MATCHES Linux)
   set(LIBS -lpthread -lutil -lrt)
endif()

add_executable(myfe myfe.cxx)

target_include_directories(myfe PRIVATE ${MIDASSYS}/include)
target_link_libraries(myfe ${MIDASSYS}/lib/libmfe.a ${MIDASSYS}/lib/libmidas.a ${LIBS})
  1963   15 Jul 2020   Stefan Ritt   Info   Makefile update
Please note that you can also compile midas in the standard cmake way with

$ mkdir build
$ cd build
$ cmake ..
$ make install

in the root midas directory. You might have to use "cmake3" on some systems.

Stefan
  1961   28 Jun 2020   Konstantin Olchanski   Info   Makefile update
I reworked the MIDAS Makefile to simplify things and to remove redundancy with functions 
provided by cmake.

When you say "make", the list of options is printed.

The first and main options are "make cmake" and "make cclean" to run the cmake build.

This is my recommended way to build midas - the output of "make cmake" was tuned to provide 
the information needed to debug build problems (all compiler commands, command-line switches 
and file paths are reported). (A normal "cmake VERBOSE=1" build, in contrast, is tuned for 
debugging of cmake itself and for maximum obfuscation of problems building the actual project.)

Build options are implemented through cmake variables:

options that can be added to "make cmake":
      NO_LOCAL_ROUTINES=1 NO_CURL=1
      NO_ROOT=1 NO_ODBC=1 NO_SQLITE=1 NO_MYSQL=1 NO_SSL=1 NO_MBEDTLS=1
      NO_EXPORT_COMPILE_COMMANDS=1

for example "make cmake NO_ROOT=1" to disable auto-detection of ROOT.

Two more make targets create reduced builds of midas:

"make mini" builds a subset of midas suitable for building frontend programs. Big programs 
like mlogger and mhttpd are excluded, optional components like CURL or SQLITE are not needed.

"make remoteonly" builds a subset of midas suitable for building remotely connected 
frontends. Big parts of midas are excluded, many system-dependent functions are excluded, 
etc. This is intended for embedded applications, such as fpga, uclinux, etc.

But wait, there is more. Here is the full list:

daqubuntu:midas$ make
Usage:

   make cmake     --- full build of midas
   make cclean    --- remove everything built by make cmake

   options that can be added to "make cmake":
      NO_LOCAL_ROUTINES=1 NO_CURL=1
      NO_ROOT=1 NO_ODBC=1 NO_SQLITE=1 NO_MYSQL=1 NO_SSL=1 NO_MBEDTLS=1
      NO_EXPORT_COMPILE_COMMANDS=1

   make dox       --- run doxygen, results are in ./html/index.html
   make cleandox  --- remove doxygen output

   make htmllint  --- run html check on resources/*.html

   make test      --- run midas self test

   make mbedtls   --- enable mhttpd support for https via the mbedtls https library
   make update_mbedtls --- update mbedtls to latest version
   make clean_mbedtls  --- remove mbedtls from this midas build

   make mtcpproxy --- build the https proxy to forward root-only port 443 to mhttpd https port 8443

   make mini      --- minimal build, results are in linux/{bin,lib}
   make cleanmini --- remove everything built by make mini

   make remoteonly      --- minimal build, remote connection only, results are in linux-remoteonly/{bin,lib}
   make cleanremoteonly --- remove everything build by make remoteonly

   make linux32   --- minimal x86 -m32 build, results are in linux-m32/{bin,lib}
   make clean32   --- remove everything built by make linux32

   make linux64   --- minimal x86 -m64 build, results are in linux-m64/{bin,lib}
   make clean64   --- remove everything built by make linux64

   make linuxarm  --- minimal ARM cross-build, results are in linux-arm/{bin,lib}
   make cleanarm  --- remove everything built by make linuxarm

   make clean     --- run all 'clean' commands

daqubuntu:midas$ 

K.O.
  1960   28 Jun 2020   Konstantin Olchanski   Info   mhttpd https support openssl -> mbedtls
To add. Using https with either openssl or mbedtls requires obtaining an https certificate. This can be self-
signed, or signed by a higher authority, or issued by the "let's encrypt" project.

mhttpd is looking for this certificate in the file ssl_cert.pem.

If this file does not exist, mhttpd will print the instructions for creating it using openssl (self-signed) or 
using certbot (instantaneously and automatically issued let's encrypt certificate).

The certbot route is recommended:

1) (as root) setup certbot (i.e. see my CentOS and Ubuntu instructions on DAQWiki)
2) (as root) copy /etc/letsencrypt/live/$HOME/fullchain.pem and privkey.pem to $MIDASSYS
3) cat fullchain.pem privkey.pem > ssl_cert.pem
4) start mhttpd, watch the first few lines it prints to confirm it found the right certificate file.

The only missing piece for using this in production is lack of integration
with certbot automatic certificate renewal:

- a script has to run for steps (2) and (3) above
- mhttpd has to tell openssl/mbedtls to reload the certificate file (alternative is to automatically restart 
mhttpd, bad!).

As an alternative, we can wait for the mongoose web server library and for the mbedtls crypto library to "grow" 
certbot-style automatic certificate renewal features. (unavoidable, in my view).

K.O.
  1959   28 Jun 2020   Konstantin Olchanski   Info   mhttpd https support openssl -> mbedtls
For password protection of midas web pages, https is required, good old http 
with passwords transmitted in-the-clear is no longer considered secure. Latest 
recommendation is to run mhttpd behind an industry-standard https proxy, for 
example apache httpd. These proxies provide built-in password protection and 
have integration with certbot to provide automatic renewal of https 
certificates.

That said, for a long time now mhttpd provides native https support through the 
mongoose web server library and the openssl cryptography library.

Unfortunately, for years now, we have been running into trouble with the midas 
build process bombing out due to inconsistent versions and locations of system-
provided and user-installed openssl libraries. Despite our best efforts (and 
through the switch to cmake!) these problems keep coming back and coming back.

Luckily, latest versions of mongoose support the mbedtls cryptography library. I 
have tested it and it works well enough for me to switch the MIDAS default build 
from "openssl if found" to "mbedtls if-asked-for-by-user".

Starting with commit e7b02f9, cmake builds do not look for and do not try to use 
openssl. mhttpd is built without support for https. This is consistent with the 
recommendation to run it behind an apache httpd password protected https proxy.

To enable https support using mbedtls, run "make mbedtls". This will "git clone" 
the mbedtls library and add it to the midas build. mhttpd will be built with 
https support enabled.

To disable mbedtls support, use "make cmake NO_MBEDTLS=1" or run "make 
clean_mbedtls" (this will remove the mbedtls sources from the midas build).

To restore previous use of openssl, set the cmake variable "USE_OPENSSL".

In my test, mhttpd with https through mbedtls and a letsencrypt certificate gained 
a score of "A" from SSLlabs (very good).

(You have to use progs/mtcproxy to run this test - SSLlabs only probes port 443, 
and mtcproxy will forward it to mhttpd port 8443. To build it, run "make 
mtcpproxy".)

References:
https://github.com/cesanta/mongoose
https://github.com/ARMmbed/mbedtls

K.O.
  1958   24 Jun 2020   Stefan Ritt   Info   New image history system available
I'm happy to report that the Corona lockdown in Europe also had some positive side 
effects: I finally found time to implement an image history system in midas, 
something I had wanted to do for many years but never found time for.

The idea is that you can incorporate any network-connected webcam into the midas 
history system. You specify an update interval (like one minute) and the logger 
regularly fetches images from that webcam. The images are stored as raw files in 
the midas history directory and can be retrieved via the web browser similarly to 
the "normal" history. Attached is an image from the MEG Experiment at PSI to give 
you some idea.

The cool thing now is that you can go "backwards" in time and browse all stored 
images. The buttons at each image allow you to step backward or forward and to play a 
movie of images, forward or backward. You can query for a certain date/time and 
download a specific image to your local disk. You can even synchronize all time 
axes and drag left and right on each image to see your experiment from different 
cameras at the same time stamps. A blue ribbon below each image shows the 
time stamps for which an image is available.

Initially, only the most recent image is loaded to speed up loading time. As soon 
as you click on the image or one of the arrow buttons, previous images are loaded 
progressively, which you can see in the ribbon bar becoming blue. For slow internet 
connections this can take some time. For typical webcams and a one-minute update 
period, you typically get a few GB per week.

To make this happen, you define a new ODB subtree 

/History/Images/<name>/
  Name           Name of camera
  Enabled        Boolean to enable readout of camera
  URL            URL to fetch an image from the camera
  Period         Time period in seconds to fetch a new image
  Storage hours  Number of hours to store the images (0 for infinite)
  Extension      Image file extension, usually ".jpg" or ".png"
  Timescale      Initial horizontal time scale (like 8h)
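
Since the logger reads these settings from the ODB, the subtree can also be created 
programmatically. The following is only a sketch using the odbxx API shown further down 
this page; the camera name and URL are made up and the key types are guessed from the 
descriptions above:

   #include "midas.h"
   #include "odbxx.h"

   int main() {
      cm_connect_experiment("", "", "setup_cam", nullptr);
      midas::odb cam = {
         {"Name", "MyCam"},                              // made-up camera name
         {"Enabled", true},
         {"URL", "http://mycam/axis-cgi/jpg/image.cgi"}, // placeholder URL
         {"Period", 60},                                 // seconds
         {"Storage hours", 0},                           // 0 = keep forever
         {"Extension", ".jpg"},
         {"Timescale", "8h"}
      };
      cam.connect("/History/Images/MyCam");
      cm_disconnect_experiment();
      return 0;
   }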

The tricky part is to obtain the URL from your camera. For some cameras you can get 
it from the manual; for others you have to "hack" it: display an image in your browser 
using the camera's internal web interface, inspect the source code of that web page, 
and you get the URL. For the AXIS cameras I use, the URL is typically

http://<name>/axis-cgi/jpg/image.cgi

For the Netatmo cameras I have at home (which I used during development in my home 
office), the procedure is more complicated, but you can google it. The logger is 
now linked against the CURL library to fetch images, so it also supports https://. 
If libcurl is not installed on your system, the image history functionality will be 
disabled.

I have tested the system for a few days now and it seems stable, which however does not 
mean that it is bug-free, so please report back any issues. The change is committed 
to the current develop branch.

I hope this extension helps all those people who are forced to do more remote 
monitoring of experiments during these times.

Best,
Stefan
  1957   24 Jun 2020   Marius Koeppel   Suggestion   ODB++ API - documentation updates and odb view after key creation
Hi Stefan,

now everything works well (tested on openSUSE and Arch Linux) :) 

Thank you for the fix.

Cheers,
Marius

> Hi Marius,
> 
> thanks for your help, you identified the problematic location. I changed that to
> 
>    u_odb(bool v) : m_tid{TID_BOOL}, m_parent_odb{nullptr} {m_string = nullptr; m_bool = v;};
> 
> which should initialize the full 8 bytes of the u_odb union. I committed to develop. Can you 
> please give it a try?
> 
> Best,
> Stefan
> 
> 
> > and looking at 
> > 
> > odbxx.h:
> >         u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   
> > 
> > only m_bool is set for this instance, meaning that only the first byte gets a value 
> > (a bool is still only 1 byte in C++). If I check m_string inside the u_odb::get function
> > of this instance, I get for a bool (I set false) something like 0x7f6633f67a00, and for an int 
> > (I set the int to 1000) 0x7f66000003e8. Since the ODB type BOOL is larger, I get the 
> > wrong value. I also checked this on openSUSE and see the same behavior.
  1956   23 Jun 2020   Stefan Ritt   Suggestion   ODB++ API - documentation updates and odb view after key creation
Hi Marius,

thanks for your help, you identified the problematic location. I changed that to

   u_odb(bool v) : m_tid{TID_BOOL}, m_parent_odb{nullptr} {m_string = nullptr; m_bool = v;};

which should initialize the full 8 bytes of the u_odb union. I committed to develop. Can you 
please give it a try?

Best,
Stefan


> and looking at 
> 
> odbxx.h:
>         u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   
> 
> only m_bool is set for this instance, meaning that only the first byte gets a value 
> (a bool is still only 1 byte in C++). If I check m_string inside the u_odb::get function
> of this instance, I get for a bool (I set false) something like 0x7f6633f67a00, and for an int 
> (I set the int to 1000) 0x7f66000003e8. Since the ODB type BOOL is larger, I get the 
> wrong value. I also checked this on openSUSE and see the same behavior.
  1955   19 Jun 2020   Isaac Labrie-Boulay   Info   Building/running a Frontend Task
To build a frontend task, the user code and system code are compiled and linked 
together with the required libraries by running a Makefile (e.g. 
../midas/examples/experiment/Makefile in the MIDAS package).

I tried building the CAMAC example frontend and I get this error:

g++: error: /home/rcmp/packages/midas/drivers/camac/ces8210.c: No such file or 
directory
g++: error: /home/rcmp/packages/midas/linux/lib/libmidas.a: No such file or 
directory
make: *** [camac_init.exe] Error 1

Obviously, I'm running the "make all" command from the camac directory. Why 
would I get this "no such file" error? Do I need to download the MIDAS packages 
inside my experiment directory?

Thanks for helping me out.

Isaac
  1954   18 Jun 2020   Stefan Ritt   Forum   ODB key length
No. But if you need more than 32 characters, you are doing something wrong. The 
information you want to put into the ODB key name should probably be stored in 
a separate string key instead.
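
As an illustration only (the path and key names below are made up), the long text goes into a 
string value with a short key name, here written with the odbxx API used elsewhere on this page:

   #include "midas.h"
   #include "odbxx.h"

   int main() {
      cm_connect_experiment("", "", "key_demo", nullptr);
      midas::odb s = {
         // short key name, long descriptive text stored as the value
         {"Description", "Channel 42, upstream veto counter, HV module 3, crate 1"}
      };
      s.connect("/Equipment/MyDetector/Settings");
      cm_disconnect_experiment();
      return 0;
   }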

Stefan


> Hello,
> 
> I have a question about the length of ODB key names.
> Is it possible to create an ODB key with a name longer than 32 characters?
> 
> Thanks.
> Ruslan
  1953   18 Jun 2020   Ruslan Podviianiuk   Forum   ODB key length
Hello,

I have a question about the length of ODB key names.
Is it possible to create an ODB key with a name longer than 32 characters?

Thanks.
Ruslan
  1951   16 Jun 2020   Marius Koeppel   Suggestion   ODB++ API - documentation updates and odb view after key creation
Hi Stefan,

I played around with the code a bit more and I found out that if I do:

midas::odb test_settings = {{"Enable", false}};
test_settings.connect("/Equipment/Test/Test", true);

The correct value ends up in the ODB. In this case a u_odb instance is created
with a clean m_string. But if I run the other code, an odb instance is created and
the values of m_data are set in

odbxx.h:
        odb(std::initializer_list<std::pair<const char *, midas::odb>> list) : odb() {...

These values are coming from u_odb instances, since the code does:

odbxx.h:
        auto o = new midas::odb(element.second);

and then

odbxx.h:
        odb(T v):odb() {
           m_num_values = 1;
           m_data = new u_odb[1]{v};
           m_tid = m_data[0].get_tid();
           m_data[0].set_parent(this);
        }

and looking at 

odbxx.h:
        u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   

only m_bool is set for this instance, meaning that only the first byte gets a value 
(a bool is still only 1 byte in C++). If I check m_string inside the u_odb::get function
of this instance, I get for a bool (I set false) something like 0x7f6633f67a00, and for an int 
(I set the int to 1000) 0x7f66000003e8. Since the ODB type BOOL is larger, I get the 
wrong value. I also checked this on openSUSE and see the same behavior.
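
The effect can be reproduced outside of midas with a small stand-alone sketch; the union below is 
a simplified stand-in for u_odb, not the real one, and the read through a wider member is done 
only to illustrate what happens:

   #include <cstdio>
   #include <cstring>

   // simplified stand-in for the u_odb union from odbxx.h (illustration only)
   union u_demo {
      bool  m_bool;
      int   m_int;
      char *m_string;   // 8 bytes on a typical 64-bit machine
   };

   int main() {
      u_demo u;
      std::memset(&u, 0xAA, sizeof(u)); // simulate leftover garbage in the union
      u.m_bool = false;                 // writes only the first byte
      // reading the union through a wider member still sees the leftover bytes,
      // which is why the fixed constructor clears m_string before setting m_bool
      std::printf("m_string = %p\n", static_cast<void *>(u.m_string));
      return 0;
   }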

Like you I am not getting this problem on my Mac. What compiler flags do you use on your Mac?

Cheers,
Marius