Entry  28 Aug 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indicies  fedummy.cxx Makefile
Hello experts,

I have been writing a SC frontend for a power supply. I have used the model 
where the frontend can be started with the "-i n" option so that each frontend 
instance can control a different supply. During the development/testing of the 
program I would normally only run a single instance with "-i 1". However, when I 
started a second instance with "-i 2" I found problems with the history plots that
were being made for the original "-i 1" instance. The variable being plotted
seemed to randomly jump between the value from the "-i 1" instance and 
the "-i 2" instance. I confirmed that the "correct" values exist for each 
frontend in the ODB under /Equipment/Foo01/Variables and 
/Equipment/Foo02/Variables.

This is not just a plotting artifact, since I was also
able to see the two different values by running mhist.

I saw this behaviour using midas-2019-03 and also the head of the development
branch (686e4de2b55023b0d1936c60bcf4767c5e6caac0 from just under 48 hours ago). 

I was able to reproduce this with a stripped down frontend that just 
sets a variable that is equal to its frontend_index. Please find the code 
and Makefile attached. Presumably I've done something wrong in my 
implementation that hopefully a more experienced person can spot quite 
quickly, but please let me know if any more information is needed.

I have seen this behaviour on both Debian 10 and on a CentOS 7 Singularity 
image running on top of Debian 10.

Thanks,

Nick.

P.S. I made the topic of this post "Forum" and not "Bug Report" since I
expect the root of this problem is somewhere between the keyboard and chair.
    Reply  28 Aug 2019, Stefan Ritt, Forum, History plot problems for frontend with multiple indicies 
My first question would be why are you using several front-ends at all? That makes things more 
complicated than needed. In the normal FE framework, you can define either several equipment 
served by one frontend, or even one equipment linked to several devices. In the MEG experiment 
we have one slow control frontend controlling ~100 devices without problem. In the old days there 
was a problem that some slow devices could throttle the readout, but since the invention of multi-
threaded slow control equipment, each device gets its own thread so they don't block each other.
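
For illustration, here is a minimal sketch of such a device list (an illustration only: it assumes the 
stock "nulldev" device driver, the "null" bus driver and the DF_MULTITHREAD flag shipped with MIDAS; a 
real frontend would use the actual device driver for the power supply):

DEVICE_DRIVER ps_driver[] = {
   {"Supply01", nulldev, 2, null, DF_MULTITHREAD},   // each entry is read out in its own thread
   {"Supply02", nulldev, 2, null, DF_MULTITHREAD},
   {""}                                              // list terminator
};

The whole list is then attached to a single slow control equipment via a class driver in the usual way.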

Stefan
       Reply  28 Aug 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indicies 
Hi Stefan,

thanks for your quick reply.

> My first question would be why are you using several font-ends at all?

Because I was following the model used for many of the frontends for the ND280 FGD.

> That makes things more 
> complicated than needed. In the normal FE framework, you can define either several equipment 
> served by one frontend, or even one equipment linked to several devices. In the MEG experiment 
> we have one slow control frontend controlling ~100 devices without problem. In the old days there 
> was a problem that some slow devices could throttle the readout, but since the invention of multi-
> threaded slow control equipment, each device gets its own thread so they don't block each other.

Perhaps things have changed in the 10 years since the FGD SC code was written. I can do it 
differently, but doing it that way seemed natural since around 90% of the frontend code that I
have seen does it that way.

Nick.
          Reply  28 Aug 2019, lcp, Forum, History plot problems for frontend with multiple indicies 
hi, 

> > That makes things more 
> > complicated than needed. In the normal FE framework, you can define either several equipment 
> > served by one frontend, or even one equipment linked to several devices. In the MEG experiment 
> > we have one slow control frontend controlling ~100 devices without problem. In the old days there 
> > was a problem that some slow devices could throttle the readout, but since the invention of multi-
> > threaded slow control equipment, each device gets its own thread so they don't block each other.
> 

I agree with Stefan that it's probably better to run a multi-threaded setup than individual frontends.

The only place I've ever used the frontend index on startup is when I was testing and building
an eventbuilder.

https://midas.triumf.ca/MidasWiki/index.php/Event_Builder#Example

This might explain why your history is swapping between frontends, as in the event builder it gets
reconstructed.

Maybe this helps...

LCP


> Perhaps things have changed in the 10 years since the FGD SC code was written. I can do it 
> differently but doing it that way seemed naturual since around 90% of the frontend code that I
> have see does it that way.
             Reply  16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indicies 
> it's probably better to run a multi-threaded setup, than individual frontends.

I recommend against using multiple threads if at all possible and unless absolutely required.

Only for one reason: multithreaded c++ programs are notoriously hard to debug.

In addition, one has to face several classes of bugs absent in single-threaded applications:

a) which thread "owns" which object
b) locking of all shared data
c) huge overheads from locking at high data rates (a performance bug)
d) correct locking order, dead locks, live locks
e) incomprehensible core dumps and stack traces
f) race conditions

To control 2 power supplies, run 2 frontend programs, 1 per power supply.

To control 64 frontend cards, run 1 frontend with many threads: 64 (per device) + 1 (main thread) + 1 (RPC handler) + 1 
(watchdog) + 1 (common event generator/data transmitter) + 1 (odb/web page status update). You *will* bump into each 
and every one of the problems (a) to (f) above.

K.O.
       Reply  16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indicies 
> My first question would be why are you using several font-ends at all? That makes things more 
> complicated than needed. In the normal FE framework, you can define either several equipment 
> served by one frontend, or even one equipment linked to several devices.

I am the culprit here, as I wrote the original code for T2K/ND280 that Nick is looking at.

At the time, we needed to control multiple units of identical equipment. Most of these equipments
needed to be controlled independently from each other, so we could not/did not want to use
one single frontend executable to control all of them at the same time. For example, for equipment
not in use, we can stop the corresponding frontend. In case of trouble, we can restart
the corresponding frontend without disrupting the frontends for the other equipments.

The successful operation of the T2K/ND280 experiment is sufficient defence for the validity of this approach.

One lesson learned was that the MIDAS frontend framework did not make it easy to have multiple identical frontends 
for controlling multiple identical equipments (typical use is control of 2-3 Wiener power supplies, 1-2-3 UPS 
devices, etc.). At the time (and today), only the "-i NNN" flag is available to tell the frontend "who am I?". To make it 
work, one has to use the awkward "%02d" stuff in the equipment name, and there are other complications. For my 
"next generation" of frontends, I tried to specialize the frontend executables at compile time using C/C++ 
preprocessor defines (-Dwiener01, -Dwiener02, etc.); this worked better, but I was still not super happy.

My current solution as implemented by the tmfe frontend framework is to give the user full control
over the command line arguments (mfe.c did not permit any "user arguments" and did not allow
access to argc/argv) and full control over the equipment names (mfe.c equipment names are fixed at compile time).

K.O.
    Reply  29 Aug 2019, Ben Smith, Forum, History plot problems for frontend with multiple indicies 
Hi Nick,

I confirm that this issue appears when using the MIDAS history driver. The issue does not appear when using the MYSQL history driver.

One solution is to give each frontend instance a different Event ID (see example code below for doing this in frontend_init). The history system did still seem to be confused by the existing FeDummy equipments/events even after making this change. However, after changing EQ_NAME from FeDummy to FeDum (i.e. starting from a clean state history-wise) things behaved normally.

I will note that some experiments definitely have a need for the "-i" method, especially those that run on distributed clusters.

Ben


```
INT frontend_init()
{
   sprintf(eq_name, "%s%02d", EQ_NAME, get_frontend_index());

   // Ensure each FE gets a different Event ID in the history system (951, 952 etc)
   char keyname[128];
   HNDLE hkey;
   int status;
   sprintf(keyname, "/Equipment/%s/Common/Event ID", eq_name);
   status = db_find_key(hDB, 0, keyname, &hkey);
   if (status != DB_SUCCESS) abort();

   WORD new_ev_id = 950 + get_frontend_index();
   status = db_set_data_index(hDB, hkey, &new_ev_id, 2, 0, TID_WORD);
   if (status != DB_SUCCESS) abort();
   return SUCCESS;
}
```
       Reply  01 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indicies 
Hi Ben,

thanks for your reply. I can confirm that your suggested workaround does indeed
make the problem dissapear.

I guess this issue hasn't been seen at T2K since we use MYSQL for the history.

Thanks,

Nick.
          Reply  16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indicies 
> thanks for your reply. I can confirm that your suggested workaround does indeed
> make the problem dissapear.
> I guess this issue hasn't been seen at T2K since we use MYSQL for the history.

I think you found the source of the problem, confused event id assignments. To confirm,
can you email me (or post here) the output of odbedit "ls -l /History/Events".

If that's the problem, you can avoid it completely by switching to a history storage method
that does not rely on magic mapping between equipment names and numeric event id's:
try the "FILE" method (set odb /Logger/History/FILE/Active to "y", restart the logger) or
the "MYSQL" method (you will need to setup a mysql database). You tell mhttpd and mhist which 
history data to read by setting ODB /History/LoggerHistoryChannel to one of the channel names 
from /logger/history/, restart mhttpd. (mhttpd and mhist used to print a message "reading history 
data from channel XXX", but somebody removed this message).

K.O.
             Reply  16 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indicies 
Hi Konstantin,

thanks for your reply.

> > thanks for your reply. I can confirm that your suggested workaround does indeed
> > make the problem dissapear.
> > I guess this issue hasn't been seen at T2K since we use MYSQL for the history.
> 
> I think you found the source of the problem, confused event id assignments. To confirm,
> can you email me (or post here) the output of odbedit "ls -l /History/Events".

Sorry, do you want this for after I've applied the fix suggested by Ben, or with the original code 
that I posted?

With the original code it only shows one fe even though both are running:

[local:e666:S]History>ls -l /History/Events
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
1                               STRING  1     10    2m   0   RWD  FeDummy02
0                               STRING  1     16    2m   0   RWD  Run transitions

[local:e666:S]History> scl
Name                Host
mhttpd              localhost           
fedummy01           localhost           
fedummy02           localhost           
ODBEdit             localhost           
Logger              localhost           
[local:e666:S]History>ls "/History/Display/Default/Dummy/
Timescale                       1h
Zero ylow                       n
Show run markers                y
Show values                     y
Sort Vars                       n
Log axis                        n
Minimum                         0
Maximum                         0
Variables
                                FeDummy01:Data
                                FeDummy02:Data
Label
                                
                                
Colour
                                #00AAFF
                                #FF9000
Factor
                                0
                                0
Offset
                                0
                                0
Buttons
                                10m
                                1h
                                3h
                                12h
                                24h
                                3d
                                7d
Formula
                                
                                
Show old vars                   n

> If that's the problem, you can avoid it completely by switching to a history storage method
> that does not rely on magic mapping between equipment names and numeric event id's:
> try the "FILE" method (set odb /Logger/History/FILE/Active to "y", restart the logger) or
> the "MYSQL" method (you will need to setup a mysql database). You tell mhttpd and mhist which 
> history data to read by setting ODB /History/LoggerHistoryChannel to one of the channel names 
> from /logger/history/, restart mhttpd. (mhttpd and mhist used to print a message "reading history 
> data from channel XXX", but somebody removed this message).

Using the original code I posted and switching from MIDAS history to FILE history did not seem to 
change the random behaviour in the history plots.

Regards,

Nick.
                Reply  17 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indicies 
> [local:e666:S]History>ls -l /History/Events
> Key name                        Type    #Val  Size  Last Opn Mode Value
> ---------------------------------------------------------------------------
> 1                               STRING  1     10    2m   0   RWD  FeDummy02
> 0                               STRING  1     16    2m   0   RWD  Run transitions

Something is very broken. There should be more entries here, at least
there should be entries for "FeDummy01" and usually there is also an entry
for "FeDummy" because one invariably runs fedummy without "-i" at least once.

The fact that changing from "midas" storage to "file" storage makes no difference
also indicates that something is very broken.

I want to debug this.

Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
from the "file" storage.

K.O.
                   Reply  18 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indicies 
Hi Konstantin,

> > [local:e666:S]History>ls -l /History/Events
> > Key name                        Type    #Val  Size  Last Opn Mode Value
> > ---------------------------------------------------------------------------
> > 1                               STRING  1     10    2m   0   RWD  FeDummy02
> > 0                               STRING  1     16    2m   0   RWD  Run transitions
> 
> Something is very broken. There should be more entries here, at least
> there should be entries for "FeDummy01" and usually there is also an entry
> for "FeDummy" because one invariably runs fedummy without "-i" at least once.

This is a fresh experiment that I started just to test this issue, which is why there are not many 
entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
 
> The fact that changing from "midas" storage to "file" storage makes no difference
> also indicates that something is very broken.
> 
> I want to debug this.
> 
> Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
> from the "file" storage.

When I started this experiment yesterday(?) I disabled the Midas history when I enabled the file 
history. Just now I re-enabled the Midas history, so they are currently both active.

% ls -l work/online/{*.hst,mhf*.dat}
-rw-r--r-- 1 hastings hastings  14996 Sep 17 10:21 work/online/190917.hst
-rw-r--r-- 1 hastings hastings   3292 Sep 18 16:29 work/online/190918.hst
-rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
-rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
-rw-r--r-- 1 hastings hastings    166 Sep 17 10:17 work/online/mhf_1568683062_20190917_run_transitions.dat

And again, just as a sanity check:

% odbedit -c 'ls -l /History/Events'
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
1                               STRING  1     10    1m   0   RWD  FeDummy02
0                               STRING  1     16    1m   0   RWD  Run transitions

Regards,

Nick.
                      Reply  27 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indicies 
We should fix this for midas-2019-10.

https://bitbucket.org/tmidas/midas/issues/193/confusion-in-history-event-ids

K.O.





> Hi Konstantin,
> 
> > > [local:e666:S]History>ls -l /History/Events
> > > Key name                        Type    #Val  Size  Last Opn Mode Value
> > > ---------------------------------------------------------------------------
> > > 1                               STRING  1     10    2m   0   RWD  FeDummy02
> > > 0                               STRING  1     16    2m   0   RWD  Run transitions
> > 
> > Something is very broken. There should be more entries here, at least
> > there should be entries for "FeDummy01" and usually there is also an entry
> > for "FeDummy" because one invariably runs fedummy without "-i" at least once.
> 
> This is a fresh experiment that I started just to test this this issue, that is why there are not many 
> entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
>  
> > The fact that changing from "midas" storage to "file" storage makes no difference
> > also indicates that something is very broken.
> > 
> > I want to debug this.
> > 
> > Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> > with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" 
> files
> > from the "file" storage.
> 
> When I started this experiment yesterday(?) I disabled the Midas history when I enbled the file 
> history. Jsut now I reenabled the Midas history, so they are currently both active.
> 
> % ls -l work/online/{*.hst,mhf*.dat}
> -rw-r--r-- 1 hastings hastings  14996 Sep 17 10:21 work/online/190917.hst
> -rw-r--r-- 1 hastings hastings   3292 Sep 18 16:29 work/online/190918.hst
> -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
> -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
> -rw-r--r-- 1 hastings hastings    166 Sep 17 10:17 
> work/online/mhf_1568683062_20190917_run_transitions.dat
> 
> And again, just as a sanity check:
> 
> % odbedit -c 'ls -l /History/Events'
> Key name                        Type    #Val  Size  Last Opn Mode Value
> ---------------------------------------------------------------------------
> 1                               STRING  1     10    1m   0   RWD  FeDummy02
> 0                               STRING  1     16    1m   0   RWD  Run transitions
> 
> Regards,
> 
> Nick.
                         Reply  24 Aug 2020, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indicies 
This turned out to be a tricky problem. I am adding a warning about it in mlogger. This should go 
into midas-2020-07. Closing bug #193. K.O.


> We should fix this for midas-2019-10.
> 
> https://bitbucket.org/tmidas/midas/issues/193/confusion-in-history-event-ids
> 
> K.O.
> 
> 
> 
> 
> 
> > Hi Konstantin,
> > 
> > > > [local:e666:S]History>ls -l /History/Events
> > > > Key name                        Type    #Val  Size  Last Opn Mode Value
> > > > ---------------------------------------------------------------------------
> > > > 1                               STRING  1     10    2m   0   RWD  FeDummy02
> > > > 0                               STRING  1     16    2m   0   RWD  Run transitions
> > > 
> > > Something is very broken. There should be more entries here, at least
> > > there should be entries for "FeDummy01" and usually there is also an entry
> > > for "FeDummy" because one invariably runs fedummy without "-i" at least once.
> > 
> > This is a fresh experiment that I started just to test this this issue, that is why there are not many 
> > entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
> >  
> > > The fact that changing from "midas" storage to "file" storage makes no difference
> > > also indicates that something is very broken.
> > > 
> > > I want to debug this.
> > > 
> > > Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> > > with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" 
> > files
> > > from the "file" storage.
> > 
> > When I started this experiment yesterday(?) I disabled the Midas history when I enbled the file 
> > history. Jsut now I reenabled the Midas history, so they are currently both active.
> > 
> > % ls -l work/online/{*.hst,mhf*.dat}
> > -rw-r--r-- 1 hastings hastings  14996 Sep 17 10:21 work/online/190917.hst
> > -rw-r--r-- 1 hastings hastings   3292 Sep 18 16:29 work/online/190918.hst
> > -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
> > -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
> > -rw-r--r-- 1 hastings hastings    166 Sep 17 10:17 
> > work/online/mhf_1568683062_20190917_run_transitions.dat
> > 
> > And again, just as a sanity check:
> > 
> > % odbedit -c 'ls -l /History/Events'
> > Key name                        Type    #Val  Size  Last Opn Mode Value
> > ---------------------------------------------------------------------------
> > 1                               STRING  1     10    1m   0   RWD  FeDummy02
> > 0                               STRING  1     16    1m   0   RWD  Run transitions
> > 
> > Regards,
> > 
> > Nick.
Entry  12 Aug 2020, Yan Liu, Suggestion, adding db_get_mode ti check access mode for keys 
Hello,

I am wondering if there is a function that checks the access mode for a key? I 
found the db_set_mode() function that allows me to set the access mode for a key, 
but failed to find its counterpart get function.

Thanks in advance,
Yan
    Reply  13 Aug 2020, Stefan Ritt, Suggestion, adding db_get_mode ti check access mode for keys 
> Hello,
> 
> I am wondering if there is a function that checks the access mode for a key? I 
> found the db_set_mode() function that allows me to set the access mode for a key, 
> but failed to find its counterpart get function.
> 
> Thanks in advance,
> Yan


  KEY k;
  db_get_key(hDB, handle, &k);
  std::cout << k.access_mode << std::endl;
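
The access mode comes back as part of the KEY structure, so to test for a specific permission you 
can mask it with the MODE_xxx bits from midas.h, e.g. (a small sketch):

  if (k.access_mode & MODE_WRITE)
     std::cout << "key is writable" << std::endl;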

/Stefan
       Reply  13 Aug 2020, Yan Liu, Suggestion, adding db_get_mode ti check access mode for keys 
Thank you!

Yan

> > Hello,
> > 
> > I am wondering if there is a function that checks the access mode for a key? I 
> > found the db_set_mode() function that allows me to set the access mode for a key, 
> > but failed to find its counterpart get function.
> > 
> > Thanks in advance,
> > Yan
> 
> 
>   KEY k;
>   db_get_key(hDB, handle, &k);
>   std::cout << k.access_mode << std::endl;
> 
> /Stefan
Entry  07 Aug 2020, Konstantin Olchanski, Info, update of MYSQL history documentation 
I updated the documentation for setting up a MYSQL (MariaDB) database for 
recording MIDAS history: https://midas.triumf.ca/MidasWiki/index.php/History_System#Write_MYSQL-history_events

One thing to note: the "writer" user must have the "INDEX" permission, otherwise 
many things will not work correctly.

Included are the instructions for importing existing *.hst history files into the 
SQL database: mh2sql --mysql mysql_writer.txt *.hst

Let me know if there is interest in adding support for writing into Postgres SQL 
database. We used to support both MySQL and Postgres through the ODBC library, 
but in the new code, each database has to be supported through its native API. 
There is code for SQLITE and MYSQL, but no code for Postgres, although it is not too 
hard to add.

K.O.
Entry  15 Jul 2020, Stefan Ritt, Info, Minimal CMakeLists.txt for your midas front-end 
Since a few people asked me, here is a "minimal" CMakeLists.txt file for a user-written front-end 
program "myfe":

---------------------------

cmake_minimum_required(VERSION 3.0)
project(myfe)

# Check for MIDASSYS environment variable
if (NOT DEFINED ENV{MIDASSYS})
   message(SEND_ERROR "MIDASSYS environment variable not defined.")
endif()

set(CMAKE_CXX_STANDARD 11)
set(MIDASSYS $ENV{MIDASSYS})

if (${CMAKE_SYSTEM_NAME} MATCHES Linux)
   set(LIBS -lpthread -lutil -lrt)
endif()

add_executable(myfe myfe.cxx)

target_include_directories(myfe PRIVATE ${MIDASSYS}/include)
target_link_libraries(myfe ${MIDASSYS}/lib/libmfe.a ${MIDASSYS}/lib/libmidas.a ${LIBS})
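
To build against it (a sketch, assuming myfe.cxx sits next to this CMakeLists.txt and the MIDASSYS 
environment variable points to a compiled MIDAS tree):

$ mkdir build
$ cd build
$ cmake ..
$ make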
Entry  28 Jun 2020, Konstantin Olchanski, Info, Makefile update 
I reworked the MIDAS Makefile to simplify things and to remove redundancy with functions 
provided by cmake.

When you say "make", the list of options is printed.

The first and main options are "make cmake" and "make cclean" to run the cmake build.

This is my recommended way to build midas - the output of "make cmake" was tuned to provide 
the information needed to debug build problems (all compiler commands, command line switches 
and file paths are reported). (Normal "cmake VERBOSE=1" is tuned for debugging of cmake and 
for maximum obfuscation of problems building the actual project.)

Build options are implemented through cmake variables:

options that can be added to "make cmake":
      NO_LOCAL_ROUTINES=1 NO_CURL=1
      NO_ROOT=1 NO_ODBC=1 NO_SQLITE=1 NO_MYSQL=1 NO_SSL=1 NO_MBEDTLS=1
      NO_EXPORT_COMPILE_COMMANDS=1

for example "make cmake NO_ROOT=1" to disable auto-detection of ROOT.

Two more make targets create reduced builds of midas:

"make mini" builds a subset of midas suitable for building frontend programs. Big programs 
like mlogger and mhttpd are excluded, optional components like CURL or SQLITE are not needed.

"make remoteonly" builds a subset of midas suitable for building remotely connected 
frontends. Big parts of midas are excluded, many system-dependent functions are excluded, 
etc. This is intended for embedded applications, such as fpga, uclinux, etc.

But wait, there is more. Here is the full list:

daqubuntu:midas$ make
Usage:

   make cmake     --- full build of midas
   make cclean    --- remove everything build by make cmake

   options that can be added to "make cmake":
      NO_LOCAL_ROUTINES=1 NO_CURL=1
      NO_ROOT=1 NO_ODBC=1 NO_SQLITE=1 NO_MYSQL=1 NO_SSL=1 NO_MBEDTLS=1
      NO_EXPORT_COMPILE_COMMANDS=1

   make dox       --- run doxygen, results are in ./html/index.html
   make cleandox  --- remove doxygen output

   make htmllint  --- run html check on resources/*.html

   make test      --- run midas self test

   make mbedtls   --- enable mhttpd support for https via the mbedtls https library
   make update_mbedtls --- update mbedtls to latest version
   make clean_mbedtls  --- remove mbedtls from this midas build

   make mtcpproxy --- build the https proxy to forward root-only port 443 to mhttpd https 
port 8443

   make mini      --- minimal build, results are in linux/{bin,lib}
   make cleanmini --- remove everything build by make mini

   make remoteonly      --- minimal build, remote connetion only, results are in linux-
remoteonly/{bin,lib}
   make cleanremoteonly --- remove everything build by make remoteonly

   make linux32   --- minimal x86 -m32 build, results are in linux-m32/{bin,lib}
   make clean32   --- remove everything built by make linux32

   make linux64   --- minimal x86 -m64 build, results are in linux-m64/{bin,lib}
   make clean64   --- remove everything built by make linux64

   make linuxarm  --- minimal ARM cross-build, results are in linux-arm/{bin,lib}
   make cleanarm  --- remove everything built by make linuxarm

   make clean     --- run all 'clean' commands

daqubuntu:midas$ 

K.O.
    Reply  15 Jul 2020, Stefan Ritt, Info, Makefile update 
Please note that you can also compile midas in the standard cmake way with

$ mkdir build
$ cd build
$ cmake ..
$ make install

in the root midas directory. You might have to use "cmake3" on some systems.

Stefan
Entry  28 Jun 2020, Konstantin Olchanski, Info, mhttpd https support openssl -> mbedtls 
For password protection of midas web pages, https is required; good old http 
with passwords transmitted in-the-clear is no longer considered secure. The latest 
recommendation is to run mhttpd behind an industry-standard https proxy, for 
example apache httpd. These proxies provide built-in password protection and 
have integration with certbot to provide automatic renewal of https 
certificates.

That said, for a long time now mhttpd provides native https support through the 
mongoose web server library and the openssl cryptography library.

Unfortunately, for years now, we have been running into trouble with the midas 
build process bombing out due to inconsistent versions and locations of system-
provided and user-installed openssl libraries. Despite our best efforts (and 
through the switch to cmake!) these problems keep coming back and coming back.

Luckily, latest versions of mongoose support the mbedtls cryptography library. I 
have tested it and it works well enough for me to switch the MIDAS default build 
from "openssl if found" to "mbedtls if-asked-for-by-user".

Starting with commit e7b02f9, cmake builds do not look for and do not try to use 
openssl. mhttpd is built without support for https. This is consistent with the 
recommendation to run it behind an apache httpd password protected https proxy.

To enable https support using mbedtls, run "make mbedtls". This will "git clone" 
the mbedtls library and add it to the midas build. mhttpd will be built with 
https support enabled.

To disable mbedtls support, use "make cmake NO_MBEDTLS=1" or run "make 
clean_mbedtls" (this will remove the mbedtls sources from the midas build).

To restore previous use of openssl, set the cmake variable "USE_OPENSSL".

In my test, mhttpd with https through mbedtls and a letsencrypt certificate gains 
a score of "A" from SSLlabs (very good).

(You have to use progs/mtcproxy to run this test - SSLlabs only probes port 443 
and mtcproxy will forward it to mhttpd port 8443. To build it, run "make 
mtcpproxy".)

References:
https://github.com/cesanta/mongoose
https://github.com/ARMmbed/mbedtls

K.O.
    Reply  28 Jun 2020, Konstantin Olchanski, Info, mhttpd https support openssl -> mbedtls 
To add. Using https with either openssl or mbedtls requires obtaining an https certificate. This can be self-
signed, or signed by a higher authority, or issued by the "let's encrypt" project.

mhttpd is looking for this certificate in the file ssl_cert.pem.

If this file does not exist, mhttpd will print the instructions for creating it using openssl (self-signed) or 
using certbot (instantaneously and automatically issued let's encrypt certificate).

The certbot route is recommended:

1) (as root) setup certbot (i.e. see my CentOS and Ubuntu instructions on DAQWiki)
2) (as root) copy /etc/letsencrypt/live/$HOME/fullchain.pem and privkey.pem to $MIDASSYS
3) cat fullchain.pem privkey.pem > ssl_cert.pem
4) start mhttpd, watch the first few lines it prints to confirm it found the right certificate file.

The only missing piece for using this in production is lack of integration
with certbot automatic certificate renewal:

- a script has to run for steps (2) and (3) above
- mhttpd has to tell openssl/mbedtls to reload the certificate file (alternative is to automatically restart 
mhttpd, bad!).
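
For example, the renewal script could simply repeat steps (2) and (3) above (a sketch only; replace 
<your.host> with the name of your letsencrypt certificate directory):

cp /etc/letsencrypt/live/<your.host>/fullchain.pem /etc/letsencrypt/live/<your.host>/privkey.pem $MIDASSYS
cd $MIDASSYS && cat fullchain.pem privkey.pem > ssl_cert.pem

after which mhttpd still has to be told to pick up the new ssl_cert.pem (the second point above).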

As an alternative, we can wait for the mongoose web server library and for the mbedtls crypto library to "grow" 
certbot-style automatic certificate renewal features. (unavoidable, in my view).

K.O.
Entry  24 Jun 2020, Stefan Ritt, Info, New image history system available Screenshot_2020-06-24_at_17.21.11_.png
I'm happy to report that the Corona Lockdown in Europe also had some positive side 
effects: Finally I found time to implement an image history system in midas, 
something I wanted to do since many years, but never found time for that.

The idea is that you can incorporate any network-connected WebCam into the midas 
history system. You specify an update interval (like one minute) and the logger 
regularly fetches images from that webcam. The images are stored as raw files in 
the midas history directory, and can be retrieved via the web browser similarly to 
the "normal" history. Attached is an image from the MEG Experiment at PSI to give 
you some idea.

The cool thing now is that you can go "backwards" in time and browse all stored 
images. The buttons at each image allow you to step backward, forward, and play a 
movie of images, forward or backward. You can query for a certain date/time and 
download a specific image to your local disk. You can even synchronize all time 
axes, drag left and right on each image to see your experiment from different 
cameras at the same time stamps. You see a blue ribbon below each image which shows 
time stamps for which an image is available. 

Initially, only the most recent image is loaded to speed up loading time. As soon 
as you click on the image or one of the arrow buttons, previous images are loaded 
progressively, which you can see in the ribbon bar becoming blue. For slow internet 
connections this can take some time. For typical webcams and one minute update 
period you get typically a few GB per week.

To make this happen, you define a new ODB subtree 

/History/Images/<name>/
  Name:          Name of Camera
  Enabled:       Boolean to enable readout of camera
  URL            URL to fetch an image from the camera
  Period         Time period in seconds to fetch a new image
  Storage hours  Number of hours to store the images (0 for infinite)
  Extension      Image file extension, usually ".jpg" or ".png"
  Timescale      Initial horizontal time scale (like 8h)

The tricky part is to obtain the URL from your camera. For some cameras you can get 
that from the manual, for others you have to "hack" it: display an image in your browser 
using the camera's internal web interface, inspect the source code of that web page, 
and you get the URL. For the AXIS cameras I use, the URL is typically

http://<name>/axis-cgi/jpg/image.cgi

For the Netatmo cameras I have at home (which I used during development in my home 
office), the procedure is more complicated, but you can google it. The logger is 
now linked against the CURL library to fetch images, so it also support https://. 
If libcurl is not installed on your system, the image history functionality will be 
disabled.

I tested the system for a few days now and it seems stable, which however does not 
mean that it is bug-free. So please report back any issue. The change is committed 
to the current develop branch.

I hope this extension helps all those people who are forced to do more remote 
monitoring of experiments during these times.

Best,
Stefan
Entry  28 May 2020, Marius Koeppel, Suggestion, ODB++ API - documantion updates and odb view after key creation test_odb_api.cu
Hello everybody,

I really appreciate the development of the new odb++ API, so I directly started to rewrite the code for the Mu3e DAQ system.

I have a few questions / suggestions which came up during my work so far:

1. The documentation seems to be quite new, so there are some wrongly named variables and small typos. I would like to fix them. Should I request an account, or what else is needed to change them?

2. When I create an ODB structure with the new API I do for example:

    midas::odb stream_settings = {
            {"Test_odb_api", {
                                      {"Divider", 1000},     // int
                                      {"Enable", false},     // bool
                              }},
    };
    stream_settings.connect("/Equipment/Test/Settings", true);

and with 

midas::odb datagen("/Equipment/Test/Settings/Test_odb_api");
std::cout << "Datagenerator Enable is " << datagen["Enable"] << std::endl;

I am getting back false, which looks nice, but when I look into the ODB via the browser the value is actually "y", meaning true, which is strange. I attached my frontend, where I removed all functions except frontend_init(), in which I create this key. It is a CUDA program, but since I cleaned everything, no CUDA function is called anymore.

Thank you again for the nice development!

Cheers,
Marius 
    Reply  28 May 2020, Stefan Ritt, Suggestion, ODB++ API - documantion updates and odb view after key creation 
> 2. When I create an ODB structure with the new API I do for example:
> 
>     midas::odb stream_settings = {
>             {"Test_odb_api", {
>                                       {"Divider", 1000},     // int
>                                       {"Enable", false},     // bool
>                               }},
>     };
>     stream_settings.connect("/Equipment/Test/Settings", true);
> 
> and with 
> 
> midas::odb datagen("/Equipment/Test/Settings/Test_odb_api");
> std::cout << "Datagenerator Enable is " << datagen["Enable"] << std::endl;
> 
> I am getting back false. Which looks nice but when I look into the odb via the browser the value is actually "y" meaning true which is stange. I added my frontend where I cleaned all function leaving only the frontend_init() one where I create this key. Its a cuda program but since I clean everything no cuda function is called anymore.

I cannot confirm this behaviour. Just put the following code in a standalone program:

cm_connect_experiment(NULL, NULL, "test", NULL);
   midas::odb::set_debug(true);

   midas::odb stream_settings = {
           {"Test_odb_api", {
               {"Divider", 1000},     // int
               {"Enable", false},     // bool
           }},
   };
   stream_settings.connect("/Equipment/Test/Settings", true);

   midas::odb datagen("/Equipment/Test/Settings/Test_odb_api");
   std::cout << "Datagenerator Enable is " << datagen["Enable"] << std::endl;

and run it. The result is:

...
Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
Datagenerator Enable is Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
false

Looking in the ODB, I also see 

[local:Online:S]/>cd Equipment/Test/Settings/Test_odb_api/
[local:Online:S]Test_odb_api>ls
Divider                         1000
Enable                          n
[local:Online:S]Test_odb_api>


So I am not sure what is different in your case. Are you looking at the same ODB? Maybe you have one remote, and one local? 
Note that the "true" flag in stream_settings.connect(..., true); forces all default values into the ODB. 
So if the ODB value is "y", it will be changed to "n".

Best,
Stefan
    Reply  30 May 2020, Stefan Ritt, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Marius, has the problem been fixed in the meantime?

Stefan

> I am getting back false. Which looks nice but when I look into the odb via the browser the value is actually "y" meaning true which is stange. 
> I added my frontend where I cleaned all function leaving only the frontend_init() one where I create this key. Its a cuda program but since 
> I clean everything no cuda function is called anymore.
       Reply  04 Jun 2020, Marius Koeppel, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Hi Stefan,

your test program only worked for me after I changed the following lines inside odbxx.cxx:

diff --git a/src/odbxx.cxx b/src/odbxx.cxx
index 24b5a135..48edfd15 100644
--- a/src/odbxx.cxx
+++ b/src/odbxx.cxx
@@ -753,7 +753,12 @@ namespace midas {
          }
       } else {
          u_odb u = m_data[index];
-         status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
+         if (m_tid == TID_BOOL) {
+             BOOL ss = bool(u);
+             status = db_set_data_index(m_hDB, m_hKey, &ss, rpc_tid_size(m_tid), index, m_tid);
+         } else {
+             status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
+         }
          if (m_debug) {
             std::string s;
             u.get(s);

This is likely not the best fix, but otherwise I was always getting the following after running the test program:

[ODBEdit,INFO] Program ODBEdit on host localhost started
[local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
key not found
makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % test_connect
Created ODB key /Equipment/Test/Settings
Created ODB key /Equipment/Test/Settings/Test_odb_api
Created ODB key /Equipment/Test/Settings/Test_odb_api/Divider
Set ODB key "/Equipment/Test/Settings/Test_odb_api/Divider" = 1000
Created ODB key /Equipment/Test/Settings/Test_odb_api/Enable
Set ODB key "/Equipment/Test/Settings/Test_odb_api/Enable" = false
Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api"
Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
Datagenerator Enable is Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
false
makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % odbedit
[ODBEdit,INFO] Program ODBEdit on host localhost started
[local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
[local:Default:S]Test_odb_api>ls
Divider                         1000
Enable                          y 

> > I am getting back false. Which looks nice but when I look into the odb via the browser the value is actually "y" meaning true which is stange. 
> > I added my frontend where I cleaned all function leaving only the frontend_init() one where I create this key. Its a cuda program but since 
> > I clean everything no cuda function is called anymore.
          Reply  05 Jun 2020, Stefan Ritt, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Hi Marius,

your fix is good. Thanks for digging out this deep-lying issue, which would have haunted us if we had not fixed it. 
The problem is that in midas, the "BOOL" type is 4 bytes long, actually modelled after MS Windows. Now I realized
that in C++, the "bool" type is only 1 byte wide. So if we do the memcpy from a "C++ bool" to a "MIDAS BOOL", we
always copy four bytes, meaning that we copy three bytes beyond the one-byte value of the C++ bool. So your fix
is absolutely correct, and I added it in one more place where we deal with bool arrays, where we need the same.
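
To illustrate the size mismatch (an illustration only, assuming a typical 64-bit Linux or macOS 
toolchain):

  static_assert(sizeof(bool) == 1, "C++ bool is one byte here");
  static_assert(sizeof(BOOL) == 4, "MIDAS BOOL is a four-byte integer");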

What I don't understand, however, is why this fails for you. The ODB values are stored in the C union under

union {
  ...
  bool m_bool;
  double m_double;
  std::string *m_string;
  ...
}

Now the C compiler puts all values at the lowest address, so m_bool is at offset zero, and the string pointer reaches
over all eight bytes (we are on 64-bit OS).

Now when I initialize this union in odbxx.h:66, I zero the string pointer which is the widest object:

  u_odb() : m_string{} {};

which (at least on my Mac) sets all eight bytes to zero. If I then use the wrong code to set the bool value to the ODB 
in odbxx.cxx:756, I do 

  db_set_data_index(... &u, rpc_tid_size(m_tid), ...);

so it copies four bytes (=rpc_tid_size(TID_BOOL)) to the ODB. The first byte should be the c++ bool value (0 or 1),
and the other three bytes should be zero from the initialization above. Apparently on your system, this is not
the case, and I would like you to double check it. Maybe there is another underlying problem which I don't understand
at the moment but which we better fix.

Otherwise the change is committed and your code should work. But we should not stop here! I really want to understand
why this is not working for you; maybe I am missing something.

Best,
Stefan

> Hi Stefan,
> 
> your test program was only working for me after I changed the following lines inside the odbxx.cpp
> 
> diff --git a/src/odbxx.cxx b/src/odbxx.cxx
> index 24b5a135..48edfd15 100644
> --- a/src/odbxx.cxx
> +++ b/src/odbxx.cxx
> @@ -753,7 +753,12 @@ namespace midas {
>           }
>        } else {
>           u_odb u = m_data[index];
> -         status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
> +         if (m_tid == TID_BOOL) {
> +             BOOL ss = bool(u);
> +             status = db_set_data_index(m_hDB, m_hKey, &ss, rpc_tid_size(m_tid), index, m_tid);
> +         } else {
> +             status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
> +         }
>           if (m_debug) {
>              std::string s;
>              u.get(s);
> 
> Likely not the best fix but otherwise I was always getting after running the test program:
> 
> [ODBEdit,INFO] Program ODBEdit on host localhost started
> [local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
> key not found
> makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % test_connect
> Created ODB key /Equipment/Test/Settings
> Created ODB key /Equipment/Test/Settings/Test_odb_api
> Created ODB key /Equipment/Test/Settings/Test_odb_api/Divider
> Set ODB key "/Equipment/Test/Settings/Test_odb_api/Divider" = 1000
> Created ODB key /Equipment/Test/Settings/Test_odb_api/Enable
> Set ODB key "/Equipment/Test/Settings/Test_odb_api/Enable" = false
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api"
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> Datagenerator Enable is Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> false
> makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % odbedit
> [ODBEdit,INFO] Program ODBEdit on host localhost started
> [local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
> [local:Default:S]Test_odb_api>ls
> Divider                         1000
> Enable                          y 
> 
> > > I am getting back false. Which looks nice but when I look into the odb via the browser the value is actually "y" meaning true which is stange. 
> > > I added my frontend where I cleaned all function leaving only the frontend_init() one where I create this key. Its a cuda program but since 
> > > I clean everything no cuda function is called anymore.
             Reply  08 Jun 2020, Marius Koeppel, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Hi Stefan,

I agree with your explanation about the size of BOOL and bool. 

I also checked the program on my Raspberry Pi, and there the old code works like on your Mac. I don't really understand 
why the behavior is different on my system. The initialization of the union should also work on my system. 

At the moment I am using:

Arch Linux
Linux office 5.4.42-1-lts #1 SMP Wed, 20 May 2020 20:42:53 +0000 x86_64 GNU/Linux
gcc version 10.1.0 (GCC)

One thing which makes me a bit suspicious is that if I do:

u_odb u = m_data[index];
char dest[rpc_tid_size(m_tid)];
memcpy(dest, &u, rpc_tid_size(m_tid));

CLion tells me "Clang-Tidy: Undefined behavior, source object type 'midas::u_odb' is not TriviallyCopyable". 

I am not sure if this is the problem, since I am not so familiar with TriviallyCopyable. I need to investigate this further.

That is the update from my side so far.

Cheers,
Marius

> Hi Marius,
> 
> your fix is good. Thanks for digging out this deep-lying issue, which would have haunted us if we would not fix it. 
> The problem is that in midas, the "BOOL" type is 4 Bytes long, actually modelled after MS Windows. Now I realized
> that in c++, the "bool" type is only 1 Byte wide. So if we do the memcopy from a "c++ bool" to a "MIDAS BOOL", we
> always copy four bytes, meaning that we copy three Bytes beyond the one-byte value of the c++ bool. So your fix
> is absolutely correct, and I added it in one more space where we deal with bool arrays, where we need the same.
> 
> What I don't understand however is the fact why this fails for you. The ODB values are stored in the C union under
> 
> union {
>   ...
>   bool m_bool;
>   double m_double;
>   std::string *m_string;
>   ...
> }
> 
> Now the C compiler puts all values at the lowest address, so m_bool is at offset zero, and the string pointer reaches
> over all eight bytes (we are on 64-bit OS).
> 
> Now when I initialize this union in odbxx.h:66, I zero the string pointer which is the widest object:
> 
>   u_odb() : m_string{} {};
> 
> which (at least on my Mac) sets all eight bytes to zero. If I then use the wrong code to set the bool value to the ODB 
> in odbxx.cxx:756, I do 
> 
>   db_set_data_index(... &u, rpc_tid_size(m_tid), ...);
> 
> so it copies four bytes (=rpc_tid_size(TID_BOOL)) to the ODB. The first byte should be the c++ bool value (0 or 1),
> and the other three bytes should be zero from the initialization above. Apparently on your system, this is not
> the case, and I would like you to double check it. Maybe there is another underlying problem which I don't understand
> at the moment but which we better fix.
> 
> Otherwise the change is committed and your code should work. But we should not stop here! I really want to understand
> why this is not working for you, maybe I miss something.
> 
> Best,
> Stefan
> 
> > Hi Stefan,
> > 
> > your test program was only working for me after I changed the following lines inside the odbxx.cpp
> > 
> > diff --git a/src/odbxx.cxx b/src/odbxx.cxx
> > index 24b5a135..48edfd15 100644
> > --- a/src/odbxx.cxx
> > +++ b/src/odbxx.cxx
> > @@ -753,7 +753,12 @@ namespace midas {
> >           }
> >        } else {
> >           u_odb u = m_data[index];
> > -         status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
> > +         if (m_tid == TID_BOOL) {
> > +             BOOL ss = bool(u);
> > +             status = db_set_data_index(m_hDB, m_hKey, &ss, rpc_tid_size(m_tid), index, m_tid);
> > +         } else {
> > +             status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
> > +         }
> >           if (m_debug) {
> >              std::string s;
> >              u.get(s);
> > 
> > Likely not the best fix but otherwise I was always getting after running the test program:
> > 
> > [ODBEdit,INFO] Program ODBEdit on host localhost started
> > [local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
> > key not found
> > makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % test_connect
> > Created ODB key /Equipment/Test/Settings
> > Created ODB key /Equipment/Test/Settings/Test_odb_api
> > Created ODB key /Equipment/Test/Settings/Test_odb_api/Divider
> > Set ODB key "/Equipment/Test/Settings/Test_odb_api/Divider" = 1000
> > Created ODB key /Equipment/Test/Settings/Test_odb_api/Enable
> > Set ODB key "/Equipment/Test/Settings/Test_odb_api/Enable" = false
> > Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api"
> > Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
> > Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
> > Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
> > Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> > Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
> > Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
> > Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
> > Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> > Datagenerator Enable is Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> > false
> > makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % odbedit
> > [ODBEdit,INFO] Program ODBEdit on host localhost started
> > [local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
> > [local:Default:S]Test_odb_api>ls
> > Divider                         1000
> > Enable                          y 
> > 
> > > > I am getting back false. Which looks nice but when I look into the odb via the browser the value is actually "y" meaning true which is stange. 
> > > > I added my frontend where I cleaned all function leaving only the frontend_init() one where I create this key. Its a cuda program but since 
> > > > I clean everything no cuda function is called anymore.
                Reply  16 Jun 2020, Marius Koeppel, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Hi Stefan,

I played around with the code a bit more and I found out that if I do:

midas::odb test_settings = {{"Enable", false}};
test_settings.connect("/Equipment/Test/Test", true);

The correct value ends up in the ODB. In this case a u_odb instance is created
with a clean m_string. But if I run the other code, an odb instance is created and
the values of m_data are set in

odbxx.h:
        odb(std::initializer_list<std::pair<const char *, midas::odb>> list) : odb() {...

These values are coming from u_odb instances, since the code does:

odbxx.h:
        auto o = new midas::odb(element.second);

and then

odbxx.h:
        odb(T v):odb() {                                                                                                                                                                                                                                   
           m_num_values = 1;                                                                                                                                                                                                             
           m_data = new u_odb[1]{v};                                                                                                                                                                                                                
           m_tid = m_data[0].get_tid();                                                            
           m_data[0].set_parent(this);                                                         
        }

and looking at 

odbxx.h:
        u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   

only m_bool is set for this instance, meaning that only the first byte gets a value 
(bool still being only 1 byte in C++). If I check m_string inside the u_odb::get function
of this instance, I am getting for a bool (I set false) something like 0x7f6633f67a00, and for an int 
(I set the int to 1000) 0x7f66000003e8. Since the size of BOOL is larger, I am getting the 
wrong value. I also checked this on openSUSE, with the same behavior.

Like you I am not getting this problem on my Mac. What compiler flags do you use on your Mac?

Cheers,
Marius
                   Reply  23 Jun 2020, Stefan Ritt, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Hi Marius,

thanks for your help, you identified the problematic location. I changed that to

   u_odb(bool v) : m_tid{TID_BOOL}, m_parent_odb{nullptr} {m_string = nullptr; m_bool = v;};

which should initialize the full 8 bytes of the u_odb union. I committed to develop. Can you 
please give it a try?

Best,
Stefan


> and looking at 
> 
> odbxx.h:
>         u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   
> 
> only m_bool is set for this instance meaning that only the first byte gets a value 
> (still having only 1 byte for bool in c++). If I check m_string inside the u_odb::get function
>  of this instance I am getting for a bool (I set false) stuff like 0x7f6633f67a00 and for an int 
> (I set the int to 1000) 0x7f66000003e8. Since the size of BOOL is larger I am getting the 
> wrong value. I checked this also on openSUSE having the same behavior.
                      Reply  24 Jun 2020, Marius Koeppel, Suggestion, ODB++ API - documantion updates and odb view after key creation 
Hi Stefan,

now everything works well (Tested on: OpenSuse and Arch Linux) :) 

Thank you for the fix.

Cheers,
Marius

> Hi Marius,
> 
> thanks for your help, you identified the problematic location. I changed that to
> 
>    u_odb(bool v) : m_tid{TID_BOOL}, m_parent_odb{nullptr} {m_string = nullptr; m_bool = v;};
> 
> which should initialize the full 8 bytes of the u_odb union. I committed to develop. Can you 
> please give it a try?
> 
> Best,
> Stefan
> 
> 
> > and looking at 
> > 
> > odbxx.h:
> >         u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   
> > 
> > only m_bool is set for this instance meaning that only the first byte gets a value 
> > (still having only 1 byte for bool in c++). If I check m_string inside the u_odb::get function
> >  of this instance I am getting for a bool (I set false) stuff like 0x7f6633f67a00 and for an int 
> > (I set the int to 1000) 0x7f66000003e8. Since the size of BOOL is larger I am getting the 
> > wrong value. I checked this also on openSUSE having the same behavior.
Entry  19 Jun 2020, Isaac Labrie-Boulay, Info, Building/running a Frontend Task 
To build a frontend task, the user code and system code are compiled and linked 
together with the required libraries, by running a Makefile (e.g. 
../midas/examples/experiment/Makefile in the MIDAS package).

I tried building the CAMAC example frontend and I get this error:

g++: error: /home/rcmp/packages/midas/drivers/camac/ces8210.c: No such file or 
directory
g++: error: /home/rcmp/packages/midas/linux/lib/libmidas.a: No such file or 
directory
make: *** [camac_init.exe] Error 1

Obviously, I'm running the "make all" command from the camac directory. Why 
would I get this "no such file" error? Do I need to download the MIDAS packages 
inside my experiment directory?

Thanks for helping me out.

Isaac
Entry  18 Jun 2020, Ruslan Podviianiuk, Forum, ODB key length 
Hello,

I have a question about the length of ODB key names.
Is it possible to create an ODB key whose name contains more than 32 characters?

Thanks.
Ruslan
    Reply  18 Jun 2020, Stefan Ritt, Forum, ODB key length 
No. But if you need more than 32 characters, you are probably doing something wrong. The 
information you want to put into the ODB key name should instead be stored in 
a separate string key.
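(A minimal sketch of that pattern, using the odbxx C++ API discussed elsewhere on this forum; the path,
key names and text below are made up:)

#include "odbxx.h"

void create_sensor_entry()
{
   midas::odb s = {
      {"Name", "T01"},   // key names stay well below the 32-character limit
      {"Description", "Upstream cooling-water return-line temperature, sensor #01"}  // long text goes into the value
   };
   s.connect("/Equipment/Environment/Settings/Sensor01", true);
}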

Stefan


> Hello,
> 
> I have a question about length of the name of ODB key.
> Is it possible to create an ODB key containing more than 32 characters?
> 
> Thanks.
> Ruslan
Entry  15 Jun 2020, Isaac Labrie Boulay, Bug Report, Killing and ODB - Removed ODB client because process pid does not exists 
Hey everyone,

When I run mhttpd I get the following error message:

[mhttpd,ERROR] [odb.cxx:1720:db_open_database,ERROR] Removed ODB client 
'mhttpd', index 0 because process pid 4531 does not exists
[mhttpd,INFO] Removed open record flag from "/Experiment/Security/RPC 
hosts/Allowed hosts"
[mhttpd,INFO] Removed exclusive access mode from "/Experiment/Security/RPC 
hosts/Allowed hosts"
[mhttpd,INFO] Removed open record flag from "/Experiment/Security/mhttpd 
hosts/Allowed hosts"
[mhttpd,INFO] Removed exclusive access mode from "/Experiment/Security/mhttpd 
hosts/Allowed hosts"
[mhttpd,INFO] Removed open record flag from "/Sequencer/State"
[mhttpd,INFO] Removed exclusive access mode from "/Sequencer/State"
[mhttpd,INFO] Corrected 3 ODB entries
[mhttpd,INFO] Deleted entry '/System/Clients/4531' for client 'mhttpd' because 
it is not connected to ODB
[mhttpd,INFO] Client 'mhttpd' on buffer 'SYSMSG' removed by bm_open_buffer 
because process pid 4531 does not exist
Mongoose web server will not use password protection
mongoose web server is listening on the HTTP port 8080

So mhttpd works, as I have access to it through my browser, but mlogger does not 
work when I try running it (Alarm: Program Logger is not running). I've 
managed to get mlogger working before, and I think the problem might be 
that another instance of the ODB is running without me knowing. 

Has anyone ever had this issue?

Thanks so much for your time.

Isaac
Entry  15 Jun 2020, Martin Mueller, Bug Report, deprecated function stime() 
Hi

I had a problem with the compilation of midas after an OS update to the recent version of OpenSuse tumbleweed. The function stime() in system.cxx:3196 is no longer available. 

In the documentation it is also marked as deprecated with the suggestion to use clock_settime instead:
https://man7.org/linux/man-pages/man2/stime.2.html

Replacing system.cxx:3196 with the clock_settime() method already used in system.cxx:3200-3204, also for OS_UNIX, seems to solve the problem, but I'm not sure if this will cause problems on older OSes.
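For reference, a minimal sketch (not the actual system.cxx patch) of the clock_settime() replacement;
like stime(), it needs root / CAP_SYS_TIME to actually set the clock:

#include <time.h>

int set_system_time(time_t seconds)
{
   struct timespec ts;
   ts.tv_sec  = seconds;  /* seconds since the Unix epoch, as stime() took */
   ts.tv_nsec = 0;
   return clock_settime(CLOCK_REALTIME, &ts);  /* 0 on success, -1 on error */
}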

Martin
    Reply  15 Jun 2020, Stefan Ritt, Bug Report, deprecated function stime() 
The function stime() has been replaced by clock_settime() in Feb. 2020:

https://bitbucket.org/tmidas/midas/commits/c732120e7c68bbcdbbc6236c1fe894c401d9bbbd

Please always pull before submitting bug reports.

Best,
Stefan
Entry  09 Jun 2020, Isaac Labrie Boulay, Info, Preparing the VME hardware - VME address jumpers. VME_address_jumpers_broken_link.PNG
Hey folks,

I'm currently working on setting up a MIDAS experiment and I am following the 
"Setup MIDAS experiment at Triumf" page on 
MidasWiki(https://midas.triumf.ca/MidasWiki/index.php/Setup_MIDAS_experiment_at_
TRIUMF).

The 3rd line of the hardware checklist under the "Prepare VME hardware" section 
has a link to a page that doesn't exist anymore, so I'm trying to figure out how to 
set up the VME address jumpers on the VME modules.

Does anyone know how to set up the VME modules? Or can anyone send me a link to 
instructions?

Thanks a lot for your time.

Isaac
    Reply  10 Jun 2020, Konstantin Olchanski, Info, Preparing the VME hardware - VME address jumpers. 
Hi, if you are not using any VME hardware, then you have no VME address jumpers to 
set. https://en.wikipedia.org/wiki/VMEbus

K.O.
       Reply  12 Jun 2020, Isaac Labrie Boulay, Info, Preparing the VME hardware - VME address jumpers. 
> Hi, if you are not using any VME hardware, then you have no VME address jumpers to 
> set. https://en.wikipedia.org/wiki/VMEbus
> 
> K.O.

Hi thanks for taking the time to help me out. I am using a VME-MWS in this experiment.

Let me know what you think.

Isaac
Entry  10 Jun 2020, Ivo Schulthess, Forum, slow-control equipment crashes when running multi-threaded on a remote machine 
Dear all

To reduce the time needed by Midas between runs, we want to change some of our periodic equipment to multi-threaded slow-control equipment. To do that I wanted to start from 
the slowcont example with the multi/hv class driver, the nulldev device driver and the null bus driver. The example runs fine as it is on the local midas machine and also on remote 
machines. When adding the DF_MULTITHREAD flag to the device driver list, it does not run anymore on remote machines but aborts with the following assertion:

scfe: /home/neutron/packages/midas/src/midas.cxx:1569: INT cm_get_path(char*, int): Assertion `_path_name.length() > 0' failed.
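(For reference, the change that triggers this is only the extra flag on each entry of the device-driver list;
a minimal sketch based on the multi/nulldev example above, with illustrative channel counts and assuming the
usual name/driver/channels/bus/flags fields:)

DEVICE_DRIVER multi_driver[] = {
   {"Input",  nulldev, 2, null, DF_INPUT  | DF_MULTITHREAD},
   {"Output", nulldev, 2, null, DF_OUTPUT | DF_MULTITHREAD},
   {""}
};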

Running the frontend in GDB and setting a breakpoint at the exit leads to the following: 

(gdb) where
#0  0x00007ffff68d599f in raise () from /lib64/libc.so.6
#1  0x00007ffff68bfcf5 in abort () from /lib64/libc.so.6
#2  0x00007ffff68bfbc9 in __assert_fail_base.cold.0 () from /lib64/libc.so.6
#3  0x00007ffff68cde56 in __assert_fail () from /lib64/libc.so.6
#4  0x000000000041efbf in cm_get_path (path=0x7fffffffd060 "P\373g", path_size=256)
    at /home/neutron/packages/midas/src/midas.cxx:1563
#5  cm_get_path (path=path@entry=0x7fffffffd060 "P\373g", path_size=path_size@entry=256)
    at /home/neutron/packages/midas/src/midas.cxx:1563
#6  0x0000000000453dd8 in ss_semaphore_create (name=name@entry=0x7fffffffd2c0 "DD_Input", 
    semaphore_handle=semaphore_handle@entry=0x67f700 <multi_driver+96>)
    at /home/neutron/packages/midas/src/system.cxx:2340
#7  0x0000000000451d25 in device_driver (device_drv=0x67f6a0 <multi_driver>, cmd=<optimized out>)
    at /home/neutron/packages/midas/src/device_driver.cxx:155
#8  0x00000000004175f8 in multi_init(eqpmnt*) ()
#9  0x00000000004185c8 in cd_multi(int, eqpmnt*) ()
#10 0x000000000041c20c in initialize_equipment () at /home/neutron/packages/midas/src/mfe.cxx:827
#11 0x000000000040da60 in main (argc=1, argv=0x7fffffffda48)
    at /home/neutron/packages/midas/src/mfe.cxx:2757

I also tried to use the generic class driver, which gives the same result. I am not sure if this is a problem of the multi-threaded frontend running on a remote machine or 
something in our system that is not properly set up. Anyway, I am running out of ideas on how to solve this and would appreciate any input. 

Thanks in advance,
Ivo
    Reply  10 Jun 2020, Konstantin Olchanski, Forum, slow-control equipment crashes when running multi-threaded on a remote machine 
Yes, it is supposed to crash. On a remote frontend, cm_get_path() cannot be used
(we are on a different computer; the filesystems may not be the same!), so it is actually not set and
triggers a trap if something tries to use it (this is the crash you see).

The caller to cm_get_path() is a MIDAS semaphore function.

And I think there is a mistake here. It is unusual for the driver framework to use a semaphore. For multithreaded
protection inside the frontend, a mutex would normally be used (and mutexes do not use cm_get_path(), so
all is good). But if a semaphore is used, then all frontends running on the same computer become
serialized across the locked section. This is the right thing to do if you have multiple frontends
sharing the same hardware, e.g. a /dev/ttyUSB serial line, but why would a generic framework function
do this? I am not sure; I will have to take a look at why there is a semaphore and what it is locking/protecting.

(In midas, semaphores are normally used to protect global memory, such as ODB, or global resources, such as alarms,
against concurrent access by multiple programs, but of course that does not work for remote frontends,
since they are on a different computer! Semaphores only work locally; they do not work across the network!)

K.O.

> 
> scfe: /home/neutron/packages/midas/src/midas.cxx:1569: INT cm_get_path(char*, int): Assertion `_path_name.length() > 0' failed.
> 
> Running the frontend with GDB and set a breakpoint at the exit leads to the following: 
> 
> (gdb) where
> #0  0x00007ffff68d599f in raise () from /lib64/libc.so.6
> #1  0x00007ffff68bfcf5 in abort () from /lib64/libc.so.6
> #2  0x00007ffff68bfbc9 in __assert_fail_base.cold.0 () from /lib64/libc.so.6
> #3  0x00007ffff68cde56 in __assert_fail () from /lib64/libc.so.6
> #4  0x000000000041efbf in cm_get_path (path=0x7fffffffd060 "P\373g", path_size=256)
>     at /home/neutron/packages/midas/src/midas.cxx:1563
> #5  cm_get_path (path=path@entry=0x7fffffffd060 "P\373g", path_size=path_size@entry=256)
>     at /home/neutron/packages/midas/src/midas.cxx:1563
> #6  0x0000000000453dd8 in ss_semaphore_create (name=name@entry=0x7fffffffd2c0 "DD_Input", 
>     semaphore_handle=semaphore_handle@entry=0x67f700 <multi_driver+96>)
>     at /home/neutron/packages/midas/src/system.cxx:2340
> #7  0x0000000000451d25 in device_driver (device_drv=0x67f6a0 <multi_driver>, cmd=<optimized out>)
>     at /home/neutron/packages/midas/src/device_driver.cxx:155
> #8  0x00000000004175f8 in multi_init(eqpmnt*) ()
> #9  0x00000000004185c8 in cd_multi(int, eqpmnt*) ()
> #10 0x000000000041c20c in initialize_equipment () at /home/neutron/packages/midas/src/mfe.cxx:827
> #11 0x000000000040da60 in main (argc=1, argv=0x7fffffffda48)
>     at /home/neutron/packages/midas/src/mfe.cxx:2757
> 
> I also tried to use the generic class driver which results in the same. I am not sure if this is a problem of the multi-threaded frontend running on a remote machine or is it 
> something of our system which is not properly set up. Anyway I am running out of ideas how to solve this and would appreciate any input. 
> 
> Thanks in advance,
> Ivo
       Reply  10 Jun 2020, Stefan Ritt, Forum, slow-control equipment crashes when running multi-threaded on a remote machine 
Few comments:

- As KO writes, we might need semaphores also on a remote front-end, in case several programs share the same hardware. So it should work, and cm_get_path() should not just exit.

- When I wrote the multi-threaded device drivers, I did use semaphores instead of mutexes, but I forgot why. Might be that midas semaphores have a timeout and mutexes do not, or 
something along those lines.

- I do need either semaphores or mutexes, since in a multi-threaded slow-control front-end (too many dashes...) several threads have to access an internal data exchange buffer, which 
needs protection in multi-threaded environments.

So we can now either fix cm_get_path() or replace all semaphores with mutexes in midas/src/device_driver.cxx. I have kind of a feeling that we should do both. And what about 
switching to C++ std::mutex instead of pthread mutexes?
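(For illustration, a minimal sketch of what that could look like for the internal data-exchange buffer;
my own example, not the device_driver.cxx code:)

#include <mutex>
#include <vector>

struct ExchangeBuffer {
   std::mutex         mtx;
   std::vector<float> values;

   void write(int channel, float v) {
      std::lock_guard<std::mutex> lock(mtx);  // released automatically at scope exit
      if (channel >= (int)values.size())
         values.resize(channel + 1);
      values[channel] = v;
   }

   float read(int channel) {
      std::lock_guard<std::mutex> lock(mtx);
      return channel < (int)values.size() ? values[channel] : 0.f;
   }
};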

Stefan
          Reply  12 Jun 2020, Ivo Schulthess, Forum, slow-control equipment crashes when running multi-threaded on a remote machine 
Thank you both once again for the very fast answers. I tested the example on the local machine and it works perfectly fine. In the meantime I also created two new drivers for our devices 
and everything works with them; the improvement in time is significant, and I will create drivers for all our devices where possible. Once they are in a working state, I can also provide 
them to be added to the Midas drivers. Of course, if it were possible to run the front-end also on our remote machines, this would be even better. I am not experienced in multi-threaded 
programming, but if I can provide any help or input, please let me know. 

Have a great weekend,
Ivo
Entry  04 Jun 2020, Lars Martin, Bug Report, midasodb.cxx RBA appends instead of replacing 
I am on branch develop and use the tmfe frontends. I found that a bool vector 
gets bigger every time I read it from the ODB.

Turns out in midasodb.cxx (as of commit 813f696, lines 478ff) the output vector 
"value" gets appended to without resizing.

Since after line 474 xvalue.size()==value.size(), it would make sense to simply 
replace value->push_back(...) with (*value)[i] = ... .
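A minimal sketch of the suggested fix (the function signature here is an assumption, not the actual
midasodb.cxx code):

#include <vector>

// read a bool array into the caller's vector without growing it on every call
void copy_bool_array(const std::vector<bool>& xvalue, std::vector<bool>* value)
{
   value->resize(xvalue.size());     // replace the old contents...
   for (size_t i = 0; i < xvalue.size(); i++)
      (*value)[i] = xvalue[i];       // ...instead of value->push_back(xvalue[i])
}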
Entry  30 May 2020, Gennaro Tortone, Bug Report, wrong run number 
Hi,
I build MIDAS and ROOTANA using same tag (midas-2020-03-a, rootana-2020-03-a):

if I build examples in ROOTANA I got wrong run number (always 0):

[root@lxgentor examples]# ./ana.exe -r9090

Using THttpServer in read/write mode
TMidasOnline::connect: Connecting to experiment "exo" on host 
"lxgentor.na.infn.it"
MVOdb::SetMidasStatus: Error: MIDAS db_get_value() at ODB path "//runinfo/Run 
number" returned status 312
Opened output file with name : output00000000.root
TDT724Waveform done init...... 
Create Histos
Create Histos
TMidasOnline::eventRequest: Event request: buffer "SYSTEM" (2), event id 
0xffffffff, trigger mask 0xffffffff, sample 2, request id: 0

it seems that some function tries to get "//runinfo/Run number" (double slash) 
instead of "/runinfo/Run number"...

Thanks in advance,
Gennaro
    Reply  30 May 2020, Thomas Lindner, Bug Report, wrong run number 
Hi,

I fixed this particular case, so that now I get the run number correctly.

But Konstantin will need to explain how this class is supposed to be used more generally. The example programs are a mix: some need leading slashes and others do not:

Thomass-MacBook-Pro-3:rootana lindner$ grep -s 'runinfo/Run' */*.c*
libAnalyzer/TRootanaEventLoop.cxx:   fODB->RI("runinfo/Run number", &fCurrentRunNumber);
manalyzer/manalyzer.cxx:   int run_number = midas->odbReadInt("/runinfo/Run number");
manalyzer/manalyzer_v0.cxx:   int run_number = midas->odbReadInt("/runinfo/Run number");
old_analyzer/analyzer.cxx:   gOdb->RI("runinfo/Run number", &gRunNumber);

Cheers,
Thomas

> 
> Hi,
> I build MIDAS and ROOTANA using same tag (midas-2020-03-a, rootana-2020-03-a):
> 
> if I build examples in ROOTANA I got wrong run number (always 0):
> 
> [root@lxgentor examples]# ./ana.exe -r9090
> 
> Using THttpServer in read/write mode
> TMidasOnline::connect: Connecting to experiment "exo" on host 
> "lxgentor.na.infn.it"
> MVOdb::SetMidasStatus: Error: MIDAS db_get_value() at ODB path "//runinfo/Run 
> number" returned status 312
> Opened output file with name : output00000000.root
> TDT724Waveform done init...... 
> Create Histos
> Create Histos
> TMidasOnline::eventRequest: Event request: buffer "SYSTEM" (2), event id 
> 0xffffffff, trigger mask 0xffffffff, sample 2, request id: 0
> 
> it seems that some function try to get "//runinfo/Run number" (double slash) 
> instead of "/runinfo/Run number"...
> 
> Thanks in advance,
> Gennaro
       Reply  30 May 2020, Gennaro Tortone, Bug Report, wrong run number 
Hi,

thanks a lot for your grep... I temporarily fixed my local ROOTANA code with this:

diff --git a/libAnalyzer/TRootanaEventLoop.cxx b/libAnalyzer/TRootanaEventLoop.cxx
index 57111b6..90cf384 100644
--- a/libAnalyzer/TRootanaEventLoop.cxx
+++ b/libAnalyzer/TRootanaEventLoop.cxx
@@ -733,7 +733,7 @@ int TRootanaEventLoop::ProcessMidasOnline(TApplication*app, const char* hostname
    /* fill present run parameters */
 
    fCurrentRunNumber = 0;
-   fODB->RI("/runinfo/Run number", &fCurrentRunNumber);
+   fODB->RI("runinfo/Run number", &fCurrentRunNumber);
 
    //   if ((fODB->odbReadInt("/runinfo/State") == 3))
    //startRun(0,gRunNumber,0);

Regards,
Gennaro

> Hi,
> 
> I fixed this particular case, so that I now I get the run number correctly.
> 
> But Konstantin will need to explain how this class is supposed to be used more generally.  The example programs have a mix with sometimes needing leading slashes and other times not:
> 
> Thomass-MacBook-Pro-3:rootana lindner$ grep -s 'runinfo/Run' */*.c*
> libAnalyzer/TRootanaEventLoop.cxx:   fODB->RI("runinfo/Run number", &fCurrentRunNumber);
> manalyzer/manalyzer.cxx:   int run_number = midas->odbReadInt("/runinfo/Run number");
> manalyzer/manalyzer_v0.cxx:   int run_number = midas->odbReadInt("/runinfo/Run number");
> old_analyzer/analyzer.cxx:   gOdb->RI("runinfo/Run number", &gRunNumber);
> 
> Cheers,
> Thomas
> 
> > 
> > Hi,
> > I build MIDAS and ROOTANA using same tag (midas-2020-03-a, rootana-2020-03-a):
> > 
> > if I build examples in ROOTANA I got wrong run number (always 0):
> > 
> > [root@lxgentor examples]# ./ana.exe -r9090
> > 
> > Using THttpServer in read/write mode
> > TMidasOnline::connect: Connecting to experiment "exo" on host 
> > "lxgentor.na.infn.it"
> > MVOdb::SetMidasStatus: Error: MIDAS db_get_value() at ODB path "//runinfo/Run 
> > number" returned status 312
> > Opened output file with name : output00000000.root
> > TDT724Waveform done init...... 
> > Create Histos
> > Create Histos
> > TMidasOnline::eventRequest: Event request: buffer "SYSTEM" (2), event id 
> > 0xffffffff, trigger mask 0xffffffff, sample 2, request id: 0
> > 
> > it seems that some function try to get "//runinfo/Run number" (double slash) 
> > instead of "/runinfo/Run number"...
> > 
> > Thanks in advance,
> > Gennaro
       Reply  03 Jun 2020, Konstantin Olchanski, Bug Report, wrong run number 
> 
> But Konstantin will need to explain how this class is supposed to be used more generally.
>

MVOdb is a replacement for VirtualOdb. It has many functions that were missing in VirtualOdb
and it implements access to both XML and JSON ODB dumps.

>  The example programs have a mix with sometimes needing leading slashes and other times not:
> 
> libAnalyzer/TRootanaEventLoop.cxx:   fODB->RI("runinfo/Run number", &fCurrentRunNumber);
> old_analyzer/analyzer.cxx:   gOdb->RI("runinfo/Run number", &gRunNumber);

RI() is MVOdb, no absolute paths, leading "/" not permitted.

> manalyzer/manalyzer.cxx:   int run_number = midas->odbReadInt("/runinfo/Run number");
> manalyzer/manalyzer_v0.cxx:   int run_number = midas->odbReadInt("/runinfo/Run number");

Hmmm... good catch. These are VirtualOdb calls, but they bypass the VirtualOdb interface (which was removed)
and call the odb access methods directly from the TMidasOnline class. They should be replaced
with MVOdb RI() calls (and the leading "/" removed).
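A minimal sketch of that conversion (the header name and the way the MVOdb instance is obtained are
assumptions; the point is only the call style):

#include "mvodb.h"   // MVOdb interface header (assumed name)

int GetRunNumber(MVOdb* odb)   // odb: the MVOdb instance held by the analyzer
{
   int run_number = 0;
   odb->RI("runinfo/Run number", &run_number);  // relative path, no leading "/"
   return run_number;
}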

I was going to look at the TMidasOnline class next - many things need to be updated,
but it will have to wait until I update the MVOdb and the tmfe documentation and until
I update midasio to read and write the new bank32a data files.

K.O.
    Reply  03 Jun 2020, Konstantin Olchanski, Bug Report, wrong run number 
> I build MIDAS and ROOTANA using same tag (midas-2020-03-a, rootana-2020-03-a):
>
> MVOdb::SetMidasStatus: Error: MIDAS db_get_value() at ODB path "//runinfo/Run 
> number" returned status 312
>
> it seems that some function try to get "//runinfo/Run number" (double slash) 
> instead of "/runinfo/Run number"...
> 

You made a mistake somewhere.

rootana release rootana-2020-03 uses VirtualOdb, not MVOdb, so there should be no 
messages from "MVOdb". ODB path "/runinfo/run number" is correct for the 
VirtualOdb classes. MVOdb classes use relative paths, absolute path starting from 
"/" is not permitted, hence the error.

You most likely are using the master branch of rootana.

Commit switching rootana from VirtualOdb to mvodb was made after the release 2020-03, in May:
https://bitbucket.org/tmidas/rootana/commits/522cd07181c59f557e9ef13a70223ec44be44bc9

(I confirm the incorrect call to RI("/runinfo/..."), Thomas already fixed it in 
the repository, big thanks!).

The dust has not fully settled yet on the refactoring of rootana; until then, I 
recommend that people use the release version(s).

K.O.
       Reply  03 Jun 2020, Gennaro Tortone, Bug Report, wrong run number 
> > I build MIDAS and ROOTANA using same tag (midas-2020-03-a, rootana-2020-03-a):
> >
> > MVOdb::SetMidasStatus: Error: MIDAS db_get_value() at ODB path "//runinfo/Run 
> > number" returned status 312
> >
> > it seems that some function try to get "//runinfo/Run number" (double slash) 
> > instead of "/runinfo/Run number"...
> > 
>
> You made a mistake somewhere.

you are right !
I used rootana-2020-03-a instead of release/rootana-2020-03... my fault !

I have to (re)compile MIDAS for the same error...

Thanks !
Gennaro

> 
> rootana release rootana-2020-03 uses VirtualOdb, not MVOdb, so there should be no 
> messages from "MVOdb". ODB path "/runinfo/run number" is correct for the 
> VirtualOdb classes. MVOdb classes use relative paths, absolute path starting from 
> "/" is not permitted, hence the error.
> 
> You most likely are using the master branch of rootana.
> 
> Commit switching rootana from VirtualOdb to mvodb was made after the release 2020-
> 03, in May:
> https://bitbucket.org/tmidas/rootana/commits/522cd07181c59f557e9ef13a70223ec44be44
> bc9
> 
> (I confirm the incorrect call to RI("/runinfo/..."), Thomas already fixed it in 
> the repository, big thanks!).
> 
> The dust is not fully settled yet on the refactoring of rootana, until then, I 
> recommend that people use the release version(s).
> 
> K.O.
          Reply  04 Jun 2020, Konstantin Olchanski, Bug Report, wrong run number 
> > You made a mistake somewhere.
> 
> you are right !
> I used rootana-2020-03-a instead of release/rootana-2020-03... my fault !
> 
> I have to (re)compile MIDAS for the same error...

The MIDAS version, including which branch you have used, is reported on the midas "help" page and by 
the odbedit "version" command.

For example my midas reports:
Tue Mar 24 20:54:11 2020 -0700 - midas-2020-03-a-98-g8b462cc9 on branch develop

This version string includes:
date of commit
git tag and commit number (see "git describe")
"-dirty" if you have modified sources ("git status" shows modified files)
and which git branch you have (I have "develop", you should have "release/midas-2020-03")

I am not sure how ROOTANA reports the version and build strings. I shall check.

K.O.
Entry  04 Jun 2020, Lukas Gerritzen, , stime() deprecated in glibc 2.31 
In glibc 2.31, the stime function was deprecated:

* The obsolete function stime is no longer available to newly linked
  binaries, and its declaration has been removed from <time.h>.
  Programs that set the system time should use clock_settime instead.

https://sourceware.org/legacy-ml/libc-announce/2020/msg00001.html

This creates a problem in src/system.cxx:3197:4
Entry  04 Jun 2020, Hisataka YOSHIDA, Forum, Template of slow control frontend 
I'm a beginner with Midas, and I am trying to develop a slow control front-end with the latest Midas.
I found scfe.cxx in the "examples", but it is not enough of a reference for writing a front-end for my own devices, 
because it only covers the nulldevice and null bus driver case...
(I did manage to run the HV front-end for the ISEG MPod, because that device driver already exists...)

Can I get some frontend examples for simple TCP/IP and/or RS-232 devices?
Ideally, I would like to have examples of both a frontend and a device driver.
(If any device driver included in the package is similar, please tell me.)

Thanks a lot.
    Reply  04 Jun 2020, Pintaudi Giorgio, Forum, Template of slow control frontend MIDAS_frontend_sample.zip
> I’m beginner of Midas, and trying to develop the slow control front-end with the latest Midas.
> I found the scfe.cxx in the “example”, but not enough to refer to write the front-end for my own devices 
> because it contains only nulldevice and null bus driver case...
> (I could have succeeded to run the HV front-end for ISEG MPod, because there is the device driver...)
> 
> Can I get some frontend examples such as simple TCP/IP and/or RS232 devices?
> Hopefully, I would like to have examples of frontend and device driver.
> (if any device driver which is included in the package is similar, please tell me.)
> 
> Thanks a lot.

Dear Yoshida-san,
my name is Giorgio and I am a Ph.D. student working on the T2K experiment.
I had to write many MIDAS frontends recently, so I think that my code could be of some help to you.

As you might already know, the MIDAS slow control system is structured into three layers/levels.

 - The highest layer is the "class" layer that directly interfaces with the user and the ODB. It is called 
"class" layer because it refers to a class of devices (for example all the high voltage power supplies, 
etc...). The idea is that in the same experiment you can have many different models of power supplies but 
all of them can be controlled with a single class driver.

 - Then there is the "device" layer that implements the functions specific to the particular device.

 - Finally, there is the "BUS" layer that directly communicates with the device. The BUS can be Ethernet 
(TCP/IP), Serial (RS-232 / RS-422 / RS-485), USB, etc ...

You can read more about the MIDAS slow control system here:
https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System
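To make the layering concrete, here is a minimal sketch (my own illustration using the null drivers from
the scfe example, not code from the attached archive) of how the layers are wired together in an mfe.cxx
frontend; it assumes the usual includes (midas.h, mfe.h and the nulldev/null driver declarations):

/* device layer + bus layer: each DEVICE_DRIVER entry pairs a device driver
   (here nulldev) with a bus driver (here null); a real frontend swaps in e.g.
   a TCP/IP or RS-232 bus driver. The channel count is illustrative. */
DEVICE_DRIVER hv_driver[] = {
   {"Dummy HV", nulldev, 16, null},
   {""}
};

/* class layer: in the EQUIPMENT list the class driver and this device-driver
   list are referenced together, roughly as
      cd_hv_read,  cd_hv,  hv_driver
   in the readout-routine / class-driver / driver-list fields. */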

Anyway, you need to write code for all those layers. If you are lucky, you can reuse some of the already 
existing MIDAS code. Keep in mind that all the examples that you find in the MIDAS documentation and the 
MIDAS source code are written in C (even if it is then compiled with g++). But you can write a frontend in 
C++ without any problem, so choose whichever language you are most familiar with.

I am attaching an archive with some sample code directly taken from our experiment. It is just a small 
fraction of the code and is not meant to be compilable. The code is released under the GPL3 license, so you can use 
it as you please, but if you do, please credit my name and the WAGASCI-T2K experiment somewhere visible.

In the archive, you can find two example frontends with the respective drivers. The "Triggers" frontend is 
written in C++ (or C+ if you consider that the mfe.cxx API is very C-like). The "WaterLevel" frontend is 
written in plain C. The "Triggers" frontend controls our trigger board called CCC and the "WaterLevel" 
frontend controls our water level sensors called PicoLog 1012. They share a custom implementation of the 
TCP/IP bus. Anyway, this is not relevant to you. You may just want to take a look at the code structure.

Finally, recently there have been some very interesting developments regarding the ODB C++ API. I would 
definitely take a look at that. I wish I had that when I was developing these frontends.

Good luck

--
Pintaudi Giorgio, Ph.D. student
Neutrino and Particle Physics Minamino Laboratory
Faculty of Science and Engineering, Yokohama National University
giorgio.pintaudi.kx@ynu.jp
TEL +81(0)45-339-4182
       Reply  04 Jun 2020, Hisataka YOSHIDA, Forum, Template of slow control frontend 
Dear Giorgio,

Thank you very much for your kind and quick reply!

I appreciate you giving me such a nice explanation, experience, and great sample codes (This is what I desired!).
They all are useful for me. I will try to write my frontend codes using gift from you.

Thank you again!

Best regards,
Hisataka Yoshida
    Reply  04 Jun 2020, Stefan Ritt, Forum, Template of slow control frontend 
> I’m beginner of Midas, and trying to develop the slow control front-end with the latest Midas.
> I found the scfe.cxx in the “example”, but not enough to refer to write the front-end for my own devices 
> because it contains only nulldevice and null bus driver case...
> (I could have succeeded to run the HV front-end for ISEG MPod, because there is the device driver...)
> 
> Can I get some frontend examples such as simple TCP/IP and/or RS232 devices?
> Hopefully, I would like to have examples of frontend and device driver.
> (if any device driver which is included in the package is similar, please tell me.)

Have you checked the documentation?

https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System

Basically you have to replace the nulldevice driver with a "real" driver. You find all existing drivers under 
midas/drivers/device. If your favourite is not there, you have to write it. Use one which is close to the one 
you need and modify it.

Best,
Stefan
       Reply  04 Jun 2020, Hisataka YOSHIDA, Forum, Template of slow control frontend 
Dear Stefan,

Thank you for your quick reply.


> Have you checked the documentation?
> 
> https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System

Yes, I have read the wiki, but it is not easy to figure out how to handle my individual case.

> Basically you have to replace the nulldevice driver with a "real" driver. You find all existing drivers under 
> midas/drivers/device. If your favourite is not there, you have to write it. Use one which is close to the one 
> you need and modify it.

Okay, I will try to write drivers for my own devices based on the existing drivers.
(Maybe I can find some device drivers which use TCP/IP or RS-232.)

Best regards,
Hisataka Yoshida
Entry  24 Apr 2020, Pintaudi Giorgio, Forum, API to read MIDAS format file 
Dear MIDAS people,
I need to borrow your wisdom for a bit.
I am developing a piece of software that should read the history data stored in a
.midas file (MIDAS format) and integrate it into the WAGASCI data quality output.
In other words, I need to read some temperature values stored in a .midas file and
compare them with the MPPC gains and check for temperature/gain dependence.
I see three possibilities:
  • write a custom parser in C++ using the instructions contained in the Mhformat page;
  • call the mhist program from within my application;
  • call the mhdump program from within my application;
Which solution do you think is the best?
Since there is no need for raw performance, I would like, if possible, to write my application in Python3, but C++ is also an option.
    Reply  24 Apr 2020, Stefan Ritt, Forum, API to read MIDAS format file 
I guess all three options would work. I just tried mhist and it still works with the "FILE" history

mhist -e <equipment name> -v <variable name> -h 10

for dumping a variable for the last 10 hours.

I could not get mhdump to work with current history files; maybe it only works with the "MIDAS" history and not the "FILE" history (see https://midas.triumf.ca/MidasWiki/index.php/History_System#History_drivers). Maybe Konstantin, who wrote mhdump, has some idea.

Writing your own parser is certainly possible (even in Python), but of course more work.

Stefan
       Reply  24 Apr 2020, Pintaudi Giorgio, Forum, API to read MIDAS format file 

Stefan Ritt wrote:
I guess all three options would work. I just tried mhist and it still works with the "FILE" history

mhist -e <equipment name> -v <variable name> -h 10

for dumping a variable for the last 10 hours.

I could not get mhdump to work with current history files, maybe it only works with "MIDAS" history and not "FILE" history (see https://midas.triumf.ca/MidasWiki/index.php/History_System#History_drivers). Maybe Konstantin who wrote mhdump has some idea.

Writing your own parser is certainly possible (even in Python), but of course more work.

Stefan


Dear Stefan,
thank you very much for the quick reply. Sorry if my message was not very clear; we are actually using the "MIDAS" history format and not the "FILE" one. So both mhist and mhdump should be OK (however, I have only tested mhist).
Hypothetically, which of the two lends itself better to being "batched", i.e. being read and controlled by a program/routine? For example, some programs give the option to have the output formatted in JSON, etc...
          Reply  24 Apr 2020, Stefan Ritt, Forum, API to read MIDAS format file 

Pintaudi Giorgio wrote:

Hypothetically which one between the two lends itself the better to being "batched"? I mean to be read and controlled by a program/routine. For example, some programs give the option to have the output formatted in json, etc...


Can't say off the top of my head. Both programs are pretty old (written well before JSON was invented, so there is no support for it in either). mhist was written by me; mhdump was written by Konstantin. I would give both a try and see which you like more.

Stefan
    Reply  25 Apr 2020, Konstantin Olchanski, Forum, API to read MIDAS format file 
> Dear MIDAS people, I need to borrow your wisdom for a bit. I am developing a piece of software 
> that should read the history data stored in a .midas file (MIDAS format) and integrate it into 
> the WAGASCI data quality output. In other words, I need to read some temperature values stored 
> in a .midas file and compare them with the MPPC gains and check for temperature/gain dependence. 
> I see three possibilities:
>   • write a custom parser in C++ using the instructions contained in the Mhformat page
>     (https://midas.triumf.ca/MidasWiki/index.php/Mhformat);
>   • call the mhist program from within my application;
>   • call the mhdump program from within my application;
> Which solution do you think is the best? Because there is no need for raw performance, if 
> possible, I would like to write my application in Python3 but C++ is also an option.

(Please write messages in plain text format, thank you)

The format of .hst midas history files is pretty simple, and mhdump.cxx is an easy-to-read 
illustration of how to read it from first principles (without going through the midas library, 
which can be somewhat complicated). The newer "FILE" format for history is even simpler 
to read because it is just fixed-record-size binary data preceded by a text header.

You can also use the mh2sql program to import history data into an SQL database (mysql 
and sqlite should work) or to convert .hst files to "FILE" format files. This works well
for "archiving" history data, because the "FILE" format works better for looking at old data
and for looking at data on a "months" or "years" timescale.

Back to your question, you can certainly use "mhdump" as is, using a pipe (popen()), or 
you can package mhdump.cxx as a c++ class and use it in your application. If you go this 
route, your contribution of such a c++ class back to midas would be very welcome.
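A minimal sketch of the pipe approach (the history file name is illustrative, and the line-by-line
handling is left to your own parser):

#include <cstdio>

int main()
{
   FILE *p = popen("mhdump 200101.hst", "r");
   if (!p)
      return 1;
   char line[1024];
   while (fgets(line, sizeof(line), p))
      printf("%s", line);   // hand each line of mhdump output to your own parser
   return pclose(p);
}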

You can also use mhist, but the mhist code cannot be trivially packaged as a c++ class
to use in your application.

You can also suggest that we write an easier-to-use history utility; we are always open to 
suggested improvements.

Let us know how it works out for you. Good luck!

K.O.
       Reply  03 May 2020, Pintaudi Giorgio, Forum, API to read MIDAS format file 
> The format of .hst midas history files is pretty simple and mhdump.cxx is an easy to read 
> illustration on how to read it from basic principles (without going through the midas library, 
> which can be somewhat complicated). The newer "FILE" format for history is even simpler 
> to read because it is just fixed-record-size binary data prepended by a text header.
> 
> You can also use the mh2sql program to import history data into an sql database (mysql 
> and sqlite should work) or to convert .hst files to "FILE" format files. This works well
> for "archiving" history data, because the "FILE" format works better for looking at old data,
> and for looking at data in "months" or "years" timescale.
> 
> Back to your question, you can certainly use "mhdump" as is, using a pipe (popen()), or 
> you can package mhdump.cxx as a c++ class and use it in your application. If you go this 
> route, your contribution of such a c++ class back to midas would be very welcome.
> 
> You can also use mhist, but the mhist code cannot be trivially packaged as a c++ class
> to use in your application.
> 
> You can also suggest that we write an easier to use history utility, we are always open to 
> suggested improvements.
> 
> Let us know how it works out for you. Good luck!
> 
> K.O.

Dear Konstantin,
thank you very much for the wealth of information you provided.
I have thought about it and I see two options:

- One is to convert to SQL format and then use a SQLite library to import the data in my 
application.

- The other is to encapsulate the mhdump.cxx code into a C++ class, as you say.

I am leaning towards the first option for three reasons.
1. I have never used an SQLite database, so it is a good learning opportunity for me.
2. The SQLite database format is very well known and widespread, so there are tons of tools to 
handle it.
3. I have taken a look at the mhdump.cxx source code and I think it is a beautiful piece of code, 
but it has a very "functional" taste with little encapsulation. Basically, all the fun happens 
inside the readHstFile function and there is no trivial way to get the data out of it. I don't mean 
that it would be difficult to wrap it in a C++ class, but I feel that I can learn more by going 
the SQL way.

P.S. Some time ago either you or Stefan (I don't remember which) recommended CLion as a C++ IDE. I have tried it 
(together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
as an IDE, while it took me minutes to get much better results in CLion. Thank you very much for 
your recommendation.
          Reply  03 May 2020, Konstantin Olchanski, Forum, API to read MIDAS format file 
>
> - One is to convert to SQL format and then use a SQLite library to import the data in my 
> application.
> 

You can also configure midas to write history directly to an SQLITE database. I have not used
it recently, but it should still work. In terms of efficiency, sqlite file size is about the same
as .hst files. sqlite file and table naming is similar to the SQL and FILE implementation.

(But note that back when I implemented the SQLITE history writer, sqlite database corruption
recovery instructions were "delete the file, restore from backup". And indeed in every test
experiment I tried, the sqlite history databases eventually corrupted themselves. You see the
same thing with google-chrome: lots of sqlite errors (bad locking, corrupted table, etc.)
in its terminal output.)

>
> - The other is to encapsulate the mhdump.cxx code into a C++ class, as you say.
> 

If I were to write this today, there would be a c++ class that takes a history file,
iterates over all records and calls "callback" classlets. You can see this in the history.h
(HistoryBufferInterface) and in the tmfe.h (RpcHandlerInterface, etc).

I think this style of OO programming originally comes from Java. If you so desire,
writing an "mhdump" class could be a nice way to learn it.

> 
> PS some time ago, I don't remember if you or Stefan, recommended CLion as C++ IDE. I have tried it 
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
> as a IDE, while it took me minutes to have much better results in CLion. Thank you very much for 
> your recommendation.
>

I remember, years ago, the Borland TurboC IDE was like a gift from Gods. But today, I think IDEs have
declined in quality and usefulness. They clog the screen with too much eye candy and fluff, use hard
to read fonts and silly colours, insist on using tabs where I want spaces, reformat the text even as I type it,
and detract from productive work with distracting popups ("try this new function!", "let's upgrade now!").

For serious programming, I use emacs with minimal decorations. I can easily open 3 or 4 windows at the same
time and still have enough screen space left for a terminal to run "make". And it is the only editor that can
edit the same file in two or more windows at the same time. You do not know you need this until
you work on odb.cxx.

K.O.
             Reply  04 May 2020, Pintaudi Giorgio, Forum, API to read MIDAS format file 
> (But note that back when I implemented the SQLITE history writer, sqlite database corruption
> recovery instructions were "delete the file, restore from backup". And indeed in every test
> experiment I tried, the sqlite history databases eventually corrupted themselves. You see
> same thing with google-chrome, lots of sqlite errors (bad locking, corrupted table, etc)
> in it's terminal output).

Thank you for the info. But I do not quite understand the comment above.
Do you mean that there is something wrong with the SQLite library itself or with the way that MIDAS creates the SQLite 
database?
          Reply  03 May 2020, Stefan Ritt, Forum, API to read MIDAS format file ReferenceCardForMac.pdf
> PS some time ago, I don't remember if you or Stefan, recommended CLion as C++ IDE. I have tried it 
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
> as a IDE, while it took me minutes to have much better results in CLion. Thank you very much for 
> your recommendation.

Was probably me. I use it as my standard IDE and am quite happy with it. All the things KO likes about emacs, plus much 
more. Especially the CMake integration is nice, since you don't have to leave the IDE for editing, compiling and debugging. 
The tooltips the IDE gave me in the past months made me write much better code. So, quite the opposite opinion compared 
with KO, but luckily this planet has space for all kinds of opinions. I made myself the attached cheat sheet, which lets me 
do things much faster. Maybe you can use it.

Stefan
             Reply  26 May 2020, Pintaudi Giorgio, Forum, API to read MIDAS format file 
Eventually, I settled on the SQLite format.
I was able to convert the MIDAS .hst history files to an SQLite
database (.sqlite3) using the mh2sql utility.
It worked out nicely, thank you for the advice.

However, as Konstantin predicted, I did notice some
database corruption when a couple of problematic .hst
files were read. I solved the issue by just deleting
those .hst files (I think they were empty anyway).

Now I am developing a piece of code to read the
database using the SOCI library and integrate it
into a TTree, but this is not relevant for MIDAS, I think.

Thank you again for the discussion.
Entry  22 May 2020, Thomas Lindner, Bug Report, More trouble with openssl on macos 
For the record, here's my report of difficulties getting mongoose to compile with macos. This is similar to a 
problem reported before, but with slightly different error messages, so I put them here for posterity.

Setup: 
- macos: 10.13.6
- xcode: 9.2
- gcc: Thomass-MacBook-Pro-3:build lindner$ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-
dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.7.0
- midas: today's version

Start with the openssl version I had already installed.  cmake says 

-- Found OpenSSL: /usr/lib/libcrypto.dylib (found version "1.0.2s") 
-- MIDAS: Found OpenSSL version 1.0.2s

make install fails with this error:

[ 35%] Linking CXX executable mhttpd
cd /Users/lindner/packages/midas/build/progs && /usr/local/Cellar/cmake/3.15.0/bin/cmake -E 
cmake_link_script CMakeFiles/mhttpd.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++  -O2 -g -
DNDEBUG -Wl,-search_paths_first -Wl,-headerpad_max_install_names  CMakeFiles/mhttpd.dir/mhttpd.cxx.o 
CMakeFiles/mhttpd.dir/mongoose616.cxx.o CMakeFiles/mhttpd.dir/mgd.cxx.o 
CMakeFiles/mhttpd.dir/__/mscb/src/mscb.cxx.o  -o mhttpd ../libmidas.a /usr/lib/libssl.dylib 
/usr/lib/libcrypto.dylib -lz -lcurl -lsqlite3
Undefined symbols for architecture x86_64:
  "_SSL_CTX_set_psk_client_callback", referenced from:
      _mg_ssl_if_conn_init in mongoose616.cxx.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Use macports to upgrade to newer openssl

sudo port selfupdate
sudo port upgrade outdated

Now cmake says

-- Found OpenSSL: /usr/lib/libcrypto.dylib (found version "1.1.1g") 
-- MIDAS: Found OpenSSL version 1.1.1g

Error message is different now:

cd /Users/lindner/packages/midas/build/progs && /usr/local/Cellar/cmake/3.15.0/bin/cmake -E 
cmake_link_script CMakeFiles/mhttpd.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++  -O2 -g -
DNDEBUG -Wl,-search_paths_first -Wl,-headerpad_max_install_names  CMakeFiles/mhttpd.dir/mhttpd.cxx.o 
CMakeFiles/mhttpd.dir/mongoose616.cxx.o CMakeFiles/mhttpd.dir/mgd.cxx.o 
CMakeFiles/mhttpd.dir/__/mscb/src/mscb.cxx.o  -o mhttpd ../libmidas.a /usr/lib/libssl.dylib 
/usr/lib/libcrypto.dylib -lz -lcurl -lsqlite3
Undefined symbols for architecture x86_64:
  "_OPENSSL_init_ssl", referenced from:
      _mg_mgr_init_opt in mongoose616.cxx.o
      _mg_ssl_if_init in mongoose616.cxx.o
  "_SSL_CTX_set_options", referenced from:
      _mg_ssl_if_conn_init in mongoose616.cxx.o
  "_SSL_CTX_set_psk_client_callback", referenced from:
      _mg_ssl_if_conn_init in mongoose616.cxx.o
ld: symbol(s) not found for architecture x86_64

Fine.  Doing 'cmake -D NO_SSL=1 ..' to build still works fine; I will stick with that since I don't need SSL on my 
laptop.

Perhaps we should disable SSL by default on Macos?  People may only have ~50% chance of getting it to 
work.
  
    Reply  22 May 2020, Konstantin Olchanski, Bug Report, More trouble with openssl on macos 
> For the record, here's my report of difficulties getting mongoose to compile with macos. 
> -- MIDAS: Found OpenSSL version 1.0.2s
> -- MIDAS: Found OpenSSL version 1.1.1g
> ... [ all of them did not work ]

For the record, I get this on mac os 10.15.4 and it works.
-- MIDAS: Found OpenSSL version 1.1.1g

I think I am quite fed up with this openssl business, too.

What I will do in MIDAS is fix the mbedtls detection, add mbedtls instructions
in the documentation and remove openssl from mhttpd build.

Result will be:
- default build will have mhttpd without https support, and this works in 100% of our use cases at TRIUMF.
- if users do not want to use the apache https proxy, they have to "git clone" mbedtls, build it and rebuild mhttpd; then
they get https support, but for https certificate management (getting certificates, renewing them, etc.) they are
on their own.

Since mhttpd has no integration with certbot, automatic management of https certificates does not work,
so good luck again.

In theory, I can try to add certbot integration, but even the most basic tools are missing, for example, openssl
does not report certificate expiration dates (I guess I must write my own code to examine the certificate
and hope my idea of expiration matches their idea). Since I do not see certificate expiration dates, every day I could
blindly run "certbot renew" and restart openssl with the updated certificate (but I think openssl does
not have a "restart" function, so again, forget about it). Adding insult to injury, by default, certbot stores certificates
in a secret location in /etc where mhttpd cannot access them.

The bottom line is that the powers-that-be messed up https certificate management, and until that is sorted out and easy
to integrate with custom web servers, I can only recommend that mhttpd run behind an https proxy provided by the OS.

K.O.