ID   Date   Author   Topic   Subject
  1880   24 Apr 2020   Stefan Ritt   Forum   API to read MIDAS format file
I guess all three options would work. I just tried mhist and it still works with the "FILE" history

mhist -e <equipment name> -v <variable name> -h 10

for dumping a variable for the last 10 hours.

I could not get mhdump to work with current history files; maybe it only works with the "MIDAS" history and not with the "FILE" history (see https://midas.triumf.ca/MidasWiki/index.php/History_System#History_drivers). Maybe Konstantin, who wrote mhdump, has some idea.

Writing your own parser is certainly possible (even in Python), but of course more work.

Stefan
  1883   24 Apr 2020   Stefan Ritt   Forum   API to read MIDAS format file

Pintaudi Giorgio wrote:

Hypothetically, which one of the two lends itself better to being "batched"? I mean to being read and controlled by a program/routine. For example, some programs give the option to have the output formatted in JSON, etc...


Can't say off the top of my head. Both programs are pretty old (written well before JSON was invented, so neither supports it). mhist was written by me, mhdump by Konstantin. I would give both a try and see which one you like more.
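
If you want to drive mhist from another program, here is a minimal sketch of the "batching" idea: run the command shown in the previous message and collect its text output. This is only an illustration; the exact column layout of the output (I assume one sample per line) is something you should verify against your own mhist.

#include <cstdio>
#include <string>
#include <vector>

// Run "mhist -e <equipment> -v <variable> -h <hours>" and return its output lines.
std::vector<std::string> run_mhist(const std::string &equipment,
                                   const std::string &variable, int hours)
{
   std::string cmd = "mhist -e \"" + equipment + "\" -v \"" + variable +
                     "\" -h " + std::to_string(hours);
   std::vector<std::string> lines;
   if (FILE *p = popen(cmd.c_str(), "r")) {
      char buf[4096];
      while (fgets(buf, sizeof(buf), p))
         lines.emplace_back(buf);        // each line typically holds "time  value"
      pclose(p);
   }
   return lines;
}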

Stefan
  1889   26 Apr 2020   Stefan Ritt   Forum   Questions and discussions on the Frontend ODB tree structure.
Dear Yu Chen,

in my opinion, you can follow two strategies:

1) Follow the EQ_SLOW example from the distribution and write a device driver for the hardware you want to control (see the sketch below). There are dozens of experiments worldwide which use that scheme, and it works well for lots of 
different devices. By doing that, you automatically get things like a write cache (a value is only written to the actual device if it has really changed) and a dead band (measured data are only written to the ODB if they change by more 
than a certain value, to suppress noise). Furthermore, you get events created automatically and history for all your measured variables. This scheme might look complicated at first glance, and it's quite old-
fashioned (no C++ yet), but it has proven stable over the last ~20 years or so. Maybe the biggest advantage of this system is that each device gets its own readout and control thread, so one slow device cannot 
block other devices. When this was introduced more than 10 years ago, we saw a big improvement in all experiments.

2) Do everything by yourself. This is the way you describe below. Of course you are free to do whatever you like, and you will find "special" solutions for all your problems. But then you move away from the "standard 
scheme", you lose all the benefits described under 1), and of course you are on your own: if something does not work, it will be in your code, and nobody from the community can help you.

So choose carefully between 1) and 2).
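
To make option 1) a bit more concrete, here is a stripped-down sketch loosely modelled on the EQ_SLOW example in the distribution. The device name, channel count and the nulldev/null placeholders are only illustrative and would be replaced by your own device and bus drivers; check examples/slowcont/scfe.cxx for the full version.

// one device driver list per class driver; DF_MULTITHREAD gives the device its own thread
DEVICE_DRIVER hv_driver[] = {
   {"Dummy HV", nulldev, 16, null, DF_MULTITHREAD},
   {""}
};

EQUIPMENT equipment[] = {
   {"HV",                          // equipment name
    {3, 0, "SYSTEM",               // event ID, trigger mask, event buffer
     EQ_SLOW, 0, "FIXED", TRUE,    // equipment type, source, format, enabled
     RO_RUNNING | RO_TRANSITIONS,  // when to read
     60000, 0, 0, 1,               // read period (ms), event limit, subevents, log history
     "", "", ""},
    cd_hv_read,                    // readout routine (class driver)
    cd_hv,                         // class driver main routine
    hv_driver},                    // device driver list
   {""}
};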

Best regards,
Stefan

> Dear MIDAS developers and colleagues,
> 
>     This is Yu CHEN of School of Physics, Sun Yat-sen University, China, working in the PandaX-III collaboration, an experiment under development to search the neutrinoless double 
> beta decay. We are working on the DAQ and slow control systems and would like to use Midas framework to communicate with the custom hardware systems, generally via Ethernet 
> interfaces. So currently we are focusing on the development of the FRONTEND program of Midas and have some questions and discussions on its ODB structure. Since I’m still not 
> experienced in the framework, it would be precious that you can provide some suggestions on the topic.
>     The current structure of the frontend ODB tree we have designed, together with our understanding on them, is as follows:
>       /Equipment/<frontend_name>/
>         ->  /Common/: Basic controls and left unchanged.
>         ->  /Variables/: (ODB records with MODE_READ) Monitored values that are AUTOMATICALLY updated from the bank data within each packed event. It is done by the update_odb() 
> function in mfe.cxx.
>         ->  /Statistics/: (ODB records with MODE_WRITE) Default status values that are AUTOMATICALLY updated by the calling of db_send_changed_records() within the mfe.cxx.
>         ->  /Settings/: All the user defined data can be located here. 
>             -> /stat/: (ODB records with MODE_WRITE) All the monitored values as well as program internal status. The update operation is done simultaneously when 
> db_send_changed_records() is called within the mfe.cxx.
>             -> /set/: (ODB records with MODE_READ) All the “Control” data used to configure the custom program and custom hardware.
> 
>     For our application, some of our detector equipment outputs small amounts of status and monitoring data together with the event data, so we currently choose not to use EQ_SLOW 
> and the 3-layer drivers for the readout. Our solution is to create two ODB sub-trees in /Settings/, similar to what the device driver does. However, this could introduce two 
> problems:
>     1) For /Settings/stat/: To prevent potential destruction of the hot-links of the /Variables/ and /Statistics/ sub-trees, all our status and monitoring data are stored separately in 
> the /Settings/stat/ sub-tree. Another consideration is that some monitored data do not come directly from the raw data, so packaging them into a bank for later display in /Variables/ 
> could lead to a complicated readout() function. However, this solution may be complicated for history logging. I have found that the ANALYZER modules could provide some 
> means of writing to the /Variables/ sub-tree, so I would like to know whether an analyzer should be used in this case.
>     2) For /Settings/set/: The "control" data (similar to the "demand" data in EQ_SLOW equipment) are currently put in several /Settings/set/ sub-trees, where each key in 
> them is hot-linked to a pre-defined hardware configuration function. However, some control operations are not related to a single ODB key or are related to several ODB keys (e.g. 
> configuring the Ethernet sockets), so the dispatcher function has to be assigned to the whole sub-tree, which I think can slow the response of the software. What we are 
> currently using is to set up a dedicated "control key", where different input values mean different operations (e.g. 1 means opening the socket, 2 means sending the UDP 
> packets to the target hardware, etc.). This "control key" is also used to implement the buttons shown on the Status/Custom web page. However, we would like to have your 
> suggestions or better solutions for that, considering the stability and fast response of the control.
> 
>     We are not sure whether the above understanding of, and troubles with, the Midas framework are correct, or whether they are just due to our limited knowledge of the framework, so we 
> would really appreciate your knowledge and help in using Midas better. Thank you so much!
  1890   26 Apr 2020   Stefan Ritt   Info   CLOCK_REALTIME on MacOS
> > > /Users/francesco/MIDAS/midas/src/system.cxx:3187:18: error: use of undeclared identifier 
> > > 'CLOCK_REALTIME'
> > >    clock_settime(CLOCK_REALTIME, &ltm);
> > > 
> > > Is it related to my (old) version of MacOS? Can I fix it somehow?
> 
> I think the "set clock" function is a holdover from embedded operating systems
> that did not keep track of clock time, i.e. VxWorks, and similar. Here a midas program
> will get the time from the mserver and set it on the local system. Poor man's ntp,
> poor man's ntpd/chronyd.
> 
> We should check if this function is called by anything, and if nothing calls it, maybe remove it?
> 
> K.O.

It's called in mfe.cxx via cm_synchronize:

/* set time from server */
#ifdef OS_VXWORKS
   cm_synchronize(NULL);
#endif

This was for old VxWorks systems which had no ntp daemon. It was asked for by Pierre a long time ago. I don't use it 
(I have no VxWorks). We can either remove it completely, or remove just the MacOSX part and have the function exit 
with an error message "not implemented on this OS" when called.
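
For the second option, a standalone sketch of the idea (my illustration only, not the actual ss_settime() code in system.cxx; the function name and the guard macro are placeholders):

#include <cstdio>
#include <ctime>

// Guard the clock-setting call so that platforms without clock_settime()/CLOCK_REALTIME
// fail with a clear message instead of breaking the build. Needs root where supported.
int set_system_clock(time_t seconds)
{
#if defined(__APPLE__) && !defined(CLOCK_REALTIME)
   std::fprintf(stderr, "set clock: not implemented on this OS\n");
   return -1;
#else
   struct timespec ltm{};
   ltm.tv_sec = seconds;
   return clock_settime(CLOCK_REALTIME, &ltm);
#endif
}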

Stefan
  1892   01 May 2020   Stefan Ritt   Forum   Taking MIDAS beyond 64 clients
Hi Joseph,

here some thoughts from my side:

- Breaking ODB compatibility in the master/develop midas branch is very bad, since almost all experiments worldwide are affected if they just blindly do a pull and then want to recompile and rerun. Currently, 
even during our Corona crisis, some experiments are still running and being monitored remotely.

- On the other hand, if we have to break compatibility, now is maybe a good time since most accelerators worldwide are off. But before doing so, I would like to get feedback from the main experiments 
around the world (MEG, T2K, g-2, DEAP besides ALPHA).

- Having a maximum of 64 clients was originally decided when memory was scarce. In the early days one had just a couple of megabytes of shared memory. Now this is not an issue any more, but I see 
another problem. The main status page gives a nice overview of the experiment. This only works because there is a limited number of midas clients and equipments. If we blow up to 1000+, the status 
page would be rather long and we would have to scroll up and down forever. In such a scenario one would at least have to redesign the status and program pages. To start your experiment, you would have to 
click 1000 times to start each front-end, which is also not very practical.

- Having 100's or 1000's of front-ends calls rather for a hierarchical design, like the LHC experiments have. That would be a major change of midas and cannot be done quickly. It would also result in 
much slower run start/stops.

- If you see limitations with your LabVIEW PCs, have you considered multi-threading in your front-ends? Note that the standard midas slow control system supports multithreaded devices 
(DF_MULTITHREAD). In MEG, we use about 800 microcontrollers via the MSCB protocol. They are grouped together, and each group is a multithreaded device in the midas slow control lingo, meaning the 
group gets its own thread for control and readout in the midas frontend. This way, one group cannot slow down the other groups. There is one front-end for all groups, which can be started/stopped with 
a single click, it shows up as just one line on the status page, and still it's pretty fast (a sketch follows below). Have you considered such a scheme? Your LabVIEW PCs would then not be individual front-ends, but would just make a 
network connection to the midas front-end, which then manages all LabVIEW PCs. The midas slow control system allows you to define custom commands (besides the usual read/write commands for slow 
control data), so you could maybe integrate everything you need into that scheme.
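
To make the grouping idea concrete, here is a sketch of what such a front-end could look like. The group names, channel counts and the use of the mscbdev driver are placeholders modelled on the MEG-like setup described above; adapt them to your own hardware.

// each DF_MULTITHREAD entry gets its own control/readout thread, so one slow
// group cannot block the others; all of them live in a single front-end and
// show up as one line on the status page
DEVICE_DRIVER group_driver[] = {
   {"Group 1", mscbdev, 100, NULL, DF_MULTITHREAD},
   {"Group 2", mscbdev, 100, NULL, DF_MULTITHREAD},
   {"Group 3", mscbdev, 100, NULL, DF_MULTITHREAD},
   {""}
};

EQUIPMENT equipment[] = {
   {"Environment",
    {10, 0, "SYSTEM", EQ_SLOW, 0, "FIXED", TRUE,
     RO_ALWAYS, 60000, 0, 0, 1, "", "", ""},
    cd_multi_read, cd_multi, group_driver},
   {""}
};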

Best,
Stefan

> 
> Hi all,
> I have been experimenting with a frontend solution for my experiment 
> (ALPHA). The intention is to replace how we log data from PCs running LabVIEW. 
> I am at the proof-of-concept stage. So far I have some promising 
> performance, able to handle 10-100x more data in my test setup (the current 
> limitation is just network bandwidth, MIDAS is impressively efficient). 
> ==========================================================================
> Our experiment has many PCs using LabVIEW which all log to MIDAS; the 
> experiment has grown such that we need some sort of load balancing in our 
> frontend.
> The concept was to have a 'supervisor frontend' and an array of 'worker 
> frontend' processes. 
> -A LabVIEW client would connect to the supervisor, then be referred to a 
> worker frontend for data logging. 
> -The supervisor could start a 'worker frontend' process as the demand 
> required.
> To increase accountability within the experiment, I intend to have a 'worker 
> frontend' per connecting PC. Then any rogue behavior would be clear from the 
> MIDAS frontpage.
> Presently there are around 20-30 of these LabVIEW PCs, but given how the group 
> is growing, I want to be sure that my data logging solution will be viable 
> for the next 5-10 years. With the increased use of single board computers, I 
> chose the target of benchmarking up to 1000 worker frontends... but I quickly 
> hit the '64 MAX CLIENTS' and '64 RPC CONNECTION' limit. Ok...
> branching and updating these limits:
> https://bitbucket.org/tmidas/midas/branch/experimental-beyond_64_clients
> I have two commits. 
> 1. update the memory layout assertions and use MAX_CLIENTS as a variable
> https://bitbucket.org/tmidas/midas/commits/302ce33c77860825730ce48849cb810cf
> 366df96?at=experimental-beyond_64_clients
> 2. Change the MAX_CLIENTS and MAX_RPC_CONNECTION
> https://bitbucket.org/tmidas/midas/commits/f15642eea16102636b4a15c8411330969
> 6ce3df1?at=experimental-beyond_64_clients
> Unintended side effects:
> I break compatibility of existing ODB files... the database layout has 
> changed and I read my old ODB as corrupt. In my test setup I can start from 
> scratch but this would be horrible for any existing experiment.
> Edit: I noticed 'make testdiff' is failing... also fails lok
> Early performance results:
> In early tests, ~700 PCs logging 10 unique arrays of 10 doubles into 
> Equipment variables in the ODB seems to perform well... All transactions 
> from client PCs are finished within a couple of ms or less
> ==========================================================================
> Questions:
> Does the community here have strong opinions about increasing the 
> MAX_CLIENTS and MAX_RPC_CONNECTION limits? 
> Am I looking at this problem in a naive way?
> 
> Potential solutions other than increasing the MAX_CLIENTS limit:
> -Make worker threads inside the supervisor (not a separate process); I am 
> using TMFE, so I can dynamically create equipment. I have not yet taken a 
> deep dive into how any multithreading is implemented.
> -One could have a round-robin system to load balance between a limited pool 
> of 'worker frontend' processes. I don't like this solution as I want to be 
> able to clearly see which client PCs have been set up to log too much data.
> ==========================================================================
  1894   02 May 2020   Stefan Ritt   Forum   Taking MIDAS beyond 64 clients
TRIUMF stayed quiet, probably they have other things to do.

I allowed myself to move the maximum number of clients back to its original value, in order not to break running experiments. 
This does not mean that the increase is a bad idea; we just have to be careful. Let's discuss it 
more thoroughly here before we make a decision in that direction.

Best regards,
Stefan
  1896   02 May 2020   Stefan Ritt   Forum   Taking MIDAS beyond 64 clients
> Perhaps an item for future discussion would be for the odbinit program to be able to 'upgrade' the ODB and enable some backwards 
> compatibility.

We had this discussion already a few times. There is an ODB version number (DATABASE_VERSION 3 in midas.h) which is intended for that. If we break the 
binary compatibility, programs should complain "ODB version has changed, please run ...", and odbinit (written by KO) should have a well-defined 
procedure to upgrade existing ODBs by re-creating them while keeping all the old contents. This should be tested on a few systems.

Stefan
  1903   03 May 2020   Stefan Ritt   Forum   API to read MIDAS format file
> PS Some time ago you or Stefan (I don't remember who) recommended CLion as a C++ IDE. I have tried it 
> (together with PyCharm) and I must admit that it is really good. It took me years to configure Emacs 
> as an IDE, while it took me minutes to get much better results in CLion. Thank you very much for 
> your recommendation.

It was probably me. I use it as my standard IDE and am quite happy with it. All the things KO likes about emacs, plus much 
more. Especially the CMake integration is nice, since you don't have to leave the IDE for editing, compiling and debugging. 
The tooltips the IDE has given me over the past months made me write much better code. So quite the opposite opinion compared 
with KO's, but luckily this planet has space for all kinds of opinions. I made myself the attached cheat sheet, which lets me 
do things much faster. Maybe you can use it.

Stefan
  1906   12 May 2020   Stefan Ritt   Info   New ODB++ API
Since the beginning of the lockdown I have been working hard on a new object-oriented interface to the online database ODB. I now have the code in an initial state where it is ready for 
testing and commenting. The basic idea is that there is an object midas::odb which represents a value or a sub-tree in the ODB. Reading, writing and watching are done through this 
object. To get started, the new API has to be included with

   #include <odbxx.hxx>

To create ODB values under a certain sub-directory, you can either create one key at a time like:

   midas::odb o;
   o.connect("/Test/Settings", true);   // this creates /Test/Settings
   o.set_auto_create(true);            // this turns on auto-creation
   o["Int32 Key"] = 1;                 // create all these keys with different types
   o["Double Key"] = 1.23;
   o["String Key"] = "Hello";

or you can create a whole sub-tree at once like:

  midas::odb o = {
    {"Int32 Key", 1},
    {"Double Key", 1.23},
    {"String Key", "Hello"},
    {"Subdir", {
      {"Another value", 1.2f}
    }}
  };
  o.connect("/Test/Settings");

To read and write to the ODB, just read and write to the odb object

   int i = o["Int32 Key"];
   o["Int32 Key"] = 42;
   std::cout << o << std::endl;

This works with basic types, strings, std::array and std::vector. Each read access to this object triggers an underlying read from the ODB, and each write access triggers a write to the 
ODB. To watch a value for changes in the ODB (replacing the old db_watch() function), you can now use C++ lambdas like:

   o.watch([](midas::odb &o) {
      std::cout << "Value of key \"" + o.get_full_path() + "\" changed to " << o << std::endl;
   });
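
Array-valued keys work the same way; a short sketch of my own (the key name "Int Array" is made up):

   std::vector<int> v = {1, 2, 3};
   o["Int Array"] = v;          // creates/overwrites an ODB array
   v = o["Int Array"];          // reads the whole array back
   o["Int Array"][1] = 10;      // single-element access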

Attached is a full running example, which is now also part of the midas repository. I have tested most things, but would not yet use it in a production environment. I am not 100% sure if there 
are any memory leaks. If someone could valgrind the test program, I would appreciate it (valgrind currently does not work on my Mac).

Have fun!

Stefan

  1908   13 May 2020   Stefan Ritt   Forum   List of sequencer files
If you load a file into the sequencer from the web interface, you get a list of all files in that directory. 
This basically gives you a list of possible sequencer files. It's even more powerful, since you can 
create subdirectories and thus group the sequencer files. Attached is an example from our 
experiment.

Stefan
  1914   20 May 2020   Stefan Ritt   Info   New ODB++ API
In the meantime, there have been minor changes and improvements to the API:

Previously, we had:

>    midas::odb o;
>    o.connect("/Test/Settings", true);   // this creates /Test/Settings
>    o.set_auto_create(true);            // this turns on auto-creation
>    o["Int32 Key"] = 1;                 // create all these keys with different types
>    o["Double Key"] = 1.23;
>    o["String Key"] = "Hello";

Now, we only need:

      o.connect("/Test/Settings");
      o["Int32 Key"] = 1;                 // create all these keys with different types
      ...

no "true" needed any more. If the ODB tree does not exist, it gets created. Similarly, set_auto_create() can be dropped, it's on by default (thought this makes more sense). Also the iteration over subkeys has 
been changed slightly.

The full example attached has been updated accordingly. 

Best,
Stefan
  1922   28 May 2020   Stefan Ritt   Suggestion   ODB++ API - documentation updates and odb view after key creation
> 2. When I create an ODB structure with the new API I do for example:
> 
>     midas::odb stream_settings = {
>             {"Test_odb_api", {
>                                       {"Divider", 1000},     // int
>                                       {"Enable", false},     // bool
>                               }},
>     };
>     stream_settings.connect("/Equipment/Test/Settings", true);
> 
> and with 
> 
> midas::odb datagen("/Equipment/Test/Settings/Test_odb_api");
> std::cout << "Datagenerator Enable is " << datagen["Enable"] << std::endl;
> 
> I am getting back false, which looks nice, but when I look into the ODB via the browser the value is actually "y", meaning true, which is strange. I added my frontend where I cleaned out all functions, leaving only frontend_init(), where I create this key. It's a CUDA program, but since I cleaned everything, no CUDA function is called anymore.

I cannot confirm this behaviour. I just put the following code in a standalone program:

   cm_connect_experiment(NULL, NULL, "test", NULL);
   midas::odb::set_debug(true);

   midas::odb stream_settings = {
           {"Test_odb_api", {
               {"Divider", 1000},     // int
               {"Enable", false},     // bool
           }},
   };
   stream_settings.connect("/Equipment/Test/Settings", true);

   midas::odb datagen("/Equipment/Test/Settings/Test_odb_api");
   std::cout << "Datagenerator Enable is " << datagen["Enable"] << std::endl;

and ran it. The result is:

...
Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
Datagenerator Enable is Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
false

Looking in the ODB, I also see 

[local:Online:S]/>cd Equipment/Test/Settings/Test_odb_api/
[local:Online:S]Test_odb_api>ls
Divider                         1000
Enable                          n
[local:Online:S]Test_odb_api>


So I am not sure what is different in your case. Are you looking at the same ODB? Maybe you have one remote and one local? 
Note that the "true" flag in stream_settings.connect(..., true); forces all default values into the ODB, 
so if the ODB value is "y", it will be changed to "n".

Best,
Stefan
  1924   30 May 2020   Stefan Ritt   Suggestion   ODB++ API - documentation updates and odb view after key creation
Marius, has the problem been fixed in the meantime?

Stefan

> I am getting back false, which looks nice, but when I look into the ODB via the browser the value is actually "y", meaning true, which is strange. 
> I added my frontend where I cleaned out all functions, leaving only frontend_init(), where I create this key. It's a CUDA program, but since 
> I cleaned everything, no CUDA function is called anymore.
  1932   04 Jun 2020   Stefan Ritt   Forum   Template of slow control frontend
> I'm a beginner with Midas and am trying to develop a slow control front-end with the latest Midas.
> I found scfe.cxx in the "examples" directory, but it is not enough of a reference for writing a front-end for my own devices, 
> because it contains only the nulldevice and null bus driver case...
> (I was able to run the HV front-end for the ISEG MPod, because the device driver for it exists...)
> 
> Can I get some frontend examples for simple TCP/IP and/or RS232 devices?
> Hopefully, I would like to have examples of a frontend and a device driver.
> (if any device driver included in the package is similar, please tell me.)

Have you checked the documentation?

https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System

Basically you have to replace the nulldevice driver with a "real" driver. You can find all existing drivers under 
midas/drivers/device. If your favourite device is not there, you have to write the driver yourself. Use one which is 
close to what you need and modify it.
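
As a starting point, here is a bare skeleton of a device driver function, loosely modelled on nulldev. The name mydev and the hardware comments are placeholders; copy the exact CMD_INIT argument handling from nulldev.cxx and fill in only the commands you actually need.

#include <cstdarg>
#include "midas.h"

INT mydev(INT cmd, ...)
{
   va_list argptr;
   INT status = FE_SUCCESS;

   va_start(argptr, cmd);
   switch (cmd) {
   case CMD_INIT:
      // arguments: ODB key, info pointer, number of channels, ... (see nulldev.cxx);
      // open your TCP/IP socket or RS232 port here and store it in the info structure
      break;
   case CMD_EXIT:
      // close the connection
      break;
   case CMD_SET: {
      void *info    = va_arg(argptr, void *);    // per-device info created in CMD_INIT
      INT   channel = va_arg(argptr, INT);
      float value   = (float) va_arg(argptr, double);
      (void) info; (void) channel; (void) value;
      // send the set-point for "channel" to the hardware
      break;
   }
   case CMD_GET: {
      void  *info    = va_arg(argptr, void *);
      INT    channel = va_arg(argptr, INT);
      float *pvalue  = va_arg(argptr, float *);
      (void) info; (void) channel;
      *pvalue = 0.f;                             // replace with the value read back
      break;
   }
   default:
      break;
   }
   va_end(argptr);
   return status;
}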

Best,
Stefan
  1939   05 Jun 2020   Stefan Ritt   Suggestion   ODB++ API - documentation updates and odb view after key creation
Hi Marius,

your fix is good. Thanks for digging out this deep-lying issue, which would have haunted us if we had not fixed it. 
The problem is that in midas the "BOOL" type is 4 bytes long, actually modelled after MS Windows. Now I realized
that in C++ the "bool" type is only 1 byte wide. So if we do the memcpy from a "c++ bool" to a "MIDAS BOOL", we
always copy four bytes, meaning that we copy three bytes beyond the one-byte value of the c++ bool. So your fix
is absolutely correct, and I added it in one more place where we deal with bool arrays and need the same.

What I don't understand however is the fact why this fails for you. The ODB values are stored in the C union under

union {
  ...
  bool m_bool;
  double m_double;
  std::string *m_string;
  ...
}

Now the C compiler puts all values at the lowest address, so m_bool is at offset zero, and the string pointer reaches
over all eight bytes (we are on a 64-bit OS).

Now when I initialize this union in odbxx.h:66, I zero the string pointer which is the widest object:

  u_odb() : m_string{} {};

which (at least on my Mac) sets all eight bytes to zero. If I then use the wrong code to set the bool value to the ODB 
in odbxx.cxx:756, I do 

  db_set_data_index(... &u, rpc_tid_size(m_tid), ...);

so it copies four bytes (= rpc_tid_size(TID_BOOL)) to the ODB. The first byte should be the c++ bool value (0 or 1),
and the other three bytes should be zero from the initialization above. Apparently on your system this is not
the case, and I would like you to double-check it. Maybe there is another underlying problem which I don't understand
at the moment but which we had better fix.
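
To see the effect in isolation, here is a small standalone illustration (this is not MIDAS code, just a demonstration of copying rpc_tid_size(TID_BOOL) = 4 bytes out of a union whose active member is a 1-byte c++ bool):

#include <cstdint>
#include <cstdio>
#include <cstring>

typedef uint32_t BOOL;                   // MIDAS BOOL: 4 bytes

int main()
{
   union { bool m_bool; double m_double; const char *m_string; } u{};
   u.m_bool = true;

   BOOL b;
   std::memcpy(&b, &u, sizeof(BOOL));    // copies 3 bytes beyond the c++ bool
   std::printf("BOOL = 0x%08x\n", (unsigned) b);   // 0x00000001 only if those extra bytes are really zero
   return 0;
}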

Otherwise the change is committed and your code should work. But we should not stop here! I really want to understand
why this is not working for you; maybe I am missing something.

Best,
Stefan

> Hi Stefan,
> 
> your test program only worked for me after I changed the following lines inside odbxx.cxx
> 
> diff --git a/src/odbxx.cxx b/src/odbxx.cxx
> index 24b5a135..48edfd15 100644
> --- a/src/odbxx.cxx
> +++ b/src/odbxx.cxx
> @@ -753,7 +753,12 @@ namespace midas {
>           }
>        } else {
>           u_odb u = m_data[index];
> -         status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
> +         if (m_tid == TID_BOOL) {
> +             BOOL ss = bool(u);
> +             status = db_set_data_index(m_hDB, m_hKey, &ss, rpc_tid_size(m_tid), index, m_tid);
> +         } else {
> +             status = db_set_data_index(m_hDB, m_hKey, &u, rpc_tid_size(m_tid), index, m_tid);
> +         }
>           if (m_debug) {
>              std::string s;
>              u.get(s);
> 
> Likely not the best fix, but otherwise I always got the following after running the test program:
> 
> [ODBEdit,INFO] Program ODBEdit on host localhost started
> [local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
> key not found
> makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % test_connect
> Created ODB key /Equipment/Test/Settings
> Created ODB key /Equipment/Test/Settings/Test_odb_api
> Created ODB key /Equipment/Test/Settings/Test_odb_api/Divider
> Set ODB key "/Equipment/Test/Settings/Test_odb_api/Divider" = 1000
> Created ODB key /Equipment/Test/Settings/Test_odb_api/Enable
> Set ODB key "/Equipment/Test/Settings/Test_odb_api/Enable" = false
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api"
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Divider"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Divider": 1000
> Get definition for ODB key "/Equipment/Test/Settings/Test_odb_api/Enable"
> Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> Datagenerator Enable is Get ODB key "/Equipment/Test/Settings/Test_odb_api/Enable": false
> false
> makoeppe@office ~/mu3e/online/online (git)-[odb++_api] % odbedit
> [ODBEdit,INFO] Program ODBEdit on host localhost started
> [local:Default:S]/>cd Equipment/Test/Settings/Test_odb_api/
> [local:Default:S]Test_odb_api>ls
> Divider                         1000
> Enable                          y 
> 
> > > I am getting back false, which looks nice, but when I look into the ODB via the browser the value is actually "y", meaning true, which is strange. 
> > > I added my frontend where I cleaned out all functions, leaving only frontend_init(), where I create this key. It's a CUDA program, but since 
> > > I cleaned everything, no CUDA function is called anymore.
  1945   10 Jun 2020   Stefan Ritt   Forum   slow-control equipment crashes when running multi-threaded on a remote machine
Few comments:

- As KO wrote, we might need semaphores also on a remote front-end, in case several programs share the same hardware. So it should work, and cm_get_path() should not just exit.

- When I wrote the multi-threaded device drivers, I used semaphores instead of mutexes, but I forgot why. It might be that midas semaphores have a timeout and mutexes do not, or 
something along those lines.

- I do need either semaphores or mutexes, since in a multi-threaded slow-control front-end (too many dashes...) several threads have to access an internal data exchange buffer, which 
needs protection in multi-threaded environments.

So we can now either fix cm_get_path() or replace all semaphores with mutexes in midas/src/device_driver.cxx. I have kind of a feeling that we should do both. And what about 
switching to C++ std::mutex instead of pthread mutexes?
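
For the last point, the std::mutex variant would look something like this (just a sketch; the buffer and function names are made up):

#include <mutex>
#include <vector>

static std::mutex         exchange_mutex;    // protects the internal exchange buffer
static std::vector<float> exchange_buffer;   // shared between device threads

void buffer_write(int channel, float value)
{
   std::lock_guard<std::mutex> lock(exchange_mutex);   // released automatically on return
   if (channel >= (int) exchange_buffer.size())
      exchange_buffer.resize(channel + 1);
   exchange_buffer[channel] = value;
}

float buffer_read(int channel)
{
   std::lock_guard<std::mutex> lock(exchange_mutex);
   return channel < (int) exchange_buffer.size() ? exchange_buffer[channel] : 0.f;
}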

Stefan
  1949   15 Jun 2020   Stefan Ritt   Bug Report   deprecated function stime()
The function stime() was replaced by clock_settime() in Feb. 2020:

https://bitbucket.org/tmidas/midas/commits/c732120e7c68bbcdbbc6236c1fe894c401d9bbbd

Please always pull before submitting bug reports.

Best,
Stefan
  1954   18 Jun 2020   Stefan Ritt   Forum   ODB key length
No. But if you need more than 32 characters, you are doing something wrong. The 
information you want to put into the ODB key name should probably be stored in 
a separate string key instead.
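
For example, with the ODB++ API discussed elsewhere in this log, the long text simply becomes a string value under a short key name (the key names here are made up):

   midas::odb o;
   o.connect("/Equipment/MyDevice/Settings");
   o["Description"] = "a descriptive name far longer than the 32-character key name limit";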

Stefan


> Hello,
> 
> I have a question about length of the name of ODB key.
> Is it possible to create an ODB key containing more than 32 characters?
> 
> Thanks.
> Ruslan
  1956   23 Jun 2020   Stefan Ritt   Suggestion   ODB++ API - documentation updates and odb view after key creation
Hi Marius,

thanks for your help; you identified the problematic location. I changed it to

   u_odb(bool v) : m_tid{TID_BOOL}, m_parent_odb{nullptr} {m_string = nullptr; m_bool = v;};

which should initialize the full 8 bytes of the u_odb union. I committed it to develop. Can you 
please give it a try?

Best,
Stefan


> and looking at 
> 
> odbxx.h:
>         u_odb(bool v) : m_bool{v}, m_tid{TID_BOOL}, m_parent_odb{nullptr} {};   
> 
> only m_bool is set for this instance, meaning that only the first byte gets a value 
> (bool being only 1 byte in c++). If I check m_string inside the u_odb::get function
> of this instance, I am getting for a bool (I set false) something like 0x7f6633f67a00, and for an int 
> (I set the int to 1000) 0x7f66000003e8. Since the size of BOOL is larger, I am getting the 
> wrong value. I checked this also on openSUSE and saw the same behavior.
  1958   24 Jun 2020   Stefan Ritt   Info   New image history system available
I'm happy to report that the Corona lockdown in Europe also had some positive side 
effects: I finally found time to implement an image history system in midas, 
something I had wanted to do for many years but never found the time for.

The idea is that you can incorporate any network-connected webcam into the midas 
history system. You specify an update interval (like one minute) and the logger 
regularly fetches images from that webcam. The images are stored as raw files in 
the midas history directory and can be retrieved via the web browser, similarly to 
the "normal" history. Attached is an image from the MEG experiment at PSI to give 
you some idea.

The cool thing now is that you can go "backwards" in time and browse all stored 
images. The buttons at each image allow you to step backward, forward, and play a 
movie of images, forward or backward. You can query for a certain date/time and 
download a specific image to your local disk. You can even synchronize all time 
axes, drag left and right on each image to see your experiment from different 
cameras at the same time stamps. You see a blue ribbon below each image which shows 
time stamps for which an image is available. 

Initially, only the most recent image is loaded to speed up loading time. As soon 
as you click on the image or on one of the arrow buttons, previous images are loaded 
progressively, which you can see in the ribbon bar becoming blue. For slow internet 
connections this can take some time. For typical webcams and a one-minute update 
period, you typically get a few GB of images per week.

To make this happen, you define a new ODB subtree 

/History/Images/<name>/
  Name:           Name of the camera
  Enabled:        Boolean to enable readout of the camera
  URL:            URL to fetch an image from the camera
  Period:         Time period in seconds to fetch a new image
  Storage hours:  Number of hours to store the images (0 for infinite)
  Extension:      Image file extension, usually ".jpg" or ".png"
  Timescale:      Initial horizontal time scale (like 8h)
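
If you prefer to create this subtree from code rather than by hand, here is a sketch using the ODB++ API described earlier in this log. The camera name, the URL and the exact key types are my assumptions; check them against what the logger expects.

   midas::odb cam = {
      {"Name", "Hall camera"},
      {"Enabled", true},
      {"URL", "http://mycam/axis-cgi/jpg/image.cgi"},
      {"Period", 60},            // seconds between fetches
      {"Storage hours", 0},      // 0 = keep forever
      {"Extension", ".jpg"},
      {"Timescale", "8h"}
   };
   cam.connect("/History/Images/Hall camera");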

The tricky part is to obtain the URL for your camera. For some cameras you can get 
it from the manual; for others you have to "hack" it: display an image in your browser 
using the camera's internal web interface, inspect the source code of the page, 
and you get the URL. For the AXIS cameras I use, the URL is typically

http://<name>/axis-cgi/jpg/image.cgi

For the Netatmo cameras I have at home (which I used during development in my home 
office), the procedure is more complicated, but you can google it. The logger is 
now linked against the CURL library to fetch images, so it also supports https://. 
If libcurl is not installed on your system, the image history functionality will be 
disabled.

I have tested the system for a few days now and it seems stable, which however does not 
mean that it is bug-free. So please report back any issues. The change is committed 
to the current develop branch.

I hope this extension helps all those people who are forced to do more remote 
monitoring of their experiments during these times.

Best,
Stefan