ID   Date   Author   Topic   Subject
  2144   09 Apr 2021   Lars Martin   Suggestion   Time zone selection for web page
The new history as well as the clock in the web page header show the local time 
of the user's computer running the browser.
Would it be possible to make it either always use the time zone of the Midas 
server, or make it selectable from the config page?
It's not ideal trying to relate error messages from the midas.log to history 
plots if the time stamps don't match.
  2152   14 Apr 2021   Stefan Ritt   Suggestion   Time zone selection for web page
> The new history as well as the clock in the web page header show the local time 
> of the user's computer running the browser.
> Would it be possible to make it either always use the time zone of the Midas 
> server, or make it selectable from the config page?
> It's not ideal trying to relate error messages from the midas.log to history 
> plots if the time stamps don't match.

I implemented a new row in the config page to select the time zone. 

"Local": Time zone where the browser runs
"Server": Time zone where the midas server runs (you have to update mhttpd for that)
"UTC+X": Any other time zone

The setting affects both the status header and the history display.

I spent quite some time with "named" time zones like "PST" "EST" "CEST", but the 
support for that is not that great in JavaScript, so I decided to go with simple 
UTC+X. Hope that's ok.
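
The idea boils down to shifting the UNIX time stamp by the selected offset and
then formatting it as if it were UTC. Schematically (a sketch of the idea only,
not the actual mhttpd code, which lives in JavaScript):

   #include <ctime>
   #include <cstdio>

   // Format a UNIX time stamp at a fixed UTC+X offset (in hours),
   // mirroring the "UTC+X" option on the config page.
   void print_at_utc_offset(time_t t, int offset_hours)
   {
      t += offset_hours * 3600;       // shift into the requested zone
      struct tm tm_shifted;
      gmtime_r(&t, &tm_shifted);      // then format as if it were UTC
      char buf[32];
      strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tm_shifted);
      printf("%s UTC%+d\n", buf, offset_hours);
   }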

Please give it a try and let me know if it's working for you.

Best,
Stefan
Attachment 1: Screenshot_2021-04-14_at_16.54.12_.png
  2157   29 Apr 2021   Pierre-Andre Amaudruz   Suggestion   Time zone selection for web page
> > The new history as well as the clock in the web page header show the local time 
> > of the user's computer running the browser.
> > Would it be possible to make it either always use the time zone of the Midas 
> > server, or make it selectable from the config page?
> > It's not ideal trying to relate error messages from the midas.log to history 
> > plots if the time stamps don't match.
> 
> I implemented a new row in the config page to select the time zone. 
> 
> "Local": Time zone where the browser runs
> "Server": Time zone where the midas server runs (you have to update mhttpd for that)
> "UTC+X": Any other time zone
> 
> The setting affects both the status header and the history display.
> 
> I spent quite some time with "named" time zones like "PST" "EST" "CEST", but the 
> support for that is not that great in JavaScript, so I decided to go with simple 
> UTC+X. Hope that's ok.
> 
> Please give it a try and let me know if it's working for you.
> 
> Best,
> Stefan

Hi Stefan,

This is great, the UTC+x is perfect, thank you.
PAA
  2132   23 Mar 2021   Lars Martin   Bug Report   Time shift in history CSV export
Version: release/midas-2020-12

I'm exporting the history data shown in elog:2132/1 to CSV, but when I look at the 
CSV data, the step no longer occurs at the same time in both data sets (elog:2132/2).
Attachment 1: Cooling-MoxaCalib-20212118-190450-20212119-102151.png
Attachment 2: Screenshot_from_2021-03-23_12-29-21.png
  2133   23 Mar 2021   Lars Martin   Bug Report   Time shift in history CSV export
History is from two separate equipments/frontends, but both have "Log history" set to 1.
  2134   23 Mar 2021   Lars Martin   Bug Report   Time shift in history CSV export
Tried with export of two different time ranges, and the shift appears to remain the same, 
about 4040 rows.
  2135   24 Mar 2021   Stefan Ritt   Bug Report   Time shift in history CSV export
I confirm there is a problem. If variables are from the same equipment, they have the same 
time stamps, like

t1 v1(t1) v2(t1)
t2 v1(t2) v2(t2)
t3 v1(t3) v2(t3)

when they are from different equipments, however, they have different time stamps

t1 v1(t1)
t2    v2(t2)
t3 v1(t3)
t4    v2(t4)

The bug in the current code is that all variables use the time stamps of the first variable, 
which is wrong in the case of different equipments, like

t1 v1(t1) v2(*t2*)
t3 v1(t3) v2(*t4*)

So I can change the code, but I'm not sure what would be the best way. The easiest would be to 
export one array per variable, like

t1 v1(t1)
t2 v1(t2)
...
t3 v2(t3)
t4 v2(t4)
...

Putting that into a single array would leave gaps, like

t1 v1(t1) [gap]
t2 [gap]  v2(t2)
t3 v1(t3) [gap]
t4 [gap]  v2(t4)

plus this is programmatically more complicated, since I have to merge two arrays. So which 
export format would you prefer?
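
For illustration, the gap format would need a merge of two independently
time-stamped arrays, roughly like this (a sketch only, not the actual export
code; "Point" is a made-up helper type):

   #include <cstdio>
   #include <vector>

   struct Point { long t; double v; };   // one (time stamp, value) sample

   // Merge two independently sampled series into CSV rows "t,v1,v2",
   // leaving a column empty where a variable has no sample at that time.
   void merge_csv(const std::vector<Point> &a, const std::vector<Point> &b)
   {
      size_t i = 0, j = 0;
      while (i < a.size() || j < b.size()) {
         bool take_a = j >= b.size() || (i < a.size() && a[i].t <= b[j].t);
         if (take_a && i < a.size() && j < b.size() && a[i].t == b[j].t) {
            printf("%ld,%g,%g\n", a[i].t, a[i].v, b[j].v);  // same time stamp
            i++; j++;
         } else if (take_a) {
            printf("%ld,%g,\n", a[i].t, a[i].v);            // gap in v2
            i++;
         } else {
            printf("%ld,,%g\n", b[j].t, b[j].v);            // gap in v1
            j++;
         }
      }
   }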

Stefan
  2136   24 Mar 2021   Lars Martin   Bug Report   Time shift in history CSV export
I think from my perspective the separate files are fine. I personally don't really like the format 
with the gaps, so I don't see an advantage in putting in the extra work.
I'm surprised the shift is this big, though; it was more than a whole hour in my case. Is it the 
time difference between when the frontends were started?
  2154   14 Apr 2021   Stefan Ritt   Bug Report   Time shift in history CSV export
I finally found some time to fix this issue in the latest commit. Please update and check if it's 
working for you.

Stefan
  594   15 Jun 2009   Jimmy Ngai   Forum   Time limit of each run
Dear All,

Can one set a time limit for each run? I can only find an event limit in the ODB. 
Thanks.

Jimmy
  615   04 Aug 2009   Exaos Lee   Forum   The contents of the attachment
As requested by K.O., I paste the "00README.txt" below:
#-*- mode: outline -*-
#-*- encoding: utf-8 -*-
#AUTHOR: Exaos Lee <Exaos DOT Lee AT gmail DOT com>

* Directories
  +--> 00README.txt : This file
  |
  +--> bustester : Directory contains utilities for VME bus testing
  |
  +--> modules   : APIs to handle VME modules
  |
  +--> pyutil    : Utilities in Python, including PyMVME
  |
  +--> sis3100   : Provides lib_sis3100mvme.a/.so for use with "mvmestd.h"

* Utilities in Python

** PyMVME module

   The module "PyMVME" provides the following stuff:
      a. class StdVME
      	 -- contains standard VME information.
      b. class MVME_INTERFACE
      	 -- the C structure MVME_INTERFACE wrapped in Python
      c. dict MVME_STATUS
      	 -- the return information defined in "mvmestd.h"
      d. the related useful aliases from "mvmestd.h"
      	 -- including "mvme_addr_t", "mvme_locaddr_t", "mvme_size_t"
      e. class MvmeDev
      	 -- the major class which provides methods to access VME bus.

   You may find examples of how to use the module "PyMVME" in "find_caen.py" or
   the scripts in dir "test". All of the examples use "lib_sis3100mvme.so".
   More information can be found later in this introduction.

** find_caen.py

   The script to find VME modules from CAEN. It is still being tested and
   can only find ADCs, TDCs or QDCs.

* SIS3100 library to be used together with "mvmestd.h"

  The directory "sis3100" contains sources to build libraries as the following:
  a. lib_sis3100.a     -- APIs declared in "sis3100_vme_calls.h"
  b. lib_sis3100mvme.a -- APIs declared in "mvmestd.h". It also contains the
     		       	  same APIs from lib_sis3100.a

  If you want to use shared libraries, especially when you are using utilities
  written in Python, you may rebuild the libraries as follows:

    $ cd sis3100
    $ make shared
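
  A minimal usage sketch of the "mvmestd.h" API provided by lib_sis3100mvme
  (untested; the base address 0x32100000 is a placeholder):

    #include <stdio.h>
    #include "mvmestd.h"

    int main(void)
    {
       MVME_INTERFACE *vme;

       mvme_open(&vme, 0);                  /* open interface 0    */
       mvme_set_am(vme, MVME_AM_A32);       /* A32 addressing      */
       mvme_set_dmode(vme, MVME_DMODE_D32); /* 32-bit data width   */

       unsigned int value = mvme_read_value(vme, 0x32100000);
       printf("read 0x%08x\n", value);

       mvme_close(vme);
       return 0;
    }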

* APIs to handle VME modules

** vadc_caen.h/c

   Provides APIs to handle ADC-type modules from CAEN, including:
      a. ADCs --- V785, V785N
      b. TDCs --- V775, V775N
      c. QDCs --- V792, V792N

* VME bus testers

  Still under development.


  1930   04 Jun 2020   Hisataka YOSHIDA   Forum   Template of slow control frontend
I'm a beginner with Midas, trying to develop a slow control front-end with the latest Midas.
I found scfe.cxx in the "examples" directory, but it is not enough of a reference for writing a front-end for my own devices 
because it only covers the nulldevice and null bus driver case...
(I did manage to run the HV front-end for the ISEG MPod, because that device driver already exists...)

Can I get some frontend examples for simple TCP/IP and/or RS-232 devices?
Ideally, I would like to have examples of both a frontend and a device driver.
(If any device driver included in the package is similar, please tell me.)

Thanks a lot.
  1931   04 Jun 2020   Pintaudi Giorgio   Forum   Template of slow control frontend
> I'm a beginner with Midas, trying to develop a slow control front-end with the latest Midas.
> I found scfe.cxx in the "examples" directory, but it is not enough of a reference for writing a front-end for my own devices 
> because it only covers the nulldevice and null bus driver case...
> (I did manage to run the HV front-end for the ISEG MPod, because that device driver already exists...)
> 
> Can I get some frontend examples for simple TCP/IP and/or RS-232 devices?
> Ideally, I would like to have examples of both a frontend and a device driver.
> (If any device driver included in the package is similar, please tell me.)
> 
> Thanks a lot.

Dear Yoshida-san,
my name is Giorgio and I am a Ph.D. student working on the T2K experiment.
I had to write many MIDAS frontends recently, so I think that my code could be of some help to you.

As you might already know, the MIDAS slow control system is structured into three layers/levels.

 - The highest layer is the "class" layer that directly interfaces with the user and the ODB. It is called 
"class" layer because it refers to a class of devices (for example all the high voltage power supplies, 
etc...). The idea is that in the same experiment you can have many different models of power supplies but 
all of them can be controlled with a single class driver.

 - Then there is the "device" layer that implements the functions specific to the particular device.

 - Finally, there is the "BUS" layer that directly communicates with the device. The BUS can be Ethernet 
(TCP/IP), Serial (RS-232 / RS-422 / RS-485), USB, etc ...

You can read more about the MIDAS slow control system here:
https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System

Anyway, you need to write code for all those layers. If you are lucky you can reuse some of the already 
existing MIDAS code. Keep in mind that all the examples that you find in the MIDAS documentation and the 
MIDAS source code are written in C (even if it is then compiled with g++). But you can write a frontend in 
C++ without any problem, so choose whichever language you are most familiar with.
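
Just to make the structure concrete, here is roughly how the three layers plug
together in a frontend (following the pattern of midas/examples/slowcont/scfe.cxx;
the null device and bus drivers below are placeholders for a real device):

   #include "midas.h"
   #include "class/hv.h"        // class layer
   #include "device/nulldev.h"  // device layer (placeholder)
   #include "bus/null.h"        // bus layer (placeholder)

   DEVICE_DRIVER hv_driver[] = {
      {"Dummy Device", nulldev, 16, null},   // device + bus driver, 16 channels
      {""}
   };

   EQUIPMENT equipment[] = {
      {"HV",                          // equipment name in the ODB
       {3, 0,                         // event ID, trigger mask
        "SYSTEM",                     // event buffer
        EQ_SLOW,                      // slow control equipment
        0, "FIXED", TRUE,
        RO_RUNNING | RO_TRANSITIONS,  // read when running and on transitions
        60000,                        // read every 60 s
        0, 0, 1, "", "", ""},
       cd_hv_read,                    // class layer: readout routine
       cd_hv,                         // class layer: main routine
       hv_driver},                    // device driver list defined above
      {""}
   };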

I am attaching an archive with some sample code taken directly from our experiment. It is just a small 
fraction of the code and is not meant to be compilable. The code is released under the GPL3 license, so you can use 
it as you please, but if you do, please cite my name and the WAGASCI-T2K experiment somewhere visible.

In the archive, you can find two example frontends with the respective drivers. The "Triggers" frontend is 
written in C++ (or C+ if you consider that the mfe.cxx API is very C-like). The "WaterLevel" frontend is 
written in plain C. The "Triggers" frontend controls our trigger board called CCC and the "WaterLevel" 
frontend controls our water level sensors called PicoLog 1012. They share a custom implementation of the 
TCP/IP bus. Anyway, this is not relevant to you. You may just want to take a look at the code structure.

Finally, recently there have been some very interesting developments regarding the ODB C++ API. I would 
definitely take a look at that. I wish I had that when I was developing these frontends.

Good luck

--
Pintaudi Giorgio, Ph.D. student
Neutrino and Particle Physics Minamino Laboratory
Faculty of Science and Engineering, Yokohama National University
giorgio.pintaudi.kx@ynu.jp
TEL +81(0)45-339-4182
Attachment 1: MIDAS_frontend_sample.zip
  1932   04 Jun 2020   Stefan Ritt   Forum   Template of slow control frontend
> I'm a beginner with Midas, trying to develop a slow control front-end with the latest Midas.
> I found scfe.cxx in the "examples" directory, but it is not enough of a reference for writing a front-end for my own devices 
> because it only covers the nulldevice and null bus driver case...
> (I did manage to run the HV front-end for the ISEG MPod, because that device driver already exists...)
> 
> Can I get some frontend examples for simple TCP/IP and/or RS-232 devices?
> Ideally, I would like to have examples of both a frontend and a device driver.
> (If any device driver included in the package is similar, please tell me.)

Have you checked the documentation?

https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System

Basically you have to replace the nulldevice driver with a "real" driver. You find all existing drivers under 
midas/drivers/device. If your favourite is not there, you have to write it. Use one which is close to the one 
you need and modify it.
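
Schematically, the change is one line in the DEVICE_DRIVER table of scfe.cxx
(here "dd_mydev" is a placeholder for whichever driver you pick or write, and
tcpip is the TCP/IP bus driver from midas/drivers/bus):

   DEVICE_DRIVER hv_driver[] = {
   // {"Dummy Device", nulldev, 16, null},   // before: null device, null bus
      {"My Device", dd_mydev, 16, tcpip},    // after: real device over TCP/IP
      {""}
   };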

Best,
Stefan
  1933   04 Jun 2020   Hisataka YOSHIDA   Forum   Template of slow control frontend
Dear Stefan,

Thank you for your quick reply.


> Have you checked the documentation?
> 
> https://midas.triumf.ca/MidasWiki/index.php/Slow_Control_System

Yes, I have read the wiki, but it is not easy to figure out how to treat my individual case.

> Basically you have to replace the nulldevice driver with a "real" driver. You find all existing drivers under 
> midas/drivers/device. If your favourite is not there, you have to write it. Use one which is close to the one 
> you need and modify it.

Okay, I will try to write drivers for my own devices using existing drivers as a starting point.
(maybe I can find some device drivers which use TCP/IP or RS-232)

Best regards,
Hisataka Yoshida
  1934   04 Jun 2020   Hisataka YOSHIDA   Forum   Template of slow control frontend
Dear Giorgio,

Thank you very much for your kind and quick reply!

I appreciate you giving me such a nice explanation, your experience, and great sample code (this is what I wanted!).
They are all useful to me. I will try to write my frontend code using your gift.

Thank you again!

Best regards,
Hisataka Yoshida
  145   12 Jun 2003   Pierre-André Amaudruz   Tape handling
- remove ss_tape_get_blockn from lazylogger.c
- add ss_tape_get_blockn to system.c
- add ss_tape_get_blockn prototype into midas.h
- fix buffer size for "dir" in mtape.c
- add block# for "dir" in mtape if command successful.
- handle TID_STRUCT bank type by displaying as 8-bit in ybos.c (mdump)
  1891   01 May 2020   Joseph McKenna   Forum   Taking MIDAS beyond 64 clients

Hi all,

I have been experimenting with a frontend solution for my experiment 
(ALPHA). The intention is to replace how we log data from PCs running LabVIEW. 

I am at the proof of concept stage. So far I have some promising 
performance, able to handle 10-100x more data in my test setup (current 
limitations now are just network bandwidth; MIDAS is impressively efficient). 

==========================================================================

Our experiment has many PCs using LabVIEW which all log to MIDAS; the 
experiment has grown such that we need some sort of load balancing in our 
frontend.

The concept was to have a 'supervisor frontend' and an array of 'worker 
frontend' processes. 
-A LabVIEW client would connect to the supervisor, then be referred to a 
worker frontend for data logging. 
-The supervisor could start a 'worker frontend' process as the demand 
required.

To increase accountability within the experiment, I intend to have a 'worker 
frontend' per PC connecting. Then any rogue behavior would be clear from the 
MIDAS frontpage.

Presently there are around 20-30 of these LabVIEW PCs, but given how the group 
is growing, I want to be sure that my data logging solution will be viable 
for the next 5-10 years. With the increased use of single board computers, I 
chose the target of benchmarking up to 1000 worker frontends... but I quickly 
hit the '64 MAX CLIENTS' and '64 RPC CONNECTION' limit. Ok...

branching and updating these limits:
https://bitbucket.org/tmidas/midas/branch/experimental-beyond_64_clients

I have two commits. 
1. update the memory layout assertions and use MAX_CLIENTS as a variable
https://bitbucket.org/tmidas/midas/commits/302ce33c77860825730ce48849cb810cf
366df96?at=experimental-beyond_64_clients
2. Change the MAX_CLIENTS and MAX_RPC_CONNECTION
https://bitbucket.org/tmidas/midas/commits/f15642eea16102636b4a15c8411330969
6ce3df1?at=experimental-beyond_64_clients

Unintended side effects:
I break compatibility of existing ODB files... the database layout has 
changed and my old ODB is now read as corrupt. In my test setup I can start from 
scratch, but this would be horrible for any existing experiment.
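
I believe the incompatibility comes from the ODB shared-memory layout embedding
arrays sized by MAX_CLIENTS, so every offset behind them moves when the constant
changes. Schematically (names abbreviated, not the exact midas.h definitions):

   #define MAX_CLIENTS 64   // bump this and existing ODB files no longer map

   struct DatabaseClient { char name[32]; /* ... */ };

   struct DatabaseHeader {
      int            num_clients;
      DatabaseClient client[MAX_CLIENTS];   // sizeof() changes with the limit,
                                            // shifting everything that follows
   };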

Edit: I noticed the 'make testdiff' pipeline is failing... it also fails locally... 
investigating

Early performance results:
In early tests, ~700 PCs logging 10 unique arrays of 10 doubles into 
Equipment variables in the ODB seem to perform well... All transactions 
from client PCs finish within a couple of ms or less

==========================================================================

Questions:

Does the community here have strong opinions about increasing the 
MAX_CLIENTS and MAX_RPC_CONNECTION limits? 
Am I looking at this problem in a naive way?


Potential solutions other than increasing the MAX_CLIENTS limit:
-Make worker threads inside the supervisor (not a separate process), I am 
using TMFE, so I can dynamically create equipment. I have not yet taken a 
deep dive into how any multithreading is implemented
-One could have a round robin system to load balance between a limited pool 
of 'worker frontend' processes. I don't like this solution as I want to be 
able to clearly see which client PCs have been set up to log too much data

==========================================================================
  1892   01 May 2020   Stefan Ritt   Forum   Taking MIDAS beyond 64 clients
Hi Joseph,

here some thoughts from my side:

- Breaking ODB compatibility in the master/develop midas branch is very bad, since almost all experiments worldwide are affected if they just blindly do a pull and want to recompile and rerun. Currently, 
even during our Corona crisis, some experiments are still running and being monitored remotely.

- On the other hand, if we have to break compatibility, now is maybe a good time since most accelerators worldwide are off. But before doing so, I would like to get feedback from the main experiments 
around the world (MEG, T2K, g-2, DEAP besides ALPHA).

- Having a maximum of 64 clients was originally decided when memory was scarce. In the early days one had just a couple of megabytes of shared memory. Now this is not an issue any more, but I see 
another problem. The main status page gives a nice overview of the experiment. This only works because there is a limited number of midas clients and equipments. If we blow up to 1000+, the status 
page would be rather long and we would have to scroll up and down forever. In such a scenario one would at least have to redesign the status and program pages. To start your experiment, you would have to 
click 1000 times to start each front-end, which is also not very practicable.

- Having 100's or 1000's of front-ends rather calls for a hierarchical design, like the LHC experiments have. That would be a major change of midas and cannot be done quickly. It would also result in 
much slower run start/stops.

- If you see limitations with your LabVIEW PCs, have you considered multi-threading on your front-ends? Note that the standard midas slow control system supports multithreaded devices 
(DF_MULTITHREAD). In MEG, we use about 800 microcontrollers via the MSCB protocol. They are grouped together and each group is a multithreaded device in the midas slow control lingo, meaning the 
group gets its own thread for control and readout in the midas frontend. This way, one group cannot slow down all other groups. There is one front-end for all groups, which can be started/stopped with 
a single click; it shows up as just one line in the status page, and still it's pretty fast. Have you considered such a scheme? Your LabVIEW PCs would then not be individual front-ends, but would just make a 
network connection to the midas front-end, which then manages all LabVIEW PCs. The midas slow control system allows you to define custom commands (besides the usual read/write commands for slow 
control data), so you could maybe integrate all you need into that scheme.
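
Schematically, such a grouping looks like this in the device driver list (the
names are invented; DF_MULTITHREAD gives each entry its own thread):

   DEVICE_DRIVER lv_driver[] = {
      {"Group01", mscbdev, 64, NULL, DF_MULTITHREAD},   // one thread per group
      {"Group02", mscbdev, 64, NULL, DF_MULTITHREAD},
      // ... one entry per group of devices ...
      {""}
   };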

Best,
Stefan

> 
> Hi all,
> I have been experimenting with a frontend solution for my experiment 
> (ALPHA). The intention is to replace how we log data from PCs running LabVIEW. 
> I am at the proof of concept stage. So far I have some promising 
> performance, able to handle 10-100x more data in my test setup (current 
> limitations now are just network bandwidth; MIDAS is impressively efficient). 
> ==========================================================================
> Our experiment has many PCs using LabVIEW which all log to MIDAS, the 
> experiment has grown such that we need some sort of load balancing in our 
> frontend.
> The concept was to have a 'supervisor frontend' and an array of 'worker 
> frontend' processes. 
> -A LabVIEW client would connect to the supervisor, then be referred to a 
> worker frontend for data logging. 
> -The supervisor could start a 'worker frontend' process as the demand 
> required.
> To increase accountability within the experiment, I intend to have a 'worker 
> frontend' per PC connecting. Then any rogue behavior would be clear from the 
> MIDAS frontpage.
> Presently there are around 20-30 of these LabVIEW PCs, but given how the group 
> is growing, I want to be sure that my data logging solution will be viable 
> for the next 5-10 years. With the increased use of single board computers, I 
> chose the target of benchmarking up to 1000 worker frontends... but I quickly 
> hit the '64 MAX CLIENTS' and '64 RPC CONNECTION' limit. Ok...
> branching and updating these limits:
> https://bitbucket.org/tmidas/midas/branch/experimental-beyond_64_clients
> I have two commits. 
> 1. update the memory layout assertions and use MAX_CLIENTS as a variable
> https://bitbucket.org/tmidas/midas/commits/302ce33c77860825730ce48849cb810cf
> 366df96?at=experimental-beyond_64_clients
> 2. Change the MAX_CLIENTS and MAX_RPC_CONNECTION
> https://bitbucket.org/tmidas/midas/commits/f15642eea16102636b4a15c8411330969
> 6ce3df1?at=experimental-beyond_64_clients
> Unintended side effects:
> I break compatibility of existing ODB files... the database layout has 
> changed and my old ODB is now read as corrupt. In my test setup I can start from 
> scratch, but this would be horrible for any existing experiment.
> Edit: I noticed the 'make testdiff' pipeline is failing... it also fails locally... investigating
> Early performance results:
> In early tests, ~700 PCs logging 10 unique arrays of 10 doubles into 
> Equipment variables in the ODB seem to perform well... All transactions 
> from client PCs finish within a couple of ms or less
> ==========================================================================
> Questions:
> Does the community here have strong opinions about increasing the 
> MAX_CLIENTS and MAX_RPC_CONNECTION limits? 
> Am I looking at this problem in a naive way?
> 
> Potential solutions other than increasing the MAX_CLIENTS limit:
> -Make worker threads inside the supervisor (not a separate process), I am 
> using TMFE, so I can dynamically create equipment. I have not yet taken a 
> deep dive into how any multithreading is implemented
> -One could have a round robin system to load balance between a limited pool 
> of 'worker frontend' processes. I don't like this solution as I want to be 
> able to clearly see which client PCs have been set up to log too much data
> ==========================================================================
  1893   01 May 2020   Pierre Gorel   Forum   Taking MIDAS beyond 64 clients
> - On the other hand, if we have to break compatibility, now is maybe a good time since most accelerators worldwide are off. But before doing so, I would like to get feedback from the main experiments 
> around the world (MEG, T2K, g-2, DEAP besides ALPHA).

Hello Stefan,
For what it's worth, DEAP will not be impacted: as we have been taking data around the clock for the last few years, we froze the code running on the computers. We may have a window of opportunity for an upgrade in a few months, but such a move has not been discussed yet.

Best regards,
Pierre