Entry  05 Jan 2021, Isaac Labrie Boulay, Bug Report, Logger: Disk nearly full. 
Hi all,

I've run into a problem where my experiment gets interrupted with a message from 
the logger saying that my disk is nearly full. This does not make sense to me 
because I have deleted almost all the data files from my data directory. I'm 
guessing that somewhere the ODB perceives that the directory is full when in 
reality it's not.

Here is the exact message:

[ODBEdit,INFO] Run #252 stopped

09:22:19 [Logger,TALK] disk nearly full, stopping the run

09:22:19 [Logger,ERROR] [mlogger.cxx:4475:log_write,ERROR] Disk 
'/home/caendaq/ANIS/data/run00252.mid.lz4' is almost full: 81 MiBytes free out 
of 922497 MiBytes, stopping the run

Does anybody have a solution for this? Thanks so much.

Isaac
    Reply  06 Jan 2021, Stefan Ritt, Bug Report, Logger: Disk nearly full. 
The logger simply requests the free disk space from the operating system, in the same 
way as the "df" command does. Can you do a "df" on your system? I have seen that some file 
systems do not free up space immediately when you delete files, but only some time later (like 24h).
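In case it helps with debugging, here is a minimal sketch (not the actual mlogger code) of 
querying free space the way "df" does, using statvfs(); the path is hypothetical. Note also 
that most file systems only return the space of deleted files once every process holding 
them open has closed them.

/* sketch: query free disk space the way "df" does */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
   struct statvfs st;
   if (statvfs("/home/caendaq/data", &st) == 0) {
      /* f_bavail = blocks available to unprivileged users */
      double free_mib = (double)st.f_bavail * st.f_frsize / (1024.0 * 1024.0);
      printf("free: %.0f MiB\n", free_mib);
   }
   return 0;
}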

Stefan
       Reply  06 Jan 2021, Isaac Labrie Boulay, Bug Report, Logger: Disk nearly full. 
> The logger simply requests the free disk space from the operating system, in the same 
> way as the "df" command does. Can you do a "df" on your system? I have seen that some file 
> systems do not free up space immediately when you delete files, but only some time later (like 24h).
> 
> Stefan

Thanks Stefan. Yes the files were still held open by some processes. It's solved now.

Cheers.

Isaac
Entry  17 Dec 2020, Amy Roberts, Suggestion, Improving variable functionality in Sequencer? 
We're using the sequencer to manage runs, and this typically looks something like (see the sketch after this list):

1. save ODB keys to variables via ODBGET
2. set ODB keys to new values for a "pre-run" process
3. return ODB keys to values saved in step 1
4. take data
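In sequencer code, the pattern is roughly the following (a sketch: the ODB path and variable 
name are made up, and the ODBGET/ODBSET syntax is as I understand it from the wiki). With 
dozens of keys, steps 1 and 3 grow very quickly:

ODBGET "/Detector/HV/Bias (V)", saved_bias    # 1. save the current value
ODBSET "/Detector/HV/Bias (V)", 0             # 2. pre-run setting
# ... pre-run process runs here ...
ODBSET "/Detector/HV/Bias (V)", $saved_bias   # 3. restore the saved value
# 4. start the run and take data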

The problem I'm running into is that the list of ODB keys to save is pretty 
unwieldy.  I'm wondering if there are sequencer features that exist or that I could 
request that might make this easier.

For example, having a way to list ODB keys, save ODB directories, and load ODB 
directories would be a much more concise way for me to write my script.

Another option might be to have some version of the ODBSET wildcards for ODBGET.  
Although for this, setting the variable names might be tricky.  

In any case, even being able to ODBGET an array and set that to one variable name 
would be a big improvement.
    Reply  05 Jan 2021, Amy Roberts, Suggestion, Improving variable functionality in Sequencer? 
Hello, just wanted to re-ping on this question now that folks are starting to get back from 
the holidays.
       Reply  06 Jan 2021, Stefan Ritt, Suggestion, Improving variable functionality in Sequencer? (attachment: laser.msl)
I guess you are using the wrong pattern here. There is no need to copy ODB values to local variables, 
then change them, then write them back. You can instead write values directly to the ODB. We run 
all our experiments that way and we can do what we want. So most of our scripts have sections 
like

 ODBSUBDIR "/Equipment/Laser/Variables"
   ODBSET "Setting[*]", 0, 0
   ODBSET "Output[1]", 0, 0
   ODBSET "Output[2]", 1, 0
   ODBSET "Output[3]", 0, 0
   ODBSET "Output[4]", 1, 1
 ENDODBSUBDIR

Note that both the path and the indices can contain wildcards, making this pattern more 
flexible. Wildcards are however not (yet) supported for local variables, which is why we use 
the ODBSET directive directly.

I attach a larger example from the MEG experiment here for your reference.

Stefan
Entry  18 Dec 2020, Stefan Ritt, Suggestion, Code formatting (attachments: .clang-format, cnaf_callback_llvm.cxx, cnaf_callback_root.cxx, cnaf_callback_gnu.cxx, cnaf_callback_google.cxx)
May I ask for your quick opinion on code formatting? MIDAS had a coding style 
which pretty much followed the ROOT coding style described at

https://root.cern/contribute/coding_conventions/

so we followed the "3 spaces indent" convention, braces according to Kernighan & 
Ritchie, and a few other things. I see however that code written by different 
people is still formatted differently, like spaces before and after comparison 
operators etc. I wonder if it would make sense to keep a consistent code 
formatting throughout the whole midas repository.

Looking again at what the ROOT guys do (see link above), they have a ClangFormat 
file, which I attached to this post. Putting this file into the root of midas 
ensures that all files are formatted in exactly the same way, which would greatly 
increase readability.

The nice thing about ClangFormat is that it can be integrated into my editor (CLion) as 
well as into emacs and vim:

https://clang.llvm.org/docs/ClangFormat.html

This would also make the emacs settings in our files obsolete:

/* emacs
 * Local Variables:
 * tab-width: 8
 * c-basic-offset: 3
 * indent-tabs-mode: nil
 * End:
 */

I don't like these because they only work for people using emacs. If everybody 
put settings for their favourite editor into the files, all our source files 
would become quite cluttered.

So the question now is which style to use? I attached different trials with a simple 
file from the distribution, so you can see the differences. They use the styles from

- LLVM
- ROOT
- GNU
- Google

I consciously skipped the "Microsoft" style ;-)

Which one should we settle on? Any opinions? If I don't hear anything, I will pick a 
style at the end of this year 2020. I slightly favour the ROOT style, although 
I don't like that the "case" is not indented there under the opening brace of the 
switch statement, which seems inconsistent to me. The only one doing that right is the 
Google format, but that one has an indentation of 2 chars instead of our usual 3 chars. 
At the end of the day I think it's not so important which style we agree on, as long 
as we DO have a common style for all midas files.

Best,
Stefan
    Reply  04 Jan 2021, Stefan Ritt, Suggestion, Code formatting (attachment: .clang-format)
After pondering over the holidays, I decided to use the widely used LLVM code formatting, 
just adapted slightly for 3 spaces and "case" indentation in a "switch" statement. This 
formatting is now very close to our original one. Nevertheless, I did not reformat all 
existing code, since that would screw up the git repository, and you could then no longer 
see who wrote which line of code. But with the .clang-format file now in the midas root, 
all NEW files will follow that standard.
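For reference, the adaptation boils down to something like this in the .clang-format file 
(a sketch; the file committed to the repository is authoritative):

BasedOnStyle: LLVM
IndentWidth: 3
IndentCaseLabels: true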

The CLion editor automatically picks up the .clang-format file if you enable ClangFormat 
via Preferences -> Code Style -> General -> Enable ClangFormat.

EMACS can also use this file by adding the following lines to your .emacs:

(load "<path-to-clang>/tools/clang-format/clang-format.el")
(global-set-key [C-M-tab] 'clang-format-region)

One problem left is that if you check out midas on a new machine, you might not have your 
personal .emacs file there. If there is a way to ship a .emacs with midas which gets 
automatically loaded, I would be happy to put this into the distribution.

Stefan
Entry  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
Hi all,

I'm currently trying to build events using block transfers. The worry was 
that organizing and packaging bank data into an array would produce too much dead 
time, causing too many missed events. Trying out that method, I'm running into all 
sorts of issues, such as unaligned transfers where the QDC events are unaligned, or 
improperly aligned banks. It's giving me a headache.

My question is, if I were to revert to simple 32-bit read cycles and use 
the fevme.cxx template's method of organizing data before sending it to the 
buffer, what kind of dead time should I expect? Am I wrong to assume that this 
would result in dead time at all? I'm using a CAEN V792n 16-channel QDC and the hit 
frequency that I'm using to test is 20 kHz.

Thanks.

Isaac
    Reply  16 Dec 2020, Konstantin Olchanski, Forum, Issues building banks. 
> I'm currently trying to build events through doing block transfers.

I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
use. The restrictions on data alignment come from the VME master.
I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
I think there is no restriction for USB-VME bridges and similar.

Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
(no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
single-word reads instead of block transfer)?

> The worry was that organizing and packaging bank data into an array would produce too much dead time 
causing too many missed events.

Valid concerns.

> I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
improperly aligned banks.

You should not see any problems with unaligned transfers if you give the DMA engine
correct memory addresses as required by the hardware:

- always aligned to 32-bit (4 bytes, last two address bits set to 0)
- aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes, 
last 3 address bits set to 0)
- aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).

You also need to specify the correct amount of data to read: the number of bytes should be a 
multiple of 4 for 32-bit transfers, a multiple of 8 for 64-bit transfers and a multiple of 16 
for 128-bit transfers (2eVME/2eSST).
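As an illustration (not code from any particular driver), rounding the byte count up to the 
required multiple is simple bit arithmetic when the alignment is a power of two:

#include <stddef.h>

/* round nbytes up to the next multiple of align (align = 4, 8 or 16) */
size_t align_up(size_t nbytes, size_t align)
{
   return (nbytes + align - 1) & ~(align - 1);
}

/* example: 19 words = 76 bytes; align_up(76, 8) = 80, i.e. one extra pad word */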

Very often this requires reading "extra" data words. Most VME modules can generate extra pad 
words to align the event length to the DMA restrictions. Sometimes you need to enable this 
in a control register (V792, V1190).

> Giving me a headache.

Me too. MIDAS recently introduced the QWORD 64-bit data type; banks of this type
should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).

With QWORD banks, you need to use bk_init32a() instead of bk_init32().
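For example, a readout routine might look like this (a sketch based on the standard midas 
bank API; the bank name and the fill step are made up):

#include "midas.h"

INT read_trigger_event(char *pevent, INT off)
{
   DWORD *pdata;

   bk_init32a(pevent);   /* 64-bit aligned banks, needed for MBLT64/QWORD */
   bk_create(pevent, "ADC0", TID_DWORD, (void **)&pdata);
   /* ... copy words from the DMA buffer into *pdata++ here ... */
   bk_close(pevent, pdata);

   return bk_size(pevent);
}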

> My question is, if I were to revert back to simple 32 bit read cycles

Yes, I always test with single-word reads first, then the 32-bit block transfer, and the 64-bit 
block transfer last.

Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
with bugs in the frontend, etc) and this approach helps to identify the source
of trouble.

> and using 
> the fevme.cxx template's method of organizing data before sending them to the 
> buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
> would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
> frequency that I'm using to test is 20kHz.

Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.

The old fevme frontend is based on the mfe.c framework and implementing
async readout requires special contortions. The structure of the new TMFE C++ frontend
class is supposed to make it easier, but I do not have an example TMFE based fevme yet.

P.S. Without using block transfer, your max rate is limited to:

16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for 
correct BLT64 block read).

using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.

(you do not say if you have any other VME modules you have to read)

K.O.
       Reply  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
Thanks for the quick reply,

> > I'm currently trying to build events through doing block transfers.
> 
> I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> use. The restrictions on data alignment come from the VME master.
> I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> I think there is no restriction for USB-VME bridges and similar.
> 
> Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> single-word reads instead of block transfer)?

I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
check on every poll_event() and read whatever data is in whichever module has data ready (TDC_dataready || 
QDC_dataready). The VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data 
by BLT32. Sorry for the confusing question (can you tell I'm an intern?).

> > The worry was that organizing and packaging bank data into an array would produce too much dead time 
> causing too many missed events.
> 
> Valid concerns.
> 
> > I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
> improperly aligned banks.
> 
> You should not see any problems with unaligned transfers if you give the DMA engine
> correct memory addresses as required by the hardware:
> 
> - always aligned to 32-bit (4 bytes, last two address bits set to 0)
> - aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes, 
> last 3 address bits set to 0)
> - aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).
> 
> You also need to specify correct amount of data to read: number of bytes should be multiple of 4 for 32-
> bit transfers, multiple to 8 for 64-bit transfers and multiple of 16 for 128-bit transfers (2eVME/2eSST).

I am transferring 32-bit words. Transferring 32-bit words should always read multiples of 4 bytes so that's 
good.

> Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to 
> align event length to DMA restrictions. Sometimes you need to
> enable this in a control register (V792, V1190).
> 
> > Giving me a headache.
> 
> Me too. MIDAS recently introduced the QWORD 64-bit data type, banks of this type
> should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
> transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).
> 
> With QWORD banks, you need to use bk_init32a() instead of bk_init32().
> 
> > My question is, if I were to revert back to simple 32 bit read cycles
> 
> Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit 
> block transfer last.
> 
> Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
> with bugs in the frontend, etc) and this approach helps to identify the source
> of trouble.
> 
> > and using 
> > the fevme.cxx template's method of organizing data before sending them to the 
> > buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
> > would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
> > frequency that I'm using to test is 20kHz.
> 
> Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.
> 
> The old fevme frontend is based on the mfe.c framework and implementing
> async readout requires special contortions. The structure of the new TMFE C++ frontend
> class is supposed to make it easier, but I do not have an example TMFE based fevme yet.
> 
> P.S. Without using block transfer, your max rate is limited to:
> 
> 16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for 
> correct BLT64 block read).
> 
> using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.
> 
> (you do not say if you have any other VME modules you have to read)
> 

Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
stick to 1 word transfers.

The way that transfers are done in fevme.cxx requires iterating through 16-word arrays a number 
of times (3 times, I believe, if you include the iterations taking place in v792_EventRead()). 
Does that not pose a significant dead time concern?

> K.O.

Thanks again for taking the time to help me out!

Cheers.

Isaac
          Reply  16 Dec 2020, Konstantin Olchanski, Forum, Issues building banks. 
> > > I'm currently trying to build events through doing block transfers.
> > 
> > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> > use. The restrictions on data alignment come from the VME master.
> > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > I think there is no restriction for USB-VME bridges and similar.
> > 
> > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> > single-word reads instead of block transfer)?
> 
> I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
> read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
> VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
> the confusing question (Can you tell I'm an intern?).
> 

Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
at maximum speed. This is because you must have two activities happening concurrently:

(1) tell the VME bridge to read data,
(2) package this data into midas banks and events and write it to the MIDAS event buffer.

If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
and unless (2) takes 0 seconds (it does not) you will have a slow down.

So for maximum data rate, I prefer to have 3 threads:

thread 1: run the VME transfers, store data in circular buffer (today it would be std::deque<std::vector<char>>)
thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer 
(std::deque<EVENT_HEADER*>).
thread 3: write data to midas event buffer (call bm_send_event(), etc)

This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
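Below is a minimal, self-contained sketch of this pipeline structure in modern C++ (not the 
actual TMFE code; function and variable names are made up, and only stages 1 and 2 are shown, 
with stage 3 indicated in comments):

#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

static std::mutex mtx;
static std::condition_variable cv;
static std::deque<std::vector<char>> raw;   // stage 1 -> stage 2 buffer
static bool done = false;

// stand-in for the actual VME block transfer (hypothetical)
static std::vector<char> read_vme_block() { return std::vector<char>(72); }

static void reader(int nevents)   // thread 1: keep the VME bus busy
{
   for (int i = 0; i < nevents; i++) {
      std::vector<char> buf = read_vme_block();
      { std::lock_guard<std::mutex> lock(mtx); raw.push_back(std::move(buf)); }
      cv.notify_one();
   }
   { std::lock_guard<std::mutex> lock(mtx); done = true; }
   cv.notify_one();
}

static void encoder()   // thread 2: build banks/events from the raw data
{
   for (;;) {
      std::unique_lock<std::mutex> lock(mtx);
      cv.wait(lock, [] { return !raw.empty() || done; });
      if (raw.empty()) return;   // reader finished and queue drained
      std::vector<char> buf = std::move(raw.front());
      raw.pop_front();
      lock.unlock();
      // here: bk_init32a()/bk_create()/bk_close(), then hand the finished
      // event to thread 3 (which calls bm_send_event()) via a second deque
      printf("encoded %d bytes\n", (int)buf.size());
   }
}

int main()
{
   std::thread t1(reader, 10), t2(encoder);
   t1.join();
   t2.join();
   return 0;
}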

>
> Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
> stick to 1 word transfers.
>

I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:

V7865: DWORD read - CPU - PCI bus - tsi148 - VME
V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back, 
"extract data from USB packet")

> 
> The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3 
> times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a 
> significant deadtime concern? 
> 

Hmm... I am not sure which fevme you refer to. I guess I can find a version of fevme.cxx where 
data is read at maximum VME speed if you want it.

K.O.
             Reply  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
> > > > I'm currently trying to build events through doing block transfers.
> > > 
> > > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> > > use. The restrictions on data alignment come from the VME master.
> > > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > > I think there is no restriction for USB-VME bridges and similar.
> > > 
> > > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> > > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> > > single-word reads instead of block transfer)?
> > 
> > I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
> > read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
> > VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
> > the confusing question (Can you tell I'm an intern?).
> > 
> 
> Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
> at maximum speed. This is because you must have two concurrent activities happening at the same time:
> 

I am using the mfe.cxx backend thread, I'm guessing that this is the file you are referring to.

> (1) tell the VME bridge to read data,
> (2) package this data into midas banks and events and write it to the MIDAS event buffer.
> 
> If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
> and unless (2) takes 0 seconds (it does not) you will have a slow down.
> 

I see.


> So for maximum data rate, I prefer to have 3 threads:
> 
> thread 1: run the VME transfers, store data in circular buffer (today it would be std::deque<std::vector<char>>)
> thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer 
> (std::deque<EVENT_HEADER*>).
> thread 3: write data to midas event buffer (call bm_send_event(), etc)
> 
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).

Yes, it seems like a bit of work.

> >
> > Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
> > stick to 1 word transfers.
> >
> 
> I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:
> 
> V7865: DWORD read - CPU - PCI bus - tsi148 - VME
> V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back, 
> "extract data from USB packet")

I found the following information in the CAEN V1718 manual:

"Transfer Rate = ~30MByte/s. Transfer rate supported in MBLT read cycles (block size = 32 kb), using a PC host with 
Windows XP or Linux and High Speed USB"

I'm guessing the sentence simply means that the rate increases with multiplexed block transfers. If 
the transfer rate is 30 MBytes/s, I should be able to transfer about 7.5 million 32-bit words per second.

> 
> > 
> > The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3 
> > times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a 
> > significant deadtime concern? 
> > 
> 
> Hmm... I am not sure what fevme you refer to. I guess I can find version of fevme.cxx where data is read at
> maximum VME speed if you want it.

This is the VME C++ frontend example in the directory /midas/examples/Triumf/c++/

If you can find a faster version of this code I would definitely like to check it out!

> 
> K.O.


Thanks again.

Isaac
             Reply  16 Dec 2020, Stefan Ritt, Forum, Issues building banks. 
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).

Actually that's not true. Just look at 

midas/examples/mtfe/mtfe.c

this is an example of a frontend with equipment with the EQ_USER flag, which allows you to easily run a 
separate thread (or more) for event collection and processing. Of course it's all old-fashioned C style 
(the code is from 2007), but it works.

Stefan
                Reply  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
> > This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
> 
> Actually that's not true. Just look at 
> 
> midas/examples/mtfe/mtfe.c
> 
> this is an example for a frontend with equipment with the EQ_USER flag, which allows you easily to run a separate 
> thread (or more) for event collection and processing. Of course all old-fashioned C style (code is from 2007) but it 
> works.
> 
> Stefan

Thank you sir, I'll give it a look.

Cheers

Isaac
Entry  24 Nov 2020, Isaac Labrie Boulay, Forum, Invalid name "Analyzer/Tests" 
Hi everyone,

I recently took the analyzer template from $MIDASSYS/examples/experiment and 
modified it to be able to use Roody on a very simple frontend setup. The 
analyzer works fine and I am able to view the online histograms, but my console 
prints out this error:

[Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name 
"/Analyzer/Tests/Always true/Rate [Hz]" passed to db_create_key_wlocked: should 
not contain "["                      
[Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name 
"/Analyzer/Tests/low_sum/Rate [Hz]" passed to db_create_key_wlocked: should not 
contain "["
[Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name 
"/Analyzer/Tests/high_sum/Rate [Hz]" passed to db_create_key_wlocked: should not 
contain "["

The error keeps getting printed even after stopping the run.

I do have these 3 keys under Analyzer/Tests/ in my ODB but I do not know where 
they come from. Any suggestions on what the root of the issue is?

Thanks for the help!

Isaac
    Reply  27 Nov 2020, Konstantin Olchanski, Forum, Invalid name "Analyzer/Tests" 
> I've recently took the analyzer template from $MIDASSYS/examples/experiment and 
> modified it to be able to use Roody on a very simple frontend setup.

Hmm... the old midas analyzer framework is very dated and I do not recommend
it for new experiments.

A newer analyzer system is ROOTANA and an even newer one is the "m" analyzer (manalyzer). These
analyzers progressively introduce improved C++-style programming environments, amongst other
improvements. If starting from scratch, I recommend that you use manalyzer (currently from the rootana
git repository).

> The analyzer works fine and I am able to view the online histograms but my console 
> prints out this error:
> 
> [Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name 
> "/Analyzer/Tests/Always true/Rate [Hz]" passed to db_create_key_wlocked: should 
> not contain "["

The error says what it means: "[" is not a permitted character in ODB names. It is used
by many ODB functions to access array elements.

The midas analyzer example should be updated to change "[Hz]" to "(Hz)" or something similar.
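For example (a hypothetical illustration of the renaming, not the actual analyzer code):

HNDLE hDB;
float rate = 0;
cm_get_experiment_database(&hDB, NULL);
/* rejected: "[" collides with the array-index syntax, e.g. "/Some/array[3]" */
db_set_value(hDB, 0, "/Analyzer/Tests/low_sum/Rate [Hz]", &rate, sizeof(rate), 1, TID_FLOAT);
/* accepted */
db_set_value(hDB, 0, "/Analyzer/Tests/low_sum/Rate (Hz)", &rate, sizeof(rate), 1, TID_FLOAT);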

K.O.
       Reply  27 Nov 2020, Konstantin Olchanski, Forum, Invalid name "Analyzer/Tests" 
https://bitbucket.org/tmidas/midas/issues/298/invalid-odb-names-in-example-midas
K.O.
          Reply  07 Dec 2020, Isaac Labrie Boulay, Forum, Invalid name "Analyzer/Tests" 
> https://bitbucket.org/tmidas/midas/issues/298/invalid-odb-names-in-example-midas
> K.O.

Hi K.O.

Ok I see, I will use the most up to date analyzer.

Thanks a ton for your help.

Isaac
Entry  24 Sep 2020, Gennaro Tortone, Forum, subrun  
Hi,

I was wondering if there is a "mechanism" to run an executable
file after each subrun is closed...

I need to convert .mid.lz4 subrun files to ROOT (TTree) files;

Thanks,
Gennaro
    Reply  01 Dec 2020, Stefan Ritt, Forum, subrun  
There is no "mechanism" foreseen to be executed after each subrun. But you could 
run a shell script after each run which loops over all subruns and converts them 
one after the other.
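Something along these lines should work (a sketch: adjust the glob to your mlogger file 
naming, and "my_mid2root" stands for whatever converter you use):

#!/bin/sh
# convert all subrun files of the run that just ended
run=00123
for f in /data/run${run}*.mid.lz4; do
   my_mid2root "$f"
done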

Stefan

> Hi,
> 
> I was wondering if there is a "mechanism" to run an executable
> file after each subrun is closed...
> 
> I need to convert .mid.lz4 subrun files to ROOT (TTree) files;
> 
> Thanks,
> Gennaro
       Reply  01 Dec 2020, Ben Smith, Forum, subrun  
We use the lazylogger for something similar to this. You can specify the path to a custom script, and it will be run for each midas file that gets written:
https://midas.triumf.ca/MidasWiki/index.php/Lazylogger#Using_a_script
This means that you don't have to wait until the end of the run to start processing.

If the ROOT conversion is going to be slow, but you have a batch system available, you could use the lazylogger script to submit a job to the batch system for each file.
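The per-file script can then be very small, e.g. (a sketch: I believe lazylogger passes the 
file name as an argument, but check the wiki page above for the exact convention; 
"convert_job.sh" is a hypothetical SLURM batch script):

#!/bin/sh
# called by lazylogger for each newly written file; hand it to the batch system
exec sbatch convert_job.sh "$1"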

> 
> > Hi,
> > 
> > I was wondering if there is a "mechanism" to run an executable
> > file after each subrun is closed...
> > 
> > I need to convert .mid.lz4 subrun files to ROOT (TTree) files;
> > 
> > Thanks,
> > Gennaro
Entry  30 Nov 2020, Konstantin Olchanski, Info, more wisdom from linux kernel people 
As you may know, I am a big fan of two software projects - the linux kernel and ROOT. The linux kernel is one of 
the few software projects "done right". ROOT is where normal people try to "get it right" with real-world levels 
of success. I use both daily and I try to apply their ways and methods to MIDAS as much as I can.

So, just in time for our discussion of array indexes, a talk by gregkh showed
up on slashdot. The title is "how to keep your users happy". (Nobody
ever wants to be nasty to their users, but do read his talk.)

https://git.sr.ht/~gregkh/presentation-application_summit/tree/main/keep_users_happy.pdf

The talk refers to some older stuff, still relevant of course; in case you miss the links
in the pdf file, here they are:

https://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
https://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html
https://ozlabs.org/~rusty/ols-2003-keynote/img0.html (click on "continue" to see next page)

K.O.
Entry  24 Nov 2020, Amy Roberts, Suggestion, ODBSET wildcards with array keys in Sequencer files 
I'm interested in using the matching feature for ODBSET explained on 
https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
array, like:

COMMENT "Ground the detectors"
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0

Currently I get an error when I try to run this script.  Is this expected?  Would it 
be possible to implement matching for array values?

Thanks!
    Reply  25 Nov 2020, Marco Francesconi, Suggestion, ODBSET wildcards with array keys in Sequencer files 
Hi,
I guess the issue is in the "[?]" part of the command; the indexing is handled differently from the odb path and does not 
support "?".
Are you trying to set only the first 9 channels?
Could you try with "[*]" or "[0-9]" instead?

Marco

> I'm interested in using the matching feature for ODBSET explained on 
> https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
> array, like:
> 
> COMMENT "Ground the detectors"
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> 
> Currently I get an error when I try to run this script.  Is this expected?  Would it 
> be possible to implement matching for array values?
> 
> Thanks!
       Reply  25 Nov 2020, Amy Roberts, Suggestion, ODBSET wildcards with array keys in Sequencer files 
The following all fail with "Cannot find ODB key "<key>""

ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0


> Hi,
> I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not 
> support "?".
> Are you trying to set only the first 9 channels?
> Could you try with "[*]" or "[0-9]" instead?
> 
> Marco
> 
> > I'm interested in using the matching feature for ODBSET explained on 
> > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
> > array, like:
> > 
> > COMMENT "Ground the detectors"
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > 
> > Currently I get an error when I try to run this script.  Is this expected?  Would it 
> > be possible to implement matching for array values?
> > 
> > Thanks!
          Reply  25 Nov 2020, Marco Francesconi, Suggestion, ODBSET wildcards with array keys in Sequencer files 
I created some keys in my ODB to try to match yours.
The ODBSET commands you wrote are all working fine (of course with different results), except for 
"/Detectors/Det*/Settings/Charge/Bias (V)*", which I will have to look into.
In any case, the error message I'm getting is "could not match any key" and not the one you are reporting.

Now I'm a bit puzzled:
Are you sure your ODB contains those keys?
Are you testing the ODBSET inside a more complex sequencer or on its own?

Maybe I can try to reproduce it using your ODB setup.
Could you send an ODB dump of the "/Detectors" folder using the "save" command of odbedit ("cd /Detectors" and then "save detector.odb")?

Best,

Marco


> The following all fail with "Cannot find ODB key "<key>""
> 
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> 
> 
> > Hi,
> > I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not 
> > support "?".
> > Are you trying to set only the first 9 channels?
> > Could you try with "[*]" or "[0-9]" instead?
> > 
> > Marco
> > 
> > > I'm interested in using the matching feature for ODBSET explained on 
> > > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
> > > array, like:
> > > 
> > > COMMENT "Ground the detectors"
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > > 
> > > Currently I get an error when I try to run this script.  Is this expected?  Would it 
> > > be possible to implement matching for array values?
> > > 
> > > Thanks!
             Reply  25 Nov 2020, Amy Roberts, Suggestion, ODBSET wildcards with array keys in Sequencer files 
I think the issue may be the version of MIDAS I'm using.  Mine is current as of February 4, 2020.  

But since then there have been changes to the sequencer code, specifically parts that handle indexing.

I'll try this out with an updated version of MIDAS and report back if there are still any issues after updating.

> I created some keys in my ODB to try to match yours.
> The ODBSET commands you wrote are all working fine (of course with different results), except only for the "/Detectors/Det*/Settings/Charge/Bias (V)*" which I will have to 
> look into.
> In any case the error message I'm getting is "could not match ay key" and not the one you are reporting.
> 
> Now I'm a bit puzzled:
> Are you sure your ODB contains those keys?
> Are you testing the ODBSET inside a more complex sequencer or on its own?
> 
> Maybe I can try to reproduce it using your ODB setup.
> Could you send an ODB dump of the "/Detectors" folder using the "save" command of odbedit ("cd /Detectors" and then "save detector.odb")?
> 
> Best,
> 
> Marco
> 
> 
> > The following all fail with "Cannot find ODB key "<key>""
> > 
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> > 
> > 
> > > Hi,
> > > I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not 
> > > support "?".
> > > Are you trying to set only the first 9 channels?
> > > Could you try with "[*]" or "[0-9]" instead?
> > > 
> > > Marco
> > > 
> > > > I'm interested in using the matching feature for ODBSET explained on 
> > > > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
> > > > array, like:
> > > > 
> > > > COMMENT "Ground the detectors"
> > > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > > > 
> > > > Currently I get an error when I try to run this script.  Is this expected?  Would it 
> > > > be possible to implement matching for array values?
> > > > 
> > > > Thanks!
          Reply  27 Nov 2020, Konstantin Olchanski, Suggestion, ODBSET wildcards with array keys in Sequencer files 
> The following all fail with "Cannot find ODB key "<key>""
> 
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> 

It would be cool if ODB pattern matching in the sequencer
were consistent with the ODB pattern matching in the json-rpc
interface for web pages:

https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax

K.O.
             Reply  30 Nov 2020, Marco Francesconi, Suggestion, ODBSET wildcards with array keys in Sequencer files 
I totally agree that we should have a consistent formatting for array index expansion.
I had a look at the mjsonrpc code and I found the function parse_array_index_list(...) which does this job.
I have a similar function (adapted from previous code) in odb.cxx called strarrayindex(...) that is designed for the same "consistency" purposes between odbedit and the sequencer.

Let me note a few points:
- mjsonrpc has a very different way to write the full array (no indexes given), while the sequencer currently requires "[*]" to do the same (otherwise it only changes the first value of the array)
- currently the sequencer and the underlying ODB calls use two indexes (which are the same if you want to write only one key), so we will need a serious rewrite to allow something like "ODBSET array[1,3,5]"
- if I understood the code correctly, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing: for example, if you are writing an array of n parameters on a DAQ 
board you will call the hotlink on that key n times
- in addition to that, the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how should it parse something like "[$a,1]" or "[$a*]"?

For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencers or having a difference between the two implementations.
For the others I guess we can find a way out; however, that's a major modification, so I will put it on my todo list for when I can find some free time.
In any case I would propose to merge the two functions, so we only have to maintain a single implementation of the parsing.

I guess it's a good moment to brainstorm about this; let me know what you think.

Marco


> > The following all fail with "Cannot find ODB key "<key>""
> > 
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> > 
> 
> It would be cool if ODB pattern matching in the sequencer
> were consistent with the ODB pattern matching in the json-rpc
> interface for web pages:
> 
> https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
> 
> K.O.
                Reply  30 Nov 2020, Konstantin Olchanski, Suggestion, ODBSET wildcards with array keys in Sequencer files 
> I totally agree that we should have a consistent formatting for array index expansion.
> I had a look to the mjsonrpc code and I found the function parse_array_index_list(...) which does this job.

Yes, it is good to review this stuff. I think the json-rpc call should accept more array index patterns:

a[*] - whole array (even though it is an unnatural use in javascript: we do not say "let a[*] = b[*]", we say "let a = b")
a[5-] - from the 5th element to the end (in case we do not know the length)
a[-10] - from the first element to element 10 (this is the same as a[0-10], but needed for consistency with the previous case)

K.O.

> I have a similar function (adapted form previous code) in odb.cxx called strarrayindex(...) that is designed for the same "consistency" purposes between odbedit and sequencer.
> 
> Let me put few points that I noticed:
> - mjsonrpc has a very different way to write the full array (no indexes given) while currently sequencer requires "[*]" to do the same (otherwise it only changes the first value of the array)
> - currently the sequencer and the underlying ODB calls use two indexes (that are the same if you want to write only one key) so we will need a serious rewriting to allow something like "ODBSET array[1,3,5]"
> - if I correctly understood the code, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing, for example if you are writing an array of n parameters on a DAQ 
> board you will call the hotlink on that key n times
> - in addition to that the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how it should parse something like "[$a,1]" or "[$a*]"?
> 
> For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencers or having a difference between the two implementations.
> For the others I guess we can find a way out, however that's a major modification so I will put it on my todo list when I can find some free time.
> In any case I would propose to merge the two functions, so we have only to maintain a single implementation of the parsing.
> 
> I guess it's a good moment to brainstorm about that, let me know what you think
> 
> Marco
> 
> 
> > > The following all fail with "Cannot find ODB key "<key>""
> > > 
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> > > 
> > 
> > It would be cool if ODB pattern matching in the sequencer
> > were consistent with the ODB pattern matching in the json-rpc
> > interface for web pages:
> > 
> > https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
> > 
> > K.O.
             Reply  30 Nov 2020, Stefan Ritt, Suggestion, ODBSET wildcards with array keys in Sequencer files 
Hi Konstantin,

we are considering making the range selection uniform among json, sequencer and the 
odbedit "set" command. Having multiple ranges like [1,4-5] will be quite some work, so 
my question is: did you just implement it on the json side because it was easy, or are 
there experiments that really need it? Wouldn't it be enough to have

[*]
[n]
[n-m]

This way we always have only one db_set_data() call behind it. Any set of indices 
we would have to split into several db_set_data() calls, which especially for the front-end 
configuration can cause trouble by triggering a hot link on each access.

Stefan
                Reply  30 Nov 2020, Konstantin Olchanski, Suggestion, ODBSET wildcards with array keys in Sequencer files 
> 
> we are considering to make the range selection uniform among json, sequencer and 
> odbedit "set" command. Having multiple ranges like [1,4-5] will be quite some work, so 
> my question is did you just implement it on the json side because it was easy, or are 
> there experiments who really need it? Wouldn't it be enough to have
> 
> [*]
> [n]
> [n-m]
> 

It has been a long time, but most likely I designed the interface to work this
way to permit maximum flexibility for writing into an array using just one
rpc operation.

The generalized form is:
[range,range,range...]

where range is:
array index or
index1-index2 or
index2-index1 (write in reverse order)

This is all documented here:
https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax

I think it is too late to change it.

>
> This way we always have only one db_set_data() value behind that. Any set of indices 
> we have to split into several db_set_data()
> 

Sounds good. I think it is easy to have a common implementation:

One would need the following functions:
parse the range selection from a string into a std::vector<int> of array indices (we already have it)
call db_set_data_range() (this is easy to add).
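A sketch of such a shared parser, turning "1,4-5,9" into a list of array indices (the real 
code is parse_array_index_list() in mjsonrpc; the names here are made up, and reversed 
ranges write in reverse order as documented):

#include <sstream>
#include <string>
#include <vector>

std::vector<int> parse_index_list(const std::string& s)
{
   std::vector<int> out;
   std::stringstream ss(s);
   std::string tok;
   while (std::getline(ss, tok, ',')) {
      size_t dash = tok.find('-');
      if (dash == std::string::npos) {
         out.push_back(std::stoi(tok));          // single index
      } else {
         int a = std::stoi(tok.substr(0, dash));
         int b = std::stoi(tok.substr(dash + 1));
         int step = (a <= b) ? 1 : -1;
         for (int i = a; i != b + step; i += step)
            out.push_back(i);                    // "5-2" yields reverse order
      }
   }
   return out;
}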

>
> trouble by triggering a hot link on each access.
>

There is no escaping this trouble. Half the experiments want notification
"per array", the other half "per array element". We cannot choose one or the other for them;
we have to provide a way for the user to say how they want it.

P.S. With the existing ODB calls, you cannot do [n-m] ranges. You can write the whole array or 
one element at a time.

K.O.
Entry  17 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB 
Today I addressed a topic which has bugged me for a long time. The ODB contains 
settings under /Equipment/<name>/Common which are a "mirror" of the equipment[] 
settings in a frontend (using the mfe.cxx framework). If the "Common" entry in 
the ODB is not present (fresh experiment), the equipment[] settings from the 
frontend are copied to the ODB. But if it exists, it takes precedence over the 
equipment[] entries, which is wrong in my opinion: if you change some 
settings in equipment[] (like the logging period of the history), then recompile 
and restart the frontend, the old values in the ODB are kept and your 
modification in the frontend code has no effect.

Starting with commit c3017c6c on Nov. 17th 2020 I reversed the precedence: now, on 
each start of the frontend program, the values from equipment[] are written to 
the ODB. They are still "live": if one changes them while the frontend is 
running, that change takes effect immediately. But on the next restart of the 
frontend, the old values from equipment[] are put back.

I fell into this trap too many times, and I hope the modification helps 
everybody. If there are however experiments which rely on the fact that the 
common settings in the ODB are NOT overwritten by the frontend, please let me 
know and I can put a flag "EQUIPMENT_FE_PRECEDENCE = FALSE" somewhere to restore 
the old behaviour.

Stefan
    Reply  20 Nov 2020, Pierre-Andre Amaudruz, Info, Equipment "common" settings in ODB 
Indeed this "mirror" of the ODB in settings option can cause frustration in 
particular when we think the ODB is empty but is not.
In the other hand, over time the settings are adjusted to a particular 
configuration or touched or not by the individual run preset parameters. Later, if 
a bug or code correction requires multiple restart of the fe, for every start of 
the application, you loose the latest configuration. This can be frustrating as 
well until you force a post-setting or report the specifics parameters in the fe 
code.
BTW I believe, we originally went for the ODB priority for that specific reason.
 
I would be in favour for having a general flag (FALSE) in /experiment which would 
define this global behaviour.  
PAA

> Today I addressed a topic which bugged me since long time. The ODB contains 
> settings under /Equipment/<name>/Common which are a "mirror" of the equipment[] 
> setting in a frontend (using the mfe.cxx framework). If the "Common" entry in 
> the ODB is not present (fresh experiment), the equipment[] settings from the 
> frontend are copied to the ODB. But if it exists, it takes precedence over the 
> equipment[] entries, which is wrong in my opinion. Like if you change some 
> settings in equipment[] (like the logging period of the history), then recompile 
> and restart the frontend, the old values in the ODB are kept and your 
> modification in the frontend code has no effect.
> 
> Starting on commit c3017c6c on Nov. 17th 2020 I reversed the precedence: Now, on 
> each start of the frontend program, the values from equipment[] are written to 
> the ODB. They are still "live". If one changes them when the frontend is 
> running, that change takes effect immediately. But on the next restart of the 
> frontend, the old values from equipment[] is put back there.
> 
> I fell too many times into this trap, and I hope the modification helps 
> everybody. If there are however experiments which rely on the fact that the 
> common settings in the ODB are NOT overwritten by the frontend, please let me 
> know and I can put a flag "EQUIPMENT_FE_PRECEDENCE = FALSE" somewhere to restore 
> the old behaviour.
> 
> Stefan
    Reply  27 Nov 2020, Konstantin Olchanski, Info, Equipment "common" settings in ODB 
> Today I addressed a topic which bugged me since long time.

Right. No easy subject. For me, too, this has been a problem in MIDAS for a long time.

> Now, on each start of the frontend program, the values from equipment[] are written to 
> the ODB. They are still "live". If one changes them when the frontend is 
> running, that change takes effect immediately. But on the next restart of the 
> frontend, the old values from equipment[] is put back there.

There is a downside to this behaviour.

If some values in equipment/common are "live" and the user is expected to change them,
the user will be unpleasantly surprised when their changes magically disappear (after a reboot,
after a frontend crash, after a run restart if the experiment requires restarting some frontends
before starting a new run).

This change will also break some experiments that rely on things like specifying
event buffer names through the ODB. But experiments can adapt and specify buffer names
through a command line switch instead of the ODB.

This new way also makes the "live" Common/Period unusable. Sure, I can speed up or slow
down a frontend even during the run, but if my change does not "stick", what good is it?

Personally, I think there is no easy solution for all these troubles.

I would advocate the following approach:

- think of MIDAS as a "mature" system,
- treasure backward compatibility
- (if we must break backward compatibility to introduce a new "must have" improvement, so be it)
- document how things work. If it is clearly written down what the different fields in "common" do, fewer people 
"get burned" by unexpected or illogical things (and any non-trivial system has plenty of those).

Going back to ODB equipment/common, my experience with midas and odb tells me
that one should avoid mixing ODB entries set by the user with ODB entries set by code.

For example, separating them as equipment/settings and equipment/variables works well. Mixing
them, as in equipment/common and sequencer/state, causes trouble.

So perhaps we should split Equipment/common into two pieces: user-settable fields like
"Period" and "event buffer name" would move to equipment/settings or wherever.

This will open the discussion of which items in equipment/common should be user-settable,
and some people would want the event buffer specified in the code to prevail, while other
people would want the name from the odb to prevail, and both are valid but conflicting preferences.

Or we could bite the bullet and say that equipment/common is controlled by the frontend code
and the user should not change it (and mark it read-only in the ODB).

For all the pain this may cause, at least this will make it self-consistent.

Per this proposal, in addition to Stefan's change, the hotlink on equipment/common goes away,
"period" is no longer "live", and the whole subdirectory is made "read-only".

K.O.
       Reply  27 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB 
Ok, so what about the following proposal:

- I change the mfe.cxx code back to behave like before (the ODB has precedence and does not get overwritten when 
the front-end restarts)

- I add a global flag

BOOL equipment_common_overwrite;

and pre-set it to FALSE;

- So if nothing is changed the flag stays false and ODB keeps precedence

- If a frontend wants to overwrite equipment/common on each start, the user sets

BOOL equipment_common_overwrite = TRUE;

near the equipment[] structure in the front-end code. 

- If the flag is true, the mfe.cxx init code copies the equipment[] structure to the ODB on each frontend start

I believe this way we can keep backward compatibility and add the new way with minimal effort. The only downside 
is that all frontends on this planet have to add at least "BOOL equipment_common_overwrite = FALSE;" to their 
code.

I know global variables are evil, but this way the user can just add the line above next to the equipment[] array, so 
one sees it when one edits the equipment[] array, giving motivation to change it as needed. So the code would be



BOOL equipment_common_overwrite = TRUE;

EQUIPMENT equipment[] = {
 ....
}



An alternative way would be to add a function

  set_equipment_common_overwrite(TRUE);

into the frontend_init() code. That's somehow cleaner (it still needs an internal global variable), but it has to go 
into frontend_init(), so it won't be at the same place as the EQUIPMENT list in the frontend.

Thoughts?

Best,
Stefan
          Reply  27 Nov 2020, Konstantin Olchanski, Info, Equipment "common" settings in ODB 
Yes, I think this will work.

For old mfe.c frontends, a global variable set to "do it the new way" should be okay;
new experiments will have it the new way. Old experiments will be forced to add a one-line definition
of this global variable (otherwise mfe.o will not link), at which point they get to choose the "new way" or "old way".

For the new TMFE c++ frontend, this will work naturally when they create the Equipment Common object,
in the object constructor, you can see how it explicitly honors or overwrites the ODB common entries.

The TMFE frontend does not do a live "period", so there should be no issue with that.

Should I open a bitbucket issue "update TMFE frontend to new Equipment/Common scheme", to make sure
I do not forget about it?

K.O.
             Reply  30 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB 
Ok, I implemented it the following way:

- Added a boolean flag "equipment_common_overwrite", which must be contained in EACH frontend, preferably just 
before the EQUIPMENT structure, such as:

BOOL equipment_common_overwrite = TRUE;

EQUIPMENT equipment[] = {
...
};

- If that flag is TRUE, then the contents of the "equipment" structure are copied to the ODB on each start of the 
front-end

- If the flag is FALSE, then the ODB values are kept on the start of the front-end

The setting of the flag now depends on the philosophy of the experiment. Some experiments say that everything 
needed should be in the front-end code, so when it starts everything gets set correctly. They don't change the 
values in the ODB, but in the frontend code, which then goes into their repository. Other experiments just need 
some default values from the frontend code, and then fine-tune things by changing values in the ODB. These 
experiments should set this flag to FALSE.

*****

Please note that EVERY frontend now needs this flag, so all of you have to add it to all of your front-ends, 
otherwise the front-end will not compile! I could not figure out how this could be done without this 
requirement, since you can define a global variable only once.

*****


Stefan
                Reply  30 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB 
One more change: 

After using the new code for some hours, we realized that the "enabled" flag should not come from the frontend code, 
but always be defined by the ODB. So if you quickly have to disable some equipment because the associated hardware is 
off, you want to change this flag only in the ODB and not have to recompile the frontend. So we exclude that flag from 
being set by the frontend. It is special anyhow, because one sees all disabled equipment on the main midas status page, 
so one knows what's on and what's off.

Please comment here if you think that change causes problems. Anyhow, it now works for the enabled flag as it did 
before all these changes.

Stefan
                   Reply  30 Nov 2020, Konstantin Olchanski, Info, Equipment "common" settings in ODB 
> One more change: 
> 
> After using the new code for some hours, we realized that the "enabled" flag should not come from the frontend code, 
> but always be defined by the ODB. So if you quickly have to disable some equipment because the associated hardware is 
> off, you want to change this flag only in the ODB and not have to recompile the frontend. So we exclude that flag from 
> being set by the frontend. It is anyhow special, because one sees all disable equipment in the main midas status page, 
> so one knows what's on and what's off.
> 
> Please comment here if you think that change causes problem. Anyhow it's working now for the enabled flag as before 
> all these changes.
> 

Good catch. I still think this is fundamentally impossible to "get right". But good, you
are now in the same boat with me. The documentation will read: "if flag is TRUE, these data fields
are read from ODB, if flag is FALSE, those other fields are read from ODB". I will have to check
how this will work out for the TMFE C++ frontend (I think both mfe.c and TMFE frontends should
work "the same").

I think we have at least one month to play with this, I do not think we can do the next release
of midas until January.

K.O.
Entry  06 Nov 2020, Alexandr Kozlinskiy, Suggestion, cmake build fixes 
hi,

there are several problems with current cmake build files in midas:
- not all systems have cuda libs in /usr/local/cuda
- not all cmake versions like it when vars are redefined
  (i.e. redefining ROOT_CXX_FLAGS)
- c++ standard not matching the one used to build ROOT
- ROOTSYS is not needed to find ROOT (it is enough to have root in PATH)

I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
which tries to fix some of the problems.
Tests and comments are welcome.
    Reply  27 Nov 2020, Konstantin Olchanski, Suggestion, cmake build fixes 
Hi, Alexandr, thank you for making improvements to MIDAS. I have some questions
about your suggestions:

> there are several problems with current cmake build files in midas:
> - not all systems have cuda libs in /usr/local/cuda
> - not all cmake version like when redefining vars

we do not see these problems with the normal cmake on our current linux systems,
centos-7 and -8, Ubuntu LTS 18.04, 20.04.

so you have something different? can you be a bit more specific:
which version of cmake and which OS do you see these troubles on?

> - c++ standard not matching the one used to build ROOT
> - ROOTSYS is not needed to find ROOT (it is enough to have root in PATH)

Once again, ROOT gets tangled up with the build of MIDAS.

MIDAS does not use ROOT. As a convenience to the users, we have a "ROOT output" driver
in mlogger and we build a special executable rmlogger with ROOT. Only this special
executable should be linked with ROOT and compiled with ROOT-specific flags.

The rest of the MIDAS build should not be affected by presence or absence of ROOT.

One would have to read old messages on this forum to understand this situation.

> 
> I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
> which tries to fix some of the problems.
> Tests and comments are welcome.
>

I look at the diffs:

- CUDA detection is changed to "find_package(CUDA)". This code was added by Joseph and Ben, and there 
must be a reason why they did not use find_package(CUDA). They will have to sign off on this change.

- ROOT-related logic assumes that all of MIDAS will be built "the ROOT way". CFLAGS are changed, the C++ 
standard is changed, etc. This assumption is wrong: only rmlogger and rmana should be built "with ROOT".

If you want to follow through on this, I suggest that you split the pull request into two,
one pull request for the CUDA changes and one pull request for the ROOT changes. Also rework
your ROOT changes as I explained above (but also read all ROOT-related messages on this forum).

K.O.
Entry  05 Nov 2020, Isaac Labrie Boulay, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL' 
Hi everyone,

I have been building an experiment using the v1718 CAEN interface to talk to my modules and I am using the CAENVMElib Linux Library (2.50). I've managed to deal with data type issues by including additional libraries to my driver code but there is one type error that persists:


In file included from /usr/include/CAENVMElib.h:27:0,
             	from include/v1718.h:25,
             	from v1718.c:26:
/usr/include/CAENVMEtypes.h:323:9: error: unknown type name ‘VARIANT_BOOL’
     	CAEN_BOOL cvDS0;  	/* Data Strobe 0 signal                     	*/


The header file used to define the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:


#ifdef LINUX
#define CAEN_BYTE   	unsigned char
#define CAEN_BOOL   	int
#else
#define CAEN_BYTE   	byte
#define CAEN_BOOL   	VARIANT_BOOL
#endif


Has anyone ever run into that problem when setting up an experiment using the CAEN standard?

Thanks for your help.

Isaac
    Reply  05 Nov 2020, Pierre-Andre Amaudruz, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL' 
Hi,

You're building under a Linux-like system. You want to define LINUX and skip VARIANT_BOOL altogether.
PAA
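
For example (a sketch; adding -DLINUX to the compiler flags in the Makefile works equally well):

/* define LINUX before pulling in the CAEN headers */
#define LINUX 1
#include <CAENVMEtypes.h>
#include <CAENVMElib.h>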

> Hi everyone,
> 
> I have been building an experiment using the v1718 CAEN interface to talk to my modules and I am using the CAENVMElib Linux Library (2.50). I've managed to deal with data type issues by including additional libraries to my driver code but there is one type error that persists:
> 
> 
> In file included from /usr/include/CAENVMElib.h:27:0,
>              	from include/v1718.h:25,
>              	from v1718.c:26:
> /usr/include/CAENVMEtypes.h:323:9: error: unknown type name ‘VARIANT_BOOL’
>      	CAEN_BOOL cvDS0;  	/* Data Strobe 0 signal                     	*/
> 
> 
> The header file used to defined the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
> 
> 
> #ifdef LINUX
> #define CAEN_BYTE   	unsigned char
> #define CAEN_BOOL   	int
> #else
> #define CAEN_BYTE   	byte
> #define CAEN_BOOL   	VARIANT_BOOL
> #endif
> 
> 
> Has anyone ever ran into that problem when setting up an experiment using the CAEN standard?
> 
> Thanks for your help.
> 
> Isaac
       Reply  06 Nov 2020, Isaac Labrie Boulay, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL' 
Yes, you are right. That fixed it and my frontend is compiling.

Thanks Pierre-Andre.

Isaac


> Hi,
> 
> You're building under Linux like. You want to define the LINUX and skip the VARIANT_BOOL all together.
> PAA
> 
> > Hi everyone,
> > 
> > I have been building an experiment using the v1718 CAEN interface to talk to my modules and I am using the CAENVMElib Linux Library (2.50). I've managed to deal with data type issues by including additional libraries to my driver code but there is one type error that persists:
> > 
> > 
> > In file included from /usr/include/CAENVMElib.h:27:0,
> >              	from include/v1718.h:25,
> >              	from v1718.c:26:
> > /usr/include/CAENVMEtypes.h:323:9: error: unknown type name ‘VARIANT_BOOL’
> >      	CAEN_BOOL cvDS0;  	/* Data Strobe 0 signal                     	*/
> > 
> > 
> > The header file used to defined the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
> > 
> > 
> > #ifdef LINUX
> > #define CAEN_BYTE   	unsigned char
> > #define CAEN_BOOL   	int
> > #else
> > #define CAEN_BYTE   	byte
> > #define CAEN_BOOL   	VARIANT_BOOL
> > #endif
> > 
> > 
> > Has anyone ever ran into that problem when setting up an experiment using the CAEN standard?
> > 
> > Thanks for your help.
> > 
> > Isaac
    Reply  27 Nov 2020, Konstantin Olchanski, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL' 
> 
> The header file used to defined the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
> 
> 
> #ifdef LINUX
> #define CAEN_BYTE   	unsigned char
> #define CAEN_BOOL   	int
> #else
> #define CAEN_BYTE   	byte
> #define CAEN_BOOL   	VARIANT_BOOL
> #endif
> 

Complain to CAEN.

The year is 2020 and they should use standard C/C++ data types from stdint.h (uint32_t, etc).
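
For example, the same definitions written with stdint.h types would be unambiguous on every
platform (a sketch, not CAEN's actual code):

#include <stdint.h>
typedef uint8_t CAEN_BYTE;  /* always exactly 8 bits */
typedef int32_t CAEN_BOOL;  /* 0 = false, nonzero = true */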

K.O.
Entry  19 Nov 2020, Joseph McKenna, Forum, History plot consuming too much memory 

A user reported an issue that if they were to plot some history data from 
2019 (a range of one day), the plot would spend ~4 minutes loading and then 
crash the browser tab. This seems to affect chrome (under default settings) 
but not firefox.

I can reproduce the issue: "Data Being Loaded" shows, then the page and 
canvas load, then all variables get a correct "last data" timestamp, then 
the 'Updating data ...' status shows... then the tab crashes (chrome).


It seems that the browser is loading all data until the present day (maybe 4 
GB of data in this case). In chrome the tab then crashes. In firefox, I do 
not suffer the same crash, but I can see the single tab is using ~3.5 GB of 
RAM.

Tested with midas-2020-08-a up until the HEAD of develop

I could propose that the user use firefox, or increase the memory limit in 
chrome; however, are there plans to limit the data loaded when specifically 
plotting between two dates?
    Reply  19 Nov 2020, Stefan Ritt, Forum, History plot consuming too much memory 
The history code is right now programmed in such a way that when you request
an old time window, all data from that window until the present date
gets loaded. When we implemented this, it worked fine for data ranges of 
several years with a delay of just a few seconds. Of course one can only
load that specific window, but when the user then scrolls right, one has to
append new data to the "right side" of the array stored in the browser. If the
user jumps to another location, then the browser has to keep track of which 
windows are loaded and which windows not, making the history code much more 
complicated. Therefore I'm only willing to spend a few days of solid work
if this really becomes a problem. 

Are you sure that the delay comes from the browser or actually from mhttpd
digging through GBytes of history data? I realized that you need solid state
disks to get a really quick response.

Stefan
       Reply  20 Nov 2020, Joseph McKenna, Forum, History plot consuming too much memory 
Poking at the behavior of this, it's fairly clear that the slow response is from the data 
being loaded off an HDD; when we upgrade this system we will allocate enough SSD 
storage for the histories.

Using Firefox has resolved this issue for the user's project here.

Taking this down a tangent, I have a mild concern that a user could temporarily 
flood our gigabit network if we do have faster disks to read the history data. Have 
there been any plans or thoughts on limiting the bandwidth users can pull from 
mhttpd? I do not see this as a critical item as I can plan the future network 
infrastructure at the same time as the next system upgrade (putting critical data 
taking traffic on a separate physical network).

> Of course one can only
> load that specific window, but when the user then scrolls right, one has to
> append new data to the "right side" of the array stored in the browser. If the
> user jumps to another location, then the browser has to keep track of which 
> windows are loaded and which windows not, making the history code much more 
> complicated. Therefore I'm only willing to spend a few days of solid work
> if this really becomes a problem. 

For now the user here has retrieved all the data they need, and I can direct others 
towards mhist in the near future. Being able to load just a specific window would be 
very useful in the future, but I comprehend how it would be a spike in complexity.
          Reply  20 Nov 2020, Stefan Ritt, Forum, History plot consuming too much memory 
> Taking this down a tangent, I have a mild concern that a user could temporarily 
> flood our gigabit network if we do have faster disks to read the history data. Have 
> there been any plans or thoughts on limiting the bandwidth users can pull from 
> mhttpd?

I guess this will be limited not by the network but by the CPU of the mhttpd process. But I'm 
not 100% sure; it depends on the actual hardware. But even if we improve the history 
retrieval to "window only", the user could request all data from 2010 to 2020. So one 
would need some code which estimates the amount of data and then tells the user "do you really 
want that?". But still, a novice user can simply click "yes" without much thought. So 
in conclusion I believe proper user training is better than software limits. Like the 
other guy: "I did 'rm -rf /', and now nothing works any more, can you help?".

Stefan
          Reply  27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory 
>
> Taking this down a tangent, I have a mild concern that a user could temporarily 
> flood our gigabit network if we do have faster disks to read the history data.
>

By my measurements, right now our javascript code can reach 30-50-70% of Gige ethernet
bandwidth, so, no, we cannot flood the network just by making history plots.

(we cannot reach 100% because javascript code is not multithreaded,
it cycles through "request new data" and "decode javascript, make plot" states,
and the network is idle in this second state).

>
> Have there been any plans or thoughts on limiting the bandwidth users can pull from 
> mhttpd?
>

10gige networking is here (and 5 and 2.5 Gige, too). I would not worry too much
about saturating 1gige network interfaces.

>
> I do not see this as a critical item as I can plan the future network 
> infrastructure at the same time as the next system upgrade (putting critical data 
> taking traffic on a separate physical network).
>

10gige network between all computers, everything on SSD ZFS arrays, except
bulk data on ZFS HDD arrays (only for cost reasons $$$/TB).

K.O.
       Reply  27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory 
> 
> Are you sure that the delay comes from the browser or actually from mhttpd
> digging through GBytes of history data?
>

I think we will need to address this question "head-on". The history plot
will need to display the following information:

"time to load data from disk: N seconds, time to transfer data to javascript: M 
seconds, time to make the plot: Q seconds".

The second and third items are already available, the first one will need
to be computed in mhttpd and passed to javascript.

K.O.
    Reply  27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory 
> 
> Tested with midas-2020-08-a up until the HEAD of develop
> 

Just so you know, it took Stefan and me quite a bit of effort
to improve memory and data handling in the new history plots
to be able to plot 1 year of data without bogging down too much. I got
to learn the google-chrome javascript cpu profiler, the memory profiler,
and the intricacies of the javascript shift() and unshift() methods.

Before midas-2020-08-a, pressing the zoom-out button would never
reach the javascript memory limit: the code would go into "100% cpu use"
and the browser tab would become progressively unresponsive well before
running out of memory. With the original code, our alpha-g history plots
could go back a few weeks at most; with the current code, we can go back
about 11 months. Compare that to the old "C" history plots, which can
do "last 10 years", no problem.

Loading all the history data into the browser is a design choice.

It has benefits and downsides.

The main benefit is that looking at immediate live data is much easier.

The main downside is that "plot last 10 years" becomes impossible.

As they say "appetite comes during eating", we have learned about these
downsides as we developed the new system. When we started, we did not
know much about javascript memory limits, cpu limits, etc. We did learn
a lot, though.

With the current code, we are limited to loading history data up to 50% of
the javascript memory limit. I know how to change the code to get up to 100%,
but I think it is not worth it: it still does not get us to plotting "last 10 years".

We think the solution to recovering "last 10 years" capability is to use
binned data (which the history system can already deliver to javascript).
With binned data, the data volume in Mbytes remains constant, javascript
memory use has an upper-bound (we never use more memory than X Mbytes)
and data movement over the network is reduced.

Another way to look at this: a typical display has only 1000-4000 horizontal pixels,
so it cannot physically display a bigger number of data points (no more
than 1 data point per pixel). So why load 1000000 data points when we can only
plot 1000-4000 of them?
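
As an illustration, here is a minimal sketch (not the actual midas code) of min/mean/max
rebinning; the output size depends only on the number of bins, not on the time range:

#include <vector>
#include <algorithm>

struct Bin { double vmin, vmax, vmean; int n; };

// rebin points (t[i], v[i]) into nbins equal time bins;
// assumes t[] is sorted and spans a nonzero time range
std::vector<Bin> rebin(const std::vector<double>& t,
                       const std::vector<double>& v, int nbins)
{
   std::vector<Bin> bins(nbins, Bin{0, 0, 0, 0});
   double t0 = t.front(), dt = (t.back() - t0) / nbins;
   for (size_t i = 0; i < t.size(); i++) {
      int b = std::min(nbins - 1, (int)((t[i] - t0) / dt));
      Bin& bin = bins[b];
      if (bin.n == 0)
         bin.vmin = bin.vmax = v[i];
      else {
         bin.vmin = std::min(bin.vmin, v[i]);
         bin.vmax = std::max(bin.vmax, v[i]);
      }
      bin.vmean += v[i]; // accumulate the sum, divide at the end
      bin.n++;
   }
   for (auto& bin : bins)
      if (bin.n) bin.vmean /= bin.n;
   return bins;
}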

So all the infrastructure for plotting binned data is already there,
but the javascript code still needs to be written. I think the biggest
challenge will be in blending or combining binned and unbinned data
on the same plot or in seamlessly switching the plot between binned and
unbinned data.

K.O.
       Reply  27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory 
>
> With the current code, we are limited to loading history data up to 50% of
> the javascript memory limit.
>

The javascript memory limit itself seems to be a moving target (google "javascript 
memory limit", and good luck!).

Historically, javascript did not have any memory or cpu use limits, but with
the rise of abusive web sites, bitcoin miners, etc, I see browsers clamp down
on allowed/allocated CPU use (inactive tabs are throttled down). Memory use
is already clamped down severely: on a 64 GB computer, a browser tab
can only allocate a handful of GBs.

This throttling of browser tabs is already intrusive enough that we need
to be careful in programming midas web pages. For example, throttled events
do not fire at the same rate or in the same order as in active tabs.

One logical conclusion of these restrictions could be that, eventually,
google-chrome permits only just enough cpu and memory to run gmail.

K.O.
Entry  13 Oct 2020, Soichiro Kuribayashi, Info, About remote control of front end part of MIDAS on chip 
Hello!

My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University. 
I'm a T2K collaborator and working for Super FGD which is new detector in ND280.

I'm a beginner with MIDAS and I've just started to develop the DAQ software with 
MIDAS for Super FGD.
For the DAQ of Super FGD, we will remotely run the front end part of MIDAS on ZYNQ, 
which is a system on chip.

For this remote control of the front end part with mserver, we have to mount the home 
directory of the DAQ PC (CentOS 8) on that of the Linux on ZYNQ.
So I wonder if we should use NFS (Network File System) + NIS (Network Information 
Service) + autofs for the mounting. Is that correct?

If you have any information or any suggestion for the remote control on chip, 
please let me know.

Best regards,
Soichiro 
    Reply  13 Oct 2020, Konstantin Olchanski, Info, About remote control of front end part of MIDAS on chip 
> My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University. 
> I'm a T2K collaborator and working for Super FGD which is new detector in ND280.

Hi! I did much of the DAQ software for the original FGD. I hope I can help.

> For the DAQ of Super FGD, we will run remotely front end part of MIDAS on ZYNQ 
> which is system on chip.

This would be the same as the existing FGD. Inside the FGD DCC is a Virtex4 FPGA
with a 300MHz PPC CPU running Linux from a CompactFlash card (Kentaro-san did this 
part). On this linux system runs the FGD DCC midas frontend. It connects
to the FGD midas instance using the mserver. This frontend executable is
copied to the DCC using "scp"; there is no common nfs-mounted home directory.

> For this remote control of front end part with mserver, we have to mount home 
> directory of DAQ PC(Cent OS8) on that of Linux on ZYNQ.
> So I wonder if we should use NFS(Network file system) + NIS(Network information 
> service) + autofs for the mounting. Is it correct?

Since you have a bigger SOC and you can run pretty much a complete linux,
I do recommend that you go this route. During development it is very convenient
to have common home directories on the main machine and on the frontend fpga
machines.

But this is not necessary. The midas mserver connection does not require a
common (nfs-mounted) home directory: you can copy the files to the frontend
fpga using scp and rsync, and you can use the gdb "remote debugger" function.

I can also suggest that on your frontend SOC/FPGA machine, you boot linux
using the "nfs-root" method. This way, the local flash memory only
contains a boot loader (and maybe the linux kernel image, depending on
bootloader limitations). The rest of the linux rootfs can be on your
central development machine. This way, management of flash cards, confusion
about different contents of local flash, and the need to make backups
of frontend machines are all much reduced.
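
With nfs-root, the bootloader only needs to pass the right arguments to the linux kernel,
something like this (a sketch; server IP and export path are placeholders):

root=/dev/nfs nfsroot=192.168.1.10:/export/zynq01,tcp ip=dhcp rw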

If you use a fast SSD and ZFS with deduplication, you will also see a good
performance gain (NFS over a 1gige network to a server with a fast SSD works
so much better compared to the very slow SD/MMC/NAND flash).

I can point you to some of my documentation how we do this.

>
> If you have any information or any suggestion for the remote control on chip, 
> please let me know.
> 

I would say you are on a good track. For early development on just one board,
pretty much any way you do it will work, but once you start scaling up
beyond 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
home directories, NFS-root booted linux, etc.

And of course you may want to study the existing ND280/FGD DAQ. I hope you
have access to the running system at Jparc. If not, I have a copy of
pretty much everything (except for running hardware, it is stored in the basement, 
dead) and I can give you access.

P.S. This reminds me that the cascade software from ND280 (the key part
for connecting the FGD, the TPC, the slow controls & etc into one experiment)
was never merged into the midas repository. I opened a ticket for this,
now we will not forget again:

https://bitbucket.org/tmidas/midas/issues/291/import-cascase-frontend-from-t2k-nd280-fgd

K.O.
       Reply  13 Oct 2020, Soichiro Kuribayashi, Info, About remote control of front end part of MIDAS on chip 
Dear Konstantin,

Thank you very much for your reply and detailed information.
I would appreciate if you could help us.

> I can also suggest that on your frontend SOC/FPGA machine, you boot linux
> using the "nfs-root" method. This way, the local flash memory only
> contains a boot loader (and maybe the linux kernel image, depending on
> bootloader limitations). The rest of the linux rootfs can be on your
> central development machine. This way management of flash cards,
> confusion with different contents of local flash and need to make backups
> of frontend machines is much reduced.

As you said, we can run a complete Linux (Ubuntu 16) on ZYNQ and I'm using a common NFS 
system now. However, I didn't know about the "nfs-root" method which you mentioned, and it 
seems to be a reasonable way to share just the linux rootfs.
First of all, I will try this method on a simpler system.

> If you use a fast SSD and ZFS with deduplication, you will also have good
> performance gain (NFS over 1gige network to server with fast SSD works
> so much better compared to the very slow SD/MMC/NAND flash).
>
> I can point you to some of my documentation how we do this.

I'm concerned about such performance, and I have roughly checked the performance of common NFS 
over a gige network with my DAQ PC (data transfer rate ~ O(10) MByte/sec). However, I 
didn't know about ZFS, or how we can gain performance with a fast SSD and ZFS.
Please point me to your documentation on how to do it if possible.

> I would say you are on a good track. For early development on just one board,
> pretty much any way you do it will work, but once you start scaling up
> beyound 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
> home directories, NFS-root booted linux, etc.

I'm developing with just one board and a common NFS mount now. I'm looking forward to 
seeing such benefits when I use multiple frontends.
 
> And of course you may want to study the existing ND280/FGD DAQ. I hope you
> have access to the running system at Jparc. If not, I have a copy of
> pretty much everything (except for running hardware, it is stored in the basement, 
> dead) and I can give you access.

I don't have access to the system at Jparc, but Nick has told us where the FGD DAQ code is.
Is the URL below all of the FGD DAQ code?
https://git.t2k.org/hastings/fgddaq/-/tree/master

Best regards,
Soichiro
          Reply  20 Oct 2020, Stefan Ritt, Info, About remote control of front end part of MIDAS on chip 
We also use a Zynq chip and boot in the following order:

1. SD Card
   a. First Stage Bootloader
   b. PL Firmware
   c. UBOOT
2. NFS over Ethernet
   a. Linux kernel
   b. RootFS
   c. Mounting home directories


If you need details I can bring you in contact with the person who actually implemented that.

Best,
Stefan
             Reply  21 Oct 2020, Soichiro Kuribayashi, Info, About remote control of front end part of MIDAS on chip 
Dear Stefan,

Thank you very much for your help.

I have already contacted someone who has used ZYNQ in that order and it's working fine for now.
But I'll let you know if something goes wrong.

Best regards,
Soichiro 
Entry  29 Sep 2020, Amy Roberts, Forum, using python client to start and stop run 
I'm using a python client to start and stop runs, and the following code *appears* 
to set the MIDAS state to "Run"

client.odb_set("/Runinfo/State", 3)

However, it doesn't seem to do other things associated with a run, like start 
accumulating events.

Is there a different way I should start the run from the python client?

Thanks!
    Reply  29 Sep 2020, Ben Smith, Forum, using python client to start and stop run 
The ODB variable "/Runinfo/State" is a symptom of starting/stopping a run, rather than the cause.

In C++, one uses `cm_transition()` to start/stop runs.

In python code you can use the `start_run()` and `stop_run()` functions from `midas.client`: https://bitbucket.org/tmidas/midas/src/00ff089a836100186e9b26b9ca92623e672f0030/python/midas/client.py#lines-793:808
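
For reference, a minimal sketch of the C++ route (cm_transition() is declared in midas.h; this assumes the client is already connected via cm_connect_experiment(), and passing 0 as the run number should make midas pick the next one - check the cm_transition() documentation):

char err[256];
// TR_SYNC = wait for the transition to complete before returning
int status = cm_transition(TR_START, 0, err, sizeof(err), TR_SYNC, FALSE);
if (status != CM_SUCCESS)
   printf("Cannot start run: %s\n", err);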
       Reply  06 Oct 2020, Konstantin Olchanski, Forum, using python client to start and stop run 
> The ODB variable "/Runinfo/State" is a symptom of starting/stopping a run, rather than the cause.
> 
> In C++, one uses `cm_transition()` to start/stop runs.
> 
> In python code you can use the `start_run()` and `stop_run()` functions from `midas.client`: https://bitbucket.org/tmidas/midas/src/00ff089a836100186e9b26b9ca92623e672f0030/python/midas/client.py#lines-793:808

one can also run an external command: "mtransition START" and "mtransition STOP"

K.O.
Entry  02 Sep 2020, Ruslan Podviianiuk, Forum, Transition status message issue.png
Hello,

I got an error after the start of a run, and it would be good to show this error (or 
errors) in the UI that I am developing. I see this error in the Transition 
directory (please see the attached file). Is it possible to read the status 
message and error messages from the Transition directory using jsonrpc? If yes, 
could you please explain to me how to do this?

Thank you.
Ruslan  
    Reply  02 Sep 2020, Ben Smith, Forum, Transition status message 
The information you want is in the ODB:
* "/System/Transition/status" is the overall integer status code.
* "/System/Transition/error" is the overall error message string.

There is also per-client status information in the ODB:
* "/System/Transition/Clients/<client_name>/status"
* "/System/Transition/Clients/<client_name>/error"
       Reply  02 Sep 2020, Ruslan Podviianiuk, Forum, Transition status message 
> The information you want is in the ODB:
> * "/System/Transition/status" is the overall integer status code.
> * "/System/Transition/error" is the overall error message string.
> 
> There is also per-client status information in the ODB:
> * "/System/Transition/Clients/<client_name>/status"
> * "/System/Transition/Clients/<client_name>/error"


Thank you so much, Ben!
          Reply  08 Sep 2020, Konstantin Olchanski, Forum, Transition status message 
> > The information you want is in the ODB:
> > * "/System/Transition/status" is the overall integer status code.
> > * "/System/Transition/error" is the overall error message string.
> > 
> > There is also per-client status information in the ODB:
> > * "/System/Transition/Clients/<client_name>/status"
> > * "/System/Transition/Clients/<client_name>/error"

You can also use web page .../resources/transition.html as an example of how
to read transition (and other) data from ODB into your own web page. example.html
may also be helpful.

K.O.
             Reply  08 Sep 2020, Ruslan Podviianiuk, Forum, Transition status message 
> > > The information you want is in the ODB:
> > > * "/System/Transition/status" is the overall integer status code.
> > > * "/System/Transition/error" is the overall error message string.
> > > 
> > > There is also per-client status information in the ODB:
> > > * "/System/Transition/Clients/<client_name>/status"
> > > * "/System/Transition/Clients/<client_name>/error"
> 
> You can also use web page .../resources/transition.html as an example of how
> to read transition (and other) data from ODB into your own web page. example.html
> may also be helpful.
> 
> K.O.

Thank you Konstantin!

Ruslan
Entry  08 Sep 2020, Zaher Salman, Forum, json parser error 
I am getting the following error alert in a custom page whenever a run starts

json parser exception: SyntaxError: Unexpected token < in JSON at position 985, batch request: method: "db_get_values", params: [object Object], id: 1598691925697 method: "get_alarms", params: null, id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697

Does anyone know why and what causes this? This does not affect anything and things seem to continue running fine.

thanks.
    Reply  08 Sep 2020, Konstantin Olchanski, Forum, json parser error 
> I am getting the following error alert in a custom page whenever a run starts
> json parser exception: SyntaxError: Unexpected token < in JSON at position 985, batch request: method: "db_get_values", params: [object Object], id: 1598691925697 method: "get_alarms", params: null, id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697
> Does anyone know why and what causes this? This does not affect anything and things seem to continue running fine.

this is bug #242, https://bitbucket.org/tmidas/midas/issues/242/mjsonrpc-calls-should-return-valid-utf8

we read stuff from midas.log and push it to the web browser. we have seen this stuff
contain arbitrary binary data (both intentionally written into midas.log by cm_msg() and
file content corruption/truncation from computer crashes), the json decoder in the web browser
does not like that stuff - it is invalid utf-8 unicode - and throws an exception.

since we cannot ensure that the contents of midas.log (and other files on disk) are always valid utf-8,
we have to sanitize them before sending them to the browser.

right now I am not sure of the best way to do this sanitizing. we do have a function to check
for valid utf-8 unicode, perhaps it should be extended to replace invalid unicode with spaces
or Xes or "?" or whatever, I am open to suggestions and ideas.

BTW, this is a recent change to how strings generally work. C NUL-terminated strings are
permitted to contain arbitrary binary data (except for NUL char, of course). C++ std::string
are permitted to contain arbitrary binary data. but javascript strings are only permitted
to contain valid unicode, and the json standard was recently amended to require that json
strings are valid utf-8 unicode. So there is a disconnect between C/C++ code written in the
last 50 years where strings can contain binary data and the javascript world requiring
valid utf-8 unicode pretty much everywhere.

K.O.
Entry  21 Aug 2020, Ruslan Podviianiuk, Forum, time information Running_time.png
Hello,

I have a few questions about time information:
1. Is it possible to get "Running time" using, for example, jsonrpc? (please see 
the attached file)
2. Is it possible to configure "Start time" and "Stop time" with a time zone? For 
example, when I start a new run, the value of the "Start time" key is automatically changed 
to "Fri Aug 21 12:38:36 2020", without a time zone.

Thank you.
    Reply  24 Aug 2020, Stefan Ritt, Forum, time information 
> 1. Is it possible to get "Running time" using, for example, jsonrpc? (please see 
> the attached file)

You have in the ODB "/Runinfo/Start time binary" which is measured in seconds since 
1970. By subtracting this from the current time, you get the running time.

> 2. Is it possible to configure "Start time" and "Stop time" with time zone? For 
> example when I start a new run, value of "Start time" key is automatically changed 
> to "Fri Aug 21 12:38:36 2020" without time zone. 

"Start time binary" and "Stop time binary" are in seconds since the 1970 in UTC, so no 
time zone involved there. The ASCII versions of the start/stop time are derived from 
the binary time using the server's local time zone. If you want to display them in a 
different time zone, you have to create a custom page and convert it to another time 
zone using JavaScript like

var d = new Date(start_time_binary * 1000);  // JavaScript Date expects milliseconds

Stefan
       Reply  25 Aug 2020, Ruslan Podviianiuk, Forum, time information 
Thank you, Stefan

Ruslan 



> > 1. Is it possible to get "Running time" using, for example, jsonrpc? (please see 
> > the attached file)
> 
> You have in the ODB "/Runinfo/Start time binary" which is measured in seconds since 
> 1970. By subtracting this from the current time, you get the running time.
> 
> > 2. Is it possible to configure "Start time" and "Stop time" with time zone? For 
> > example when I start a new run, value of "Start time" key is automatically changed 
> > to "Fri Aug 21 12:38:36 2020" without time zone. 
> 
> "Start time binary" and "Stop time binary" are in seconds since the 1970 in UTC, so no 
> time zone involved there. The ASCII versions of the start/stop time are derived from 
> the binary time using the server's local time zone. If you want to display them in a 
> different time zone, you have to create a custom page and convert it to another time 
> zone using JavaScript like
> 
> var d = new Date(start_time_binary);
> 
> Stefan
Entry  24 Aug 2020, Konstantin Olchanski, Release, midas-2020-12 
midas-2020-12-a is here.

new features and notable updates since midas-2020-03:

- new C++ ODB interface odbxx.h
- image history
- much improved history plots
- new sequencer pages
- UTF-8 clean ODB (complains if any TID_STRING is invalid UTF-8)
- mhttpd update to mongoose 6.16 with much improved multithreading
- mhttpd update to use MBEDTLS in preference to problematic OpenSSL
- MidasConfig.cmake contributed by Mathieu Guigue

plans for next development: major update of mlogger to simplify channel 
configuration in odb, improvements to mhttpd multithreading, new history plot 
configuration page, more c++ification.

To obtain this release, either checkout the top of branch release/midas-2020-08 
(recommended) or checkout the tag midas-2020-08-a.

K.O.
Entry  28 Aug 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices fedummy.cxx Makefile
Hello experts,

I have been writing an SC frontend for a power supply. I have used the model 
where the frontend can be started with the "-i n" option so that each fe can 
control a different supply. During the development/testing of the program I 
would normally only run a single instance with "-i 1". However when I started
a second instance with "-i 2" I found problems with the history plots that
were being made for the original "-i 1" instance. The variable being plotted
seemed to randomly jump between the value from the "-i 1" instance and 
the "-i 2" instance.  confirmed that the "correct" values exist for each 
frontend in the odb under /Equipment/Foo01/Variables and 
/Equipment/Foo02/Variables

This is not just a plotting artifact, since I was also
able to see the two different values by running mhist.

I saw this behaviour using midas-2019-03 and also the head of the development
branch (686e4de2b55023b0d1936c60bcf4767c5e6caac0 from just under 48 hours ago). 

I was able to reproduce this with a stripped down frontend that just 
sets a variable that is equal to its frontend_index. Please find the code 
and Makefile attached. Presumably I've done something wrong in my 
implementation that hopefully a more experienced person can spot quite 
quickly, but please let me know if any more information is needed.

I have seen this behaviour on both Debian 10 and on a CentOS 7 Singularity 
image running on top of Debian 10.

Thanks,

Nick.

P.S. I made the topic of this post "Forum" and not "Bug Report" since I
expect the root of this problem is somewhere between the keyboard and chair.
    Reply  28 Aug 2019, Stefan Ritt, Forum, History plot problems for frontend with multiple indices 
My first question would be: why are you using several front-ends at all? That makes things more 
complicated than needed. In the normal FE framework, you can define either several equipment 
served by one frontend, or even one equipment linked to several devices. In the MEG experiment 
we have one slow control frontend controlling ~100 devices without problem. In the old days there 
was a problem that some slow devices could throttle the readout, but since the invention of multi-
threaded slow control equipment, each device gets its own thread so they don't block each other.

Stefan
       Reply  28 Aug 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices 
Hi Stefan,

thanks for you quick reply.

> My first question would be why are you using several font-ends at all?

Because I was following the model used for many of the frontends for the ND280 FGD.

> That makes things more 
> complicated than needed. In the normal FE framework, you can define either several equipment 
> served by one frontend, or even one equipment linked to several devices. In the MEG experiment 
> we have one slow control frontend controlling ~100 devices without problem. In the old days there 
> was a problem that some slow devices could throttle the readout, but since the invention of multi-
> threaded slow control equipment, each device gets its own thread so they don't block each other.

Perhaps things have changed in the 10 years since the FGD SC code was written. I can do it 
differently, but doing it that way seemed natural since around 90% of the frontend code that I
have seen does it that way.

Nick.
          Reply  28 Aug 2019, lcp, Forum, History plot problems for frontend with multiple indices 
hi, 

> > That makes things more 
> > complicated than needed. In the normal FE framework, you can define either several equipment 
> > served by one frontend, or even one equipment linked to several devices. In the MEG experiment 
> > we have one slow control frontend controlling ~100 devices without problem. In the old days there 
> > was a problem that some slow devices could throttle the readout, but since the invention of multi-
> > threaded slow control equipment, each device gets its own thread so they don't block each other.
> 

I agree with Stefan that it's probably better to run a multi-threaded setup than individual frontends.

The only place I've ever used the frontend index on startup is when I was testing and building
an eventbuilder.

https://midas.triumf.ca/MidasWiki/index.php/Event_Builder#Example

This might explain why your history is swapping between frontends, as in the event builder it gets
reconstructed.

Maybe this helps...

LCP


> Perhaps things have changed in the 10 years since the FGD SC code was written. I can do it 
> differently but doing it that way seemed naturual since around 90% of the frontend code that I
> have see does it that way.
             Reply  16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices 
> it's probably better to run a multi-threaded setup, than individual frontends.

I recommend against using multiple threads if at all possible and unless absolutely required.

Only for one reason: multithreaded c++ programs are notoriously hard to debug.

In addition, one has to face several classes of bugs absent in single-threaded applications:

a) which thread "owns" which object
b) locking of all shared data
c) huge overheads from locking at high data rates (a performance bug)
d) correct locking order, dead locks, live locks
e) incomprehensible core dumps and stack traces
f) race conditions

To control 2 power supplies, run 2 frontend programs, 1 per power supply.

To control 64 frontend cards, run 1 frontend with many threads: 64 (per device) + 1 (main thread) + 1 (RPC handler) + 1 
(watchdog) + 1 (common event generator/data transmitter) + 1 (odb/web page status update). You *will* bump into each 
and every one of the problems (a) to (f) above.

K.O.
       Reply  16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices 
> My first question would be why are you using several font-ends at all? That makes things more 
> complicated than needed. In the normal FE framework, you can define either several equipment 
> served by one frontend, or even one equipment linked to several devices.

I am the culprit here, as I wrote the original code for T2K/ND280 that Nick is looking at.

At the time, we needed to control multiple units of identical equipment. Most of these equipments
needed to be controlled independently from each other, so we could not/did not want to use
one single frontend executable to control all of them at the same time. For example, for equipment
not in use, we can stop the corresponding frontend. In case of trouble, we can restart
the corresponding frontend without disrupting the frontends for the other equipments.

The successful operation of the T2K/ND280 experiment is sufficient defence for the validity of this approach.

One lesson learned was that the MIDAS frontend framework did not make it easy to have multiple identical frontends 
for controlling multiple identical equipments. (typical use is control of 2-3 Wiener power supplies, 1-2-3 UPS 
devices, etc). At the time (and today), only the "-i NNN" flag is available to tell the frontend "who am I?". To make it 
work, one has to use the hard-to-read "%02d" stuff in the equipment name, and there are other complications. For my 
"next generation" of frontends, I tried to specialize the frontend executables at compile time using C/C++ 
preprocessor defines (-Dwiener01, -Dwiener02, etc). This worked better, but I'm still not super happy.

My current solution as implemented by the tmfe frontend framework is to give the user full control
over the command line arguments (mfe.c did not permit any "user arguments" and did not allow
access to argc/argv) and full control over the equipment names (mfe.c equipment names are fixed at compile time).

K.O.
    Reply  29 Aug 2019, Ben Smith, Forum, History plot problems for frontend with multiple indices 
Hi Nick,

I confirm that this issue appears when using the MIDAS history driver. The issue does not appear when using the MYSQL history driver.

One solution is to give each frontend instance a different Event ID (see example code below for doing this in frontend_init). The history system did still seem to be confused by the existing FeDummy equipments/events even after making this change. However, after changing EQ_NAME from FeDummy to FeDum (i.e. starting from a clean state history-wise) things behaved normally.

I will note that some experiments definitely have a need for the "-i" method, especially those that run on distributed clusters.

Ben


```
INT frontend_init()
{
   sprintf(eq_name, "%s%02d", EQ_NAME, get_frontend_index());

   // Ensure each FE gets a different Event ID in the history system (951, 952 etc)
   char keyname[128];
   HNDLE hkey;
   int status;
   sprintf(keyname, "/Equipment/%s/Common/Event ID", eq_name);
   status = db_find_key(hDB, 0, keyname, &hkey);
   if (status != DB_SUCCESS) abort();

   WORD new_ev_id = 950 + get_frontend_index();
   status = db_set_data_index(hDB, hkey, &new_ev_id, 2, 0, TID_WORD);
   if (status != DB_SUCCESS) abort();
   return SUCCESS;
}
```
       Reply  01 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices 
Hi Ben,

thanks for your reply. I can confirm that your suggested workaround does indeed
make the problem disappear.

I guess this issue hasn't been seen at T2K since we use MYSQL for the history.

Thanks,

Nick.
          Reply  16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices 
> thanks for your reply. I can confirm that your suggested workaround does indeed
> make the problem dissapear.
> I guess this issue hasn't been seen at T2K since we use MYSQL for the history.

I think you found the source of the problem: confused event id assignments. To confirm,
can you email me (or post here) the output of odbedit "ls -l /History/Events".

If that's the problem, you can avoid it completely by switching to a history storage method
that does not rely on magic mapping between equipment names and numeric event id's:
try the "FILE" method (set odb /Logger/History/FILE/Active to "y", restart the logger) or
the "MYSQL" method (you will need to setup a mysql database). You tell mhttpd and mhist which 
history data to read by setting ODB /History/LoggerHistoryChannel to one of the channel names 
from /logger/history/, restart mhttpd. (mhttpd and mhist used to print a message "reading history 
data from channel XXX", but somebody removed this message).

K.O.
             Reply  16 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices 
Hi Konstantin,

thanks for your reply.

> > thanks for your reply. I can confirm that your suggested workaround does indeed
> > make the problem dissapear.
> > I guess this issue hasn't been seen at T2K since we use MYSQL for the history.
> 
> I think you found the source of the problem, confused event id assignments. To confirm,
> can you email me (or post here) the output of odbedit "ls -l /History/Events".

Sorry, do you want this from after I've applied the fix suggested by Ben, or with the original code 
that I posted?

With the original code it only shows one fe even though both are running:

[local:e666:S]History>ls -l /History/Events
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
1                               STRING  1     10    2m   0   RWD  FeDummy02
0                               STRING  1     16    2m   0   RWD  Run transitions

[local:e666:S]History> scl
Name                Host
mhttpd              localhost           
fedummy01           localhost           
fedummy02           localhost           
ODBEdit             localhost           
Logger              localhost           
[local:e666:S]History>ls "/History/Display/Default/Dummy/
Timescale                       1h
Zero ylow                       n
Show run markers                y
Show values                     y
Sort Vars                       n
Log axis                        n
Minimum                         0
Maximum                         0
Variables
                                FeDummy01:Data
                                FeDummy02:Data
Label
                                
                                
Colour
                                #00AAFF
                                #FF9000
Factor
                                0
                                0
Offset
                                0
                                0
Buttons
                                10m
                                1h
                                3h
                                12h
                                24h
                                3d
                                7d
Formula
                                
                                
Show old vars                   n

> If that's the problem, you can avoid it completely by switching to a history storage method
> that does not rely on magic mapping between equipment names and numeric event id's:
> try the "FILE" method (set odb /Logger/History/FILE/Active to "y", restart the logger) or
> the "MYSQL" method (you will need to setup a mysql database). You tell mhttpd and mhist which 
> history data to read by setting ODB /History/LoggerHistoryChannel to one of the channel names 
> from /logger/history/, restart mhttpd. (mhttpd and mhist used to print a message "reading history 
> data from channel XXX", but somebody removed this message).

Using the original code I posted and switching from MIDAS history to FILE history did not seem to 
change the random behaviour in the history plots.

Regards,

Nick.
                Reply  17 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices 
> [local:e666:S]History>ls -l /History/Events
> Key name                        Type    #Val  Size  Last Opn Mode Value
> ---------------------------------------------------------------------------
> 1                               STRING  1     10    2m   0   RWD  FeDummy02
> 0                               STRING  1     16    2m   0   RWD  Run transitions

Something is very broken. There should be more entries here, at least
there should be entries for "FeDummy01" and usually there is also an entry
for "FeDummy" because one invariably runs fedummy without "-i" at least once.

The fact that changing from "midas" storage to "file" storage makes no difference
also indicates that something is very broken.

I want to debug this.

Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
from the "file" storage.

K.O.
                   Reply  18 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices 
Hi Konstantin,

> > [local:e666:S]History>ls -l /History/Events
> > Key name                        Type    #Val  Size  Last Opn Mode Value
> > ---------------------------------------------------------------------------
> > 1                               STRING  1     10    2m   0   RWD  FeDummy02
> > 0                               STRING  1     16    2m   0   RWD  Run transitions
> 
> Something is very broken. There should be more entries here, at least
> there should be entries for "FeDummy01" and usually there is also an entry
> for "FeDummy" because one invariably runs fedummy without "-i" at least once.

This is a fresh experiment that I started just to test this issue, which is why there are not many 
entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
 
> The fact that changing from "midas" storage to "file" storage makes no difference
> also indicates that something is very broken.
> 
> I want to debug this.
> 
> Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
> from the "file" storage.

When I started this experiment yesterday(?) I disabled the Midas history when I enabled the file 
history. Just now I re-enabled the Midas history, so they are currently both active.

% ls -l work/online/{*.hst,mhf*.dat}
-rw-r--r-- 1 hastings hastings  14996 Sep 17 10:21 work/online/190917.hst
-rw-r--r-- 1 hastings hastings   3292 Sep 18 16:29 work/online/190918.hst
-rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
-rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
-rw-r--r-- 1 hastings hastings    166 Sep 17 10:17 work/online/mhf_1568683062_20190917_run_transitions.dat

And again, just as a sanity check:

% odbedit -c 'ls -l /History/Events'
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
1                               STRING  1     10    1m   0   RWD  FeDummy02
0                               STRING  1     16    1m   0   RWD  Run transitions

Regards,

Nick.
                      Reply  27 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices 
We should fix this for midas-2019-10.

https://bitbucket.org/tmidas/midas/issues/193/confusion-in-history-event-ids

K.O.





> Hi Konstantin,
> 
> > > [local:e666:S]History>ls -l /History/Events
> > > Key name                        Type    #Val  Size  Last Opn Mode Value
> > > ---------------------------------------------------------------------------
> > > 1                               STRING  1     10    2m   0   RWD  FeDummy02
> > > 0                               STRING  1     16    2m   0   RWD  Run transitions
> > 
> > Something is very broken. There should be more entries here, at least
> > there should be entries for "FeDummy01" and usually there is also an entry
> > for "FeDummy" because one invariably runs fedummy without "-i" at least once.
> 
> This is a fresh experiment that I started just to test this this issue, that is why there are not many 
> entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
>  
> > The fact that changing from "midas" storage to "file" storage makes no difference
> > also indicates that something is very broken.
> > 
> > I want to debug this.
> > 
> > Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> > with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
> > from the "file" storage.
> 
> When I started this experiment yesterday(?) I disabled the Midas history when I enbled the file 
> history. Jsut now I reenabled the Midas history, so they are currently both active.
> 
> % ls -l work/online/{*.hst,mhf*.dat}
> -rw-r--r-- 1 hastings hastings  14996 Sep 17 10:21 work/online/190917.hst
> -rw-r--r-- 1 hastings hastings   3292 Sep 18 16:29 work/online/190918.hst
> -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
> -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
> -rw-r--r-- 1 hastings hastings    166 Sep 17 10:17 
> work/online/mhf_1568683062_20190917_run_transitions.dat
> 
> And again, just as a sanity check:
> 
> % odbedit -c 'ls -l /History/Events'
> Key name                        Type    #Val  Size  Last Opn Mode Value
> ---------------------------------------------------------------------------
> 1                               STRING  1     10    1m   0   RWD  FeDummy02
> 0                               STRING  1     16    1m   0   RWD  Run transitions
> 
> Regards,
> 
> Nick.
                         Reply  24 Aug 2020, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices 
This turned out to be a tricky problem. I am adding a warning about it in mlogger. This should go into midas-
2020-07. Closing bug #193. K.O.


> We should fix this for midas-2019-10.
> 
> https://bitbucket.org/tmidas/midas/issues/193/confusion-in-history-event-ids
> 
> K.O.
> 
> 
> 
> 
> 
> > Hi Konstantin,
> > 
> > > > [local:e666:S]History>ls -l /History/Events
> > > > Key name                        Type    #Val  Size  Last Opn Mode Value
> > > > ---------------------------------------------------------------------------
> > > > 1                               STRING  1     10    2m   0   RWD  FeDummy02
> > > > 0                               STRING  1     16    2m   0   RWD  Run transitions
> > > 
> > > Something is very broken. There should be more entries here, at least
> > > there should be entries for "FeDummy01" and usually there is also an entry
> > > for "FeDummy" because one invariably runs fedummy without "-i" at least once.
> > 
> > This is a fresh experiment that I started just to test this this issue, that is why there are not many 
> > entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
> >  
> > > The fact that changing from "midas" storage to "file" storage makes no difference
> > > also indicates that something is very broken.
> > > 
> > > I want to debug this.
> > > 
> > > Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> > > with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
> > > from the "file" storage.
> > 
> > When I started this experiment yesterday(?) I disabled the Midas history when I enbled the file 
> > history. Jsut now I reenabled the Midas history, so they are currently both active.
> > 
> > % ls -l work/online/{*.hst,mhf*.dat}
> > -rw-r--r-- 1 hastings hastings  14996 Sep 17 10:21 work/online/190917.hst
> > -rw-r--r-- 1 hastings hastings   3292 Sep 18 16:29 work/online/190918.hst
> > -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
> > -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
> > -rw-r--r-- 1 hastings hastings    166 Sep 17 10:17 
> > work/online/mhf_1568683062_20190917_run_transitions.dat
> > 
> > And again, just as a sanity check:
> > 
> > % odbedit -c 'ls -l /History/Events'
> > Key name                        Type    #Val  Size  Last Opn Mode Value
> > ---------------------------------------------------------------------------
> > 1                               STRING  1     10    1m   0   RWD  FeDummy02
> > 0                               STRING  1     16    1m   0   RWD  Run transitions
> > 
> > Regards,
> > 
> > Nick.
Entry  12 Aug 2020, Yan Liu, Suggestion, adding db_get_mode to check access mode for keys 
Hello,

I am wondering if there is a function that checks the access mode for a key? I 
found the db_set_mode() function that allows me to set the access mode for a key, 
but failed to find its counterpart get function.

Thanks in advance,
Yan
    Reply  13 Aug 2020, Stefan Ritt, Suggestion, adding db_get_mode to check access mode for keys 
> Hello,
> 
> I am wondering if there is a function that checks the access mode for a key? I 
> found the db_set_mode() function that allows me to set the access mode for a key, 
> but failed to find its counterpart get function.
> 
> Thanks in advance,
> Yan


  KEY k;
  db_get_key(hDB, handle, &k);
  std::cout << k.access_mode << std::endl;

/Stefan
       Reply  13 Aug 2020, Yan Liu, Suggestion, adding db_get_mode to check access mode for keys 
Thank you!

Yan

> > Hello,
> > 
> > I am wondering if there is a function that checks the access mode for a key? I 
> > found the db_set_mode() function that allows me to set the access mode for a key, 
> > but failed to find its counterpart get function.
> > 
> > Thanks in advance,
> > Yan
> 
> 
>   KEY k;
>   db_get_key(hDB, handle, &k);
>   std::cout << k.access_mode << std::endl;
> 
> /Stefan