ID      Date           Author                  Topic         Subject
  2022   25 Nov 2020   Marco Francesconi   Suggestion   ODBSET wildcards with array keys in Sequencer files
Hi,
I guess the issue is in the "[?]" part of the command: the indexing is handled differently from the ODB path and does not
support "?".
Are you trying to set only the first 9 channels?
Could you try with "[*]" or "[0-9]" instead?

Marco

> I'm interested in using the matching feature for ODBSET explained on 
> https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
> array, like:
> 
> COMMENT "Ground the detectors"
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> 
> Currently I get an error when I try to run this script.  Is this expected?  Would it 
> be possible to implement matching for array values?
> 
> Thanks!
  2024   25 Nov 2020   Marco Francesconi   Suggestion   ODBSET wildcards with array keys in Sequencer files
I created some keys in my ODB to try to match yours.
The ODBSET commands you wrote all work fine for me (of course with different results), except for "/Detectors/Det*/Settings/Charge/Bias (V)*", which I will have to
look into.
In any case, the error message I'm getting is "could not match any key", not the one you are reporting.

Now I'm a bit puzzled:
Are you sure your ODB contains those keys?
Are you testing the ODBSET inside a more complex sequencer or on its own?

Maybe I can try to reproduce it using your ODB setup.
Could you send an ODB dump of the "/Detectors" folder using the "save" command of odbedit ("cd /Detectors" and then "save detector.odb")?

Best,

Marco


> The following all fail with "Cannot find ODB key "<key>""
> 
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> 
> 
> > Hi,
> > I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not 
> > support "?".
> > Are you trying to set only the first 9 channels?
> > Could you try with "[*]" or "[0-9]" instead?
> > 
> > Marco
> > 
> > > I'm interested in using the matching feature for ODBSET explained on 
> > > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an 
> > > array, like:
> > > 
> > > COMMENT "Ground the detectors"
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > > 
> > > Currently I get an error when I try to run this script.  Is this expected?  Would it 
> > > be possible to implement matching for array values?
> > > 
> > > Thanks!
  2038   30 Nov 2020   Marco Francesconi   Suggestion   ODBSET wildcards with array keys in Sequencer files
I totally agree that we should have a consistent format for array index expansion.
I had a look at the mjsonrpc code and found the function parse_array_index_list(...), which does this job.
I have a similar function (adapted from previous code) in odb.cxx called strarrayindex(...), which is designed for the same "consistency" purpose between odbedit and the sequencer.

Let me note a few points:
- mjsonrpc has a very different way to write the full array (no indexes given), while the sequencer currently requires "[*]" to do the same (otherwise it only changes the first value of the array)
- currently the sequencer and the underlying ODB calls use two indexes (which are the same if you want to write only one key), so we would need a serious rewrite to allow something like "ODBSET array[1,3,5]"
- if I understood the code correctly, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing: for example, if you are writing an array of n parameters on a DAQ
board, you will trigger the hotlink on that key n times
- in addition, the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how should it parse something like "[$a,1]" or "[$a*]"?

For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencer scripts or keeping a difference between the two implementations.
For the others I guess we can find a way out; however, that is a major modification, so I will put it on my to-do list for when I can find some free time.
In any case I would propose merging the two functions, so we only have to maintain a single implementation of the parsing (a rough sketch of the index expansion involved follows below).
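For illustration only, here is a minimal sketch (in no way the actual parse_array_index_list() or strarrayindex() code) of how an index expression like "[*]", "[0-9]" or "[1,3,5]" could be expanded into an explicit list of indices; the "$variable" cases discussed above are deliberately left out:

// Hypothetical helper: expand "[*]", "[a-b]" or "[i,j,k]" (brackets optional)
// into a list of array indices, given the array length.
#include <cstdlib>
#include <string>
#include <vector>

std::vector<int> expand_index_list(const std::string &expr, int array_length)
{
   std::vector<int> result;
   std::string s = expr;
   if (!s.empty() && s.front() == '[' && s.back() == ']')
      s = s.substr(1, s.size() - 2);                 // strip the brackets

   if (s.empty() || s == "*") {                      // "[*]" or nothing: whole array
      for (int i = 0; i < array_length; i++)
         result.push_back(i);
      return result;
   }

   size_t pos = 0;
   while (pos <= s.size()) {                         // comma-separated items
      size_t comma = s.find(',', pos);
      std::string item = s.substr(pos, comma == std::string::npos ? std::string::npos : comma - pos);
      size_t dash = item.find('-');
      if (dash != std::string::npos) {               // range "a-b"
         int a = atoi(item.substr(0, dash).c_str());
         int b = atoi(item.substr(dash + 1).c_str());
         for (int i = a; i <= b && i < array_length; i++)
            result.push_back(i);
      } else if (!item.empty()) {                    // single index "n"
         result.push_back(atoi(item.c_str()));
      }
      if (comma == std::string::npos)
         break;
      pos = comma + 1;
   }
   return result;
}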

I guess it's a good moment to brainstorm about this; let me know what you think.

Marco


> > The following all fail with "Cannot find ODB key "<key>""
> > 
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> > 
> 
> It would be cool if ODB pattern matching in the sequencer
> were consistent with the ODB pattern matching in the json-rpc
> interface for web pages:
> 
> https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
> 
> K.O.
  2218   16 Jun 2021   Marco Francesconi   Info   1000 Mbytes/sec through midas achieved!
As reported by Stefan, in MEG II we have very similar ethernet throughputs.
In total, we have 34 crates each with 32 DRS4 digitiser chips and a single 1 Gbps readout link through a Xilinx Zynq SoC.
The data arrives in push mode without any external intervention, the only throttling being an optional prescaling on the trigger rate.
We discovered the hard way that 1 Gbps throughput on Zynq is not trivial at all: the embedded ethernet MAC does not support jumbo frames (always read the fine print in the manuals!) and the embedded Linux ethernet stack seems to struggle when we go beyond 250 Mbps of UDP traffic.

Anyhow, even with the reduced speed, the maximum throughput at network input is around 8.5 Gbps, which passes through the Mikrotik switches mentioned by Stefan.
We had very bad experiences in the past with similar price-point switches, observing huge packet drops when the instantaneous switching capacity cannot cope with the traffic, but so far we are happy with the Mikrotik ones.

On the receiver side, we have the DAQ server with an Intel E5-2630 v4 CPU and a 10 Gbit connection to the network using an Intel X710 Network card.
In the past we also used a "cheap" 10 Gbit card from Tehuti, but the driver performance was so bad that it could not digest more than 5 Gbps of data.

The current frontend is based on the mfe.c scheme for historical reasons (the very first version dates back to 2015).
We opted for a monolithic multithreaded solution so we can reuse the underlying DAQ code for other experiments which may not have the complete Midas backend.
Just to mention them: one is the FOOT experiment (which afaik uses an adapted version of the Atlas DAQ) and the other is the LOLX experiment (for which we are soon going to ship a small 32-channel system using Midas to Canada).
A major modification with respect to Konstantin's scheme is that we need to calibrate all WFMs online so that a software zero suppression can be applied to reduce the final data size (that part is still to be implemented).
This requirement results in additional resource usage to parse the UDP content into floats and calibrate them.
Currently we have 7 packet-collector threads to digest the full packet flow (using recvmmsg), followed by an event-building stage that uses 4 threads, plus 3 more threads for WFM calibration.
The hardware puts a progressive packet number on each packet and a set of flags marking the start and end of the event; comparing the packet-number difference between the start and end of the event with the total number of packets received for that event makes it really easy to tell whether packet drops are happening (a small sketch of this check follows below).
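For illustration, a minimal sketch of that bookkeeping (hypothetical struct and field names, not our actual frontend code):

#include <cstdint>
#include <cstdio>

// One event being assembled from UDP packets: the hardware puts a progressive
// packet number on every packet and flags the first and last packet of an event.
struct EventAssembly {
   uint32_t first_packet_number;   // packet number carrying the "start of event" flag
   uint32_t last_packet_number;    // packet number carrying the "end of event" flag
   uint32_t received_packets;      // packets actually collected for this event
};

bool event_has_drops(const EventAssembly &ev)
{
   // inclusive range, hence the +1
   uint32_t expected = ev.last_packet_number - ev.first_packet_number + 1;
   if (ev.received_packets != expected) {
      printf("packet drop: expected %u packets, received %u\n",
             (unsigned)expected, (unsigned)ev.received_packets);
      return true;
   }
   return false;
}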

All the thread infrastructure was tested and we could digest the complete throughput; we still have to finalise the full 10 Gbit connection to Midas because the final system was installed only recently (April).
We are using the EQ_USER flag to push events into the mfe.c buffers with up to 4 threads, but I observed that above ~1.5 Gbps rb_get_wp() almost always returns DB_TIMEOUT and I'm forced to drop the event.
This conflicts with the measurements reported by Stefan (we were discussing this yesterday), so we are still investigating the possible cause.

It is difficult to report three years of development in a single elog entry; I hope I have put all the relevant points here.
It looks to me that we opted for very complementary approaches to high-throughput ethernet with Midas, and I think there are still a lot of details worth reporting.
In case someone organises some kind of "virtual workshop" on this, I'm willing to participate.
Best,

Marco


> In MEG II we also kind of achieved this rate. Marco F. will post an entry soon to describe the details. There is only one thing 
> I want to mention, which is our network switch. Instead of an expensive high-grade switch, we chose a cheap "Chinese" high-grade 
> switch. We have "rack switches", which are collector switch for each rack receiving up to 10 x 1GBit inputs, and outputting 1 x 
> 10 GBit to an "aggregation switch", which collects all 10 GBit lines from rack switches and forwards it with (currently a single 
> ) 10 GBit line. For the rack switch we use a 
> 
> MikroTik CRS354-48G-4S+2Q+RM 54 port
> 
> and for the aggregation switch
> 
> MikroTik CRS326-24S-2Q+RM 26 Port
> 
> both cost in the order of 500 US$. We were astonished that they don't lose UDP packets when all inputs send a packet at the 
> same time, and they have to pipe them to the single output one after the other, but apparently the switches have enough buffers 
> (which is usually NOT written in the data sheets). 
> 
> To avoid UDP packet loss for several events, we do traffic shaping by arming the trigger only when the previous event is 
> completely received by the frontend. This eliminates all flow control and other complicated methods. Marco can tell you the 
> details.
> 
> Another interesting aspect: While we get the data into the frontend, we have problems in getting it through midas. Your 
> bm_send_event_sg() is maybe a good approach which we should try. To benchmark the out-of-the-box midas, I run the dummy frontend 
> attached on my MacBook Pro 2.4 GHz, 4 cores, 16 GB RAM, 1 TB SSD disk. I got
> 
> Event size: 7 MB
> 
> No logging: 900 events/s = 6.7 GBytes/s
> 
> Logging with LZ4 compression: 155 events/s = 1.2 GBytes/s
> 
> Logging without compression: 170 events/s = 1.3 GBytes/s
> 
> So with this simple approach I got already more than 1 GByte of "dummy data" through midas, indicating that the buffer 
> management is not so bad. I did use the plain mfe.c frontend framework, no bm_send_event_sg() (but mfe.c uses rpc_send_event() which is an 
> optimized version of bm_send_event()).
> 
> Best,
> Stefan
  2235   25 Jun 2021   Marco Francesconi   Bug Fix   changes in history plots
We are using the new history formula as a quick way to convert signals from sensors to actual physical values (for example Voltage->Temperature, Voltage->relative humidity,
...), so it is great that the value shown is the calculated one.

I would like to add a point to this discussion.
In our collaboration people attach images of history plots to elogs, meeting presentation and/or physical logbooks.
The proposed scaling formula may work fine online using the cursors, but, once an image is created, I do not understand how it is possible to extract the value of a scaled
variable.
Suppose you see a graph in a presentation showing a current increase from some PSU, where the current was scaled to fit in the same plot as the voltage.
Looking at the delta in the image, how can you judge the current increase without any axis/grid to refer to?

So I support Stefan's proposal for a secondary axis, as long as it is clear which value belongs to which axis.
Maybe by marking the channels in the description or using different line styles/thicknesses?

Best,
Marco
  2240   28 Jun 2021   Marco Francesconi   Suggestion   ODB Load in Sequencer
Hi all,
for my experiment we ended up needing to change a lot of parameters (~9000 values) in the ODB at once from the sequencer.
The very first solution was to use a sequencer function with a ton of ODBSET calls; however, a more elegant solution may be to provide an "ODBLoad" command which mimics the "load" command of odbedit.
I already have a working modification to the sequencer for this; if you agree I will commit it to a dedicated branch.
Let me know if you think this is a good approach.

Marco F
  2247   28 Jun 2021   Marco Francesconi   Suggestion   ODB Load in Sequencer
My idea was to collect some feedback instead of blindly submitting code for a pull request.

Currently I'm just calling db_load() with a given file, so it only supports the .odb format.
It is pretty easy to extend to JSON by calling db_load_json() depending on the file extension.
I do not see a similar call for the .xml format; maybe tomorrow I can study how it is implemented in odbedit and port it to the sequencer (a rough sketch of the extension-based dispatch is below).
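For concreteness, a rough sketch of that dispatch (hypothetical helper, not the actual sequencer code; db_load_json() is assumed here to take the same arguments as db_load(), which may not match the real call):

#include "midas.h"
#include <string>

// Load an ODB dump into the tree under hKey, picking the parser from the
// file extension (.json vs. classic .odb).
INT seq_odb_load(HNDLE hDB, HNDLE hKey, const std::string &filename)
{
   bool is_json = filename.size() > 5 &&
                  filename.compare(filename.size() - 5, 5, ".json") == 0;

   if (is_json)
      return db_load_json(hDB, hKey, filename.c_str());   // assumed signature
   return db_load(hDB, hKey, filename.c_str(), FALSE);    // .odb format
}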

I guess ODBPasteJSON could be a solution as well, but I find it a bit too technical.
Anyway it is easy to implement just by calling db_paste_json(); I will keep this in mind.

I'll try to sort this out and make a commit soon.
Best,

Marco



> > ... at MEG, we keep hundreds of XML files for configuration. Mostly historical, but that's how it is.
> 
> same here, lots of historical .odb and .xml files.
> 
> I think the .odb and .xml support is here to stay. Best I remember, latest things I fixed in both
> was support for unlimited string length (and removal of associated buffer overflows). Right now,
> I am not sure if both are UTF-8 clean and if they properly escape all control characters,
> something to fix as we go or as we bump into problems.
> 
> K.O.
  2248   29 Jun 2021   Marco Francesconi   Suggestion   ODB Load in Sequencer
I just submitted a pull request for this feature; I did quite a lot of testing and it looks good to me.
Let me know if something is not clear.

I'll take care of adding the relevant information to the wiki once it is merged.
Best,

Marco


> My idea was to collect some feedback instead of blindly submitting code for a pull request.
> 
> Currently I'm just calling db_load() with a given file, so it is only supporting .odb formatting.
> It is pretty easy to extend to json by calling the db_load_json() depending on the file extension.
> I do not see a similar call for the .xml format, maybe I can study tomorrow how it is implemented in odbedit and port it to the sequencer.
> 
> I guess that the ODBPasteJSON can be a solution as well but I find it a bit too technical.
> Anyway it is easy to implement just by calling db_paste_json(), I will keep this in mind.
> 
> I'll try to sort this out and make a commit soon.
> Best,
> 
> Marco
> 
> 
> 
> > > ... at MEG, we keep hundreds of XML files for configuration. Mostly historical, but that's how it is.
> > 
> > same here, lots of historical .odb and .xml files.
> > 
> > I think the .odb and .xml support is here to stay. Best I remember, latest things I fixed in both
> > was support for unlimited string length (and removal of associated buffer overflows). Right now,
> > I am not sure if both are UTF-8 clean and if they properly escape all control characters,
> > something to fix as we go or as we bump into problems.
> > 
> > K.O.
  2533   13 Jun 2023   Marco Francesconi   Forum   Include subroutine through relative path in sequencer
> > Hi, I would like to restructure our sequencer scripts and the paths. Until now many things are not generic at all. I would like to ask if it is possible to include files through a relative path for example something like
> >
> > INCLUDE ../chip/global_basic_functions
> >
> > Maybe I just did not found how to do it.
>
> It was not there. I implemented it in the last commit.
>
> Stefan

Hi Stefan,

when I did this job for MEG II we decided not to include relative paths and the ".." folder to avoid an exploit called "XML Entity Injection".

In short, the goal is to avoid leaking files outside the sequencer folders, like /etc/passwd or private SSH keys.

I do not remember at the moment why we pushed for absolute paths instead, but let's keep this in mind (a small sketch of the kind of path check involved is below).
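For reference, a minimal sketch (assuming C++17 std::filesystem; not the actual sequencer code) of the check that motivates forbidding ".." in INCLUDE paths:

#include <filesystem>
#include <string>
#include <system_error>

// Return true only if "requested", resolved against the sequencer directory,
// stays inside that directory (i.e. no "../" escape towards /etc/passwd, keys, ...).
bool include_path_is_allowed(const std::string &seq_dir, const std::string &requested)
{
   namespace fs = std::filesystem;
   std::error_code ec;
   fs::path base   = fs::weakly_canonical(fs::path(seq_dir), ec);
   fs::path target = fs::weakly_canonical(base / requested, ec);
   if (ec)
      return false;
   fs::path rel = target.lexically_relative(base);
   return !rel.empty() && rel.string().rfind("..", 0) != 0;
}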



Marco
  2173   26 May 2021   Marco Chiappini   Info   label ordering in history plot
Dear all,
is there any way to order the labels in the history plot legend? In the old 
system there was the “order” column in the config panel, but I cannot find it 
in the new system. Thanks in advance for the support.

Best regards,
Marco Chiappini
  2597   12 Sep 2023   Maia Henriksson-Ward   Suggestion   Syntax highlighting for sequencer scripts
Recently I was trying to read sequencer scripts written by a previous student, and realized it would be easier to 
quickly read/skim sequencer code with some form of syntax highlighting. I've been using Visual Studio Code as my 
editor, so I made myself an extension for VS Code that provides basic syntax highlighting (with help from 
ChatGPT-3.5). It's good enough for my purposes, but is missing some features you'd expect for full language 
support. This got me wondering - does anything like this already exist, perhaps with more complete support?

If it doesn't already exist, and if there is interest, I could publish mine 
to VS Code's "Extension Marketplace" for easy installation (I'd also welcome contributions for 
more features). For now, I've installed it on my computer directly from the .vsix file, which I've put on my own 
github at https://github.com/maia-hw/midas-sequencer-support . There is also a readme with a screenshot showing what scripts 
will look like with the highlighting.
  2718   26 Feb 2024   Maia Henriksson-Ward   Forum   mserver ERR message saying data area 100% full, though it is free
> Hi,
> 
> I have just installed Midas and set-up the ODB for a SuperCDMS test-facility (on
> a SL6.7 machine). All works fine except that I receive the following error message:
> 
> [mserver,ERROR] [odb.c:944:db_validate_db,ERROR] Warning: database data area is
> 100% full
> 
> Which is puzzling for the following reason:
> 
> -> I have created the ODB with: odbedit -s 4194304
> -> Checking the size of the .ODB.SHM it says: 4.2M
> -> When I save the ODB as .xml and check the file's size it says: 1.1M
> -> When I start odbedit and check the memory usage issuing 'mem', it says: 
> ...
> Free Key area: 1982136 bytes out of 2097152 bytes
> ...
> Free Data area: 2020072 bytes out of 2097152 bytes
> Free: 1982136 (94.5%) keylist, 2020072 (96.3%) data
> 
> So it seems like nearly all memory is still free. As a test I created more
> instances of one of our front-ends and checked 'mem' again. As expected the free
> memory was decreasing. I did this ten times in fact, reaching
> 
> ...
> Free Key area: 1440976 bytes out of 2097152 bytes
> ...
> Free Data area: 1861264 bytes out of 2097152 bytes
> Free: 1440976 (68.7%) keylist, 1861264 (88.8%) data
> 
> So I could use another >20% of the database data area, which is according to the
> error message 100% (resp. >95%) full. Am I misunderstanding the error message?
> I'd appreciate any comments or ideas on that subject!
> 
> Thanks, Belina

This is an old post, but I encountered the same error message recently and was looking for a 
solution here. Here's how I solved it, for anyone else who finds this: 
The size of .ODB.SHM was bigger than the maximum ODB size (4.2M > 4194304 in Belina's case). For us, 
the very large odb size was an error, and I suspect it happened because we forgot to shut down midas 
cleanly before shutting the computer down. Using odbedit to load a previously saved copy of the ODB 
did not help me to get .ODB.SHM back to a normal size. Following the instructions on the wiki for 
recovery from a corrupted odb, 
https://daq00.triumf.ca/MidasWiki/index.php/FAQ#How_to_recover_from_a_corrupted_ODB, (odbinit with --cleanup option) should 
work, but didn't for me. Unfortunately I didn't save the output to figure out why. My solution was to manually delete/move/hide 
the .ODB.SHM file, and an equally large file called .ODB.SHM.1701109528, then run odbedit again and reload that same saved copy of my ODB. 
Manually changing files used by mserver is risky - for anyone who has the same problem, I suggest trying odbinit --cleanup -s 
<yoursize> first.
  437   19 Feb 2008   Maggie Lee   Bug Fix   "make install" error on MacOS 10.4.7, svn 3366
> While executing "make install" under MacOS 10.4.7, you may encounter errors about "dio". It is the 
> problem of "Makefile". I did some change to it and attach the diff file here.

Thank you very much for your instructions for installing Midas on MacOSX.
I followed your instructions to change the Makefile but I still get the following error message:

... 
... Installing programs and utilities to /usr/local/bin
... 
install: darwin/bin/lazylogger exists but is not a directory
install: darwin/bin/mchart exists but is not a directory
install: darwin/bin/mcnaf exists but is not a directory
install: darwin/bin/mdump exists but is not a directory
install: darwin/bin/melog exists but is not a directory
install: darwin/bin/mhdump exists but is not a directory
install: darwin/bin/mhist exists but is not a directory
install: darwin/bin/mhttpd exists but is not a directory
install: darwin/bin/mlogger exists but is not a directory
install: darwin/bin/mlxspeaker exists but is not a directory
install: darwin/bin/mserver exists but is not a directory
install: darwin/bin/mstat exists but is not a directory
install: darwin/bin/mtape exists but is not a directory
install: darwin/bin/odbedit exists but is not a directory
install: darwin/bin/odbhist exists but is not a directory
install: darwin/bin/stripchart.tcl exists but is not a directory
install: darwin/bin/webpaw exists but is not a directory
make: *** [install] Error 71

Could you help me solve this problem? Thank you in advance =)
  438   19 Feb 2008   Maggie Lee   Bug Fix   "make install" error on MacOS 10.4.7, svn 3366
I forgot to mention that the following (and similar) lines:
           install -v -D -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
are changed into
           install -v -d -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \

since -D is an illegal option for install on MacOSX. I am not sure whether -D in Linux means the same thing as -d in MacOSX install. 


> > While executing "make install" under MacOS 10.4.7, you may encounter errors about "dio". It is the 
> > problem of "Makefile". I did some change to it and attach the diff file here.
> 
> Thank you very much for your instructions for installing Midas on MacOSX.
> I followed your instructions to change the Makefile but I still get the following error message:
> 
> ... 
> ... Installing programs and utilities to /usr/local/bin
> ... 
> install: darwin/bin/lazylogger exists but is not a directory
> install: darwin/bin/mchart exists but is not a directory
> install: darwin/bin/mcnaf exists but is not a directory
> install: darwin/bin/mdump exists but is not a directory
> install: darwin/bin/melog exists but is not a directory
> install: darwin/bin/mhdump exists but is not a directory
> install: darwin/bin/mhist exists but is not a directory
> install: darwin/bin/mhttpd exists but is not a directory
> install: darwin/bin/mlogger exists but is not a directory
> install: darwin/bin/mlxspeaker exists but is not a directory
> install: darwin/bin/mserver exists but is not a directory
> install: darwin/bin/mstat exists but is not a directory
> install: darwin/bin/mtape exists but is not a directory
> install: darwin/bin/odbedit exists but is not a directory
> install: darwin/bin/odbhist exists but is not a directory
> install: darwin/bin/stripchart.tcl exists but is not a directory
> install: darwin/bin/webpaw exists but is not a directory
> make: *** [install] Error 71
> 
> Could you help me solve this problem? Thank you in advance =)
  441   19 Feb 2008   Maggie Lee   Bug Fix   "make install" error on MacOS 10.4.7, svn 3366
Thank you for your help =)

Since SYSBIN_DIR is defined as /usr/local/bin in the Makefile and it already exists on my computer, I deleted the -D in the Makefile and tried "make install" again, and the
error message becomes:

... 
... Installing programs and utilities to /usr/local/bin
... 
/bin/sh: -c: line 2: syntax error: unexpected end of file
make: *** [install] Error 2

Can anyone help me solve this problem? 


> > I forgot to mention that, the following (and similar) lines:
> >            install -v -D -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
> > are changed into
> >            install -v -d -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
> > 
> > since -D is an illegal option for install. I am not sure whether -D in Linux means the same thing for -d in MacOSX install. 
> 
> -D under linux means:
> 
>        -D     create all leading components of DEST except the last, then
>               copy SOURCE to DEST; useful in the 1st format
> 
> This means if you install the first time, and either SYSBIN_DIR or `basename is not existing, it will be created on-the-fly from
> the install program. If OSX does not support this, you somehow have to create these subdirectories manually.
  1352   12 Mar 2018   Lukas Gerritzen   Forum   EQ_MANUAL_TRIG no button in web interface
Hi,

according to the wiki, setting the equipment flag EQ_MANUAL_TRIG is supposed to
make the mhttpd web interface provide a button for manual triggering. It appears that just setting this flag is not enough, or this feature is broken. The equipment shows up, but there is no button to manually trigger it.

A somewhat related question: Can I log this kind of event while the current run is stopped or is it necessary to start a dedicated run for this?

Cheers
Lukas
  1383   24 Aug 2018   Lukas Gerritzen   Forum   Int64 datatype
I would like to store the address of 1-Wire temperature sensors in a device
driver. However, the supported data types (as defined around
include/midas.h:311) do not provide a type large enough.

Is there a good reason against this?

I know that other experiments use this kind of sensor, how do you store the
addresses? I've noticed that most of the address is just zeroes, but I wouldn't
like to store just half the address, assuming that half the address is always
zeroes.
  1385   28 Aug 2018   Lukas Gerritzen   Bug Report   Deleting Links in ODB via mhttpd
Assume you have a variable foo and a link bar -> foo. When you go to the ODB in
mhttpd, click "Delete" and select bar, it actually deletes foo. bar stays,
stating "<cannot resolve link>". Trying the same in odbedit with rm gives the
expected result (bar is gone, foo is still there).

I'm on the develop branch.
  1386   28 Aug 2018   Lukas Gerritzen   Forum   Problems with virtual history events
Hi,
I am trying to set up virtual history events following
https://midas.triumf.ca/MidasWiki/index.php/History_System#Virtual_History_Event 

Trying it the first way, using the following setup:
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
Links                           DIR
    dirlink -> External/dir     KEY     1     12    >99d 0   RWD  <subdirectory>


Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
External                        DIR
    dir                         DIR
        foo                     FLOAT   1     4     16s  0   RWD  12.5


Then I get the following error message:
==================== History link "dirlink", ID 28150  =======================
[Logger,ERROR] [mlogger.cxx:4942:open_history,ERROR] History event dirlink has
no variables in ODB


Trying the second way, I set up the following:
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
Links                           DIR
    dir                         DIR
        testlink -> External/foo
                                FLOAT   1     4     8m   0   RWD  5.2

Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
External                        DIR
    foo                         FLOAT   1     4     6m   0   RWD  5.2


Starting mlogger in verbose mode yields the following error:
==================== History link "dir", ID 28150  =======================
[Logger,ERROR] [mlogger.cxx:4935:open_history,ERROR] History link
/History/Links/dir/testlink is invalid
Error in history system, aborting startup.

I'm not sure if this is a bug or just a case of PEBCAK.

Finally, to set the update period, do I need entries in /history/links periods
with the tag name? Is there a way to write them to the history file only when
they change? I want to use the virtual history events for measurements I get
from external scripts, some periodic, some manual.

Thanks
  1398   24 Sep 2018   Lukas Gerritzen   Suggestion   Self-resetting alarm class
If you run an external script anyway, you can also call "odbedit -c alarm" to
reset all alarms. Or you could try setting the "Triggered" entry of that
particular alarm to 0 (again, with odbedit); that could also work (a small
sketch doing the same through the C API is below).
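A minimal sketch of that second option done from a standalone program through the MIDAS C API instead of odbedit (assuming the usual "/Alarms/Alarms/<name>/Triggered" layout; untested, for illustration only):

#include "midas.h"
#include <cstdio>

// Reset one alarm by writing 0 into its "Triggered" entry, as described above.
int main(int argc, char *argv[])
{
   if (argc < 2) {
      printf("usage: %s <alarm name>\n", argv[0]);
      return 1;
   }

   HNDLE hDB;
   char  path[256];
   INT   zero = 0;

   if (cm_connect_experiment("", "", "reset_alarm", NULL) != CM_SUCCESS)
      return 1;
   cm_get_experiment_database(&hDB, NULL);

   sprintf(path, "/Alarms/Alarms/%s/Triggered", argv[1]);
   db_set_value(hDB, 0, path, &zero, sizeof(zero), 1, TID_INT);

   cm_disconnect_experiment();
   return 0;
}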