ID | Date | Author | Topic | Subject
Draft | 24 Aug 2020 | Konstantin Olchanski | Release | midas-2020-12
midas-2020-12-a is here.

new features and notable updates since midas-2020-03:

- new C++ ODB interface odbxx.h
- image history
- much improved history plots
- new sequencer pages
- UTF-8 clean ODB (complains if any TID_STRING is invalid UTF-8)
- mhttpd update to mongoose 6.16 with much improved multithreading
- mhttpd update to use MBEDTLS in preference to problematic OpenSSL
- MidasConfig.cmake contributed by Mathieu Guigue

plans for next development: major update of mlogger to simplify channel 
configuration in odb, improvements to mhttpd multithreading, new history plot 
configuration page, more c++ification.

To obtain this release, either checkout the top of branch release/midas-2020-08 
(recommended) or checkout the tag midas-2020-08-a.

K.O.
1994 | 08 Sep 2020 | Konstantin Olchanski | Forum | json parser error
> I am getting the following error alert in a custom page whenever a run starts
> json parser exception: SyntaxError: Unexpected token < in JSON at position 985, batch request: method: "db_get_values", params: [object Object], id: 1598691925697 method: "get_alarms", params: null, id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697
> Does anyone know why and what causes this? This does not affect anything and things seem to continue running fine.

this is bug #242, https://bitbucket.org/tmidas/midas/issues/242/mjsonrpc-calls-should-return-valid-utf8

we read stuff from midas.log and push it to the web browser. we have seen this stuff
contain arbitrary binary data (both intentionally written into midas.log by cm_msg() and
file content corruption/truncation from computer crashes), the json decoder in the web browser
does not like that stuff - it is invalid utf-8 unicode - and throws an exception.

since we cannot ensure content of midas.log (and other files on disk) are always valid utf-8,
we have to sanitize it before sending it to the browser.

right now I am not sure of the best way to do this sanitizing. we do have a function to check
for valid utf-8 unicode, perhaps it should be extended to replace invalid unicode with spaces
or Xes or "?" or whatever, I am open to suggestions and ideas.
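For example, a sanitizing function could look something like this (an illustrative
sketch only, not the actual midas code; overlong encodings and other unicode corner
cases are ignored for brevity):

#include <string>

// replace every byte that is not part of a valid utf-8 sequence with '?'
std::string SanitizeUtf8(const std::string& s)
{
   std::string out;
   size_t i = 0;
   while (i < s.size()) {
      unsigned char c = s[i];
      int len = 0;
      if (c < 0x80)                len = 1; // plain ASCII
      else if ((c & 0xE0) == 0xC0) len = 2; // 2-byte sequence
      else if ((c & 0xF0) == 0xE0) len = 3; // 3-byte sequence
      else if ((c & 0xF8) == 0xF0) len = 4; // 4-byte sequence

      bool ok = (len > 0) && (i + len <= s.size());
      for (int k = 1; ok && k < len; k++) // check continuation bytes
         ok = ((unsigned char)s[i+k] & 0xC0) == 0x80;

      if (ok) {
         out.append(s, i, len); // valid sequence, copy it
         i += len;
      } else {
         out += '?'; // invalid byte, replace and resynchronize
         i += 1;
      }
   }
   return out;
}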

BTW, this is a recent change to how strings generally work. C NUL-terminated strings are
permitted to contain arbitrary binary data (except for NUL char, of course). C++ std::string
are permitted to contain arbitrary binary data. but javascript strings are only permitted
to contain valid unicode, and the json standard was recently amended to require that json
strings are valid utf-8 unicode. So there is a disconnect between C/C++ code written in the
last 50 years where strings can contain binary data and the javascript world requiring
valid utf-8 unicode pretty much everywhere.

K.O.
1995 | 08 Sep 2020 | Konstantin Olchanski | Forum | Transition status message
> > The information you want is in the ODB:
> > * "/System/Transition/status" is the overall integer status code.
> > * "/System/Transition/error" is the overall error message string.
> > 
> > There is also per-client status information in the ODB:
> > * "/System/Transition/Clients/<client_name>/status"
> > * "/System/Transition/Clients/<client_name>/error"

You can also use web page .../resources/transition.html as an example of how
to read transition (and other) data from ODB into your own web page. example.html
may also be helpful.
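
From a plain C++ midas client (as opposed to a web page), the same ODB keys can be
read with db_get_value(). A minimal sketch, assuming the client is already connected
to the experiment:

#include <cstdio>
#include "midas.h"

// sketch: print the overall transition status and error message
void print_transition_status()
{
   HNDLE hDB;
   cm_get_experiment_database(&hDB, NULL);

   INT code = 0;
   INT size = sizeof(code);
   db_get_value(hDB, 0, "/System/Transition/status", &code, &size, TID_INT, FALSE);

   char error[256];
   size = sizeof(error);
   db_get_value(hDB, 0, "/System/Transition/error", error, &size, TID_STRING, FALSE);

   printf("transition status %d, error: \"%s\"\n", code, error);
}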

K.O.
1998 | 22 Sep 2020 | Konstantin Olchanski | Forum | INT INT32 in experim.h
> For my analyzer I generate the experim.h file from the odb.
> 
> Before midas commit 13c3b2b this generates structs with INT data types. compiles fine with my analysis code (using the old mana.cpp)
> 
> newer midas versions generate INT32, ... types. I get a 
> 
> ‘INT32’ does not name a type   
> 
> although I include midas.h 
> 
> how to fix this?

You could run experim.h through "sed" to replace the "wrong" data types with the correct data types.

You can also #define the "wrong" data types before doing #include experim.h.
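
For example, something like this at the top of the analyzer source might do (an
untested sketch; adjust the list of typedefs to whatever your experim.h actually
uses):

#include <cstdint>
#include "midas.h"

// untested sketch: provide the new experim.h type names when building
// against an older midas.h that does not define them
typedef int32_t  INT32;
typedef uint32_t UINT32;
typedef int16_t  INT16;
typedef uint16_t UINT16;

#include "experim.h"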

I put your bug report into our bug tracker, but for myself I am very busy
with the alpha-g experiment and cannot promise to fix this quickly.

https://bitbucket.org/tmidas/midas/issues/289/int32-types-in-experimh

Here is an example of substituting things using "sed" (it can also do "in-place" 
editing; see "man sed" and google for sed examples):
sed "sZshm_unlink(.*)Zshm_unlink(SHM)Zg"

K.O.
2002 | 06 Oct 2020 | Konstantin Olchanski | Forum | using python client to start and stop run
> The ODB variable "/Runinfo/State" is a symptom of starting/stopping a run, rather than the cause.
> 
> In C++, one uses `cm_transition()` to start/stop runs.
> 
> In python code you can use the `start_run()` and `stop_run()` functions from `midas.client`: https://bitbucket.org/tmidas/midas/src/00ff089a836100186e9b26b9ca92623e672f0030/python/midas/client.py#lines-793:808

one can also run an external command: "mtransition START" and "mtransition STOP"
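
For reference, the cm_transition() route mentioned above looks roughly like this
(a sketch with error handling abbreviated; the client name and the run number 123
are arbitrary examples):

#include <cstdio>
#include "midas.h"

int main()
{
   cm_connect_experiment("", "", "run_control_example", NULL);

   char errstr[256];
   int status = cm_transition(TR_START, 123, errstr, sizeof(errstr), TR_SYNC, FALSE);
   if (status != CM_SUCCESS)
      printf("cannot start run: %s\n", errstr);

   // ... wait, take data, etc ...

   cm_transition(TR_STOP, 0, errstr, sizeof(errstr), TR_SYNC, FALSE);
   cm_disconnect_experiment();
   return 0;
}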

K.O.
2005 | 13 Oct 2020 | Konstantin Olchanski | Info | About remote control of front end part of MIDAS on chip
> My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University. 
> I'm a T2K collaborator and working for Super FGD which is new detector in ND280.

Hi! I did much of the DAQ software for the original FGD. I hope I can help.

> For the DAQ of Super FGD, we will remotely run the front end part of MIDAS on ZYNQ, 
> which is a system on chip.

This would be the same as the existing FGD. Inside the FGD DCC is a Virtex4 FPGA
with a 300MHz PPC CPU running Linux from a CompactFlash card (Kentaro-san did this 
part). On this linux system runs the FGD DCC midas frontend. It connects
to the FGD midas instance using the mserver. This frontend executable is
copied to the DCC using "scp", there is no common nfs mounted home directory.

> For this remote control of the front end part with mserver, we have to mount the home 
> directory of the DAQ PC (CentOS 8) on that of Linux on ZYNQ.
> So I wonder if we should use NFS (Network File System) + NIS (Network Information 
> Service) + autofs for the mounting. Is that correct?

Since you have a bigger SOC and you can run pretty much a complete linux,
I do recommend that you go this route. During development it is very convenient
to have common home directories on the main machine and on the frontend fpga
machines.

But this is not necessary. The midas mserver connection does not require a
common (nfs-mounted) home directory: you can copy the files to the frontend
fpga using scp and rsync, and you can use the gdb "remote debugger" function.

I can also suggest that on your frontend SOC/FPGA machine, you boot linux
using the "nfs-root" method. This way, the local flash memory only
contains a boot loader (and maybe the linux kernel image, depending on
bootloader limitations). The rest of the linux rootfs can be on your
central development machine. This way, the management of flash cards, the
confusion about different contents of local flash, and the need to make backups
of frontend machines are all much reduced.

If you use a fast SSD and ZFS with deduplication, you will also see a good
performance gain (NFS over a 1gige network to a server with a fast SSD works
much better than the very slow SD/MMC/NAND flash).

I can point you to some of my documentation how we do this.

>
> If you have any information or any suggestion for the remote control on chip, 
> please let me know.
> 

I would say you are on a good track. For early development on just one board,
pretty much any way you do it will work, but once you start scaling up
beyond 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
home directories, NFS-root booted linux, etc.

And of course you may want to study the existing ND280/FGD DAQ. I hope you
have access to the running system at J-PARC. If not, I have a copy of
pretty much everything (except for running hardware, it is stored in the basement, 
dead) and I can give you access.

P.S. This reminds me that the cascade software from ND280 (the key part
for connecting the FGD, the TPC, the slow controls & etc into one experiment)
was never merged into the midas repository. I opened a ticket for this,
now we will not forget again:

https://bitbucket.org/tmidas/midas/issues/291/import-cascase-frontend-from-t2k-nd280-fgd

K.O.
2026 | 27 Nov 2020 | Konstantin Olchanski | Suggestion | ODBSET wildcards with array keys in Sequencer files
> The following all fail with "Cannot find ODB key "<key>""
> 
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> 

It would be cool if ODB pattern matching in the sequencer
were consistent with the ODB pattern matching in the json-rpc
interface for web pages:

https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax

K.O.
2027 | 27 Nov 2020 | Konstantin Olchanski | Forum | Invalid name "Analyzer/Tests"
> I've recently took the analyzer template from $MIDASSYS/examples/experiment and 
> modified it to be able to use Roody on a very simple frontend setup.

Hmm... the midas analyzer framework is very old and I do not recommend
using it for new experiments.

A newer analyzer system is ROOTANA and an even newer is the "m" analyzer (manalyzer). These
analyzers progressively introduce improved c++-style programming environments amongst other
improvements. If starting from scratch, I recommend that you use manalyzer (currently from the rootana
git repository).

> The analyzer works fine and I am able to view the online histograms but my console 
> prints out this error:
> 
> [Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name 
> "/Analyzer/Tests/Always true/Rate [Hz]" passed to db_create_key_wlocked: should 
> not contain "["

The error says what it means. "[" is not a permitted character in odb names. It is used
by many odb functions to access array elements.

The midas analyzer example should be updated to change "[Hz]" to "(Hz)" or something similar.

K.O.
2028 | 27 Nov 2020 | Konstantin Olchanski | Forum | History plot consuming too much memory
> 
> Tested with midas-2020-08-a up until the HEAD of develop
> 

Just so you know, it took myself and Stefan quite a bit of effort
to improve memory and data handling in the new history plots
to be able to plot 1 year of data without bogging down too much. I got
to learn the google-chrome javascript cpu profiler, memory profiler
and the intricacies of javascript shift() and unshift() operators.

Before midas-2020-08-a, pressing the zoom-out button you would never
reach the javascript memory limit, the code would go into "100% cpu use"
and the browser tab would become progressively unresponsive well before
running out of memory. With the original code, our alpha-g history plots
could go back a few weeks at most, with the current code, we can go back
about 11 months. Compared to the old "C" history plots that can
do "last 10 years", no problem.

Loading all the history data into the browser is a design choice.

It has benefits and downsides.

The main benefit is that looking at immediate live data is much easier.

The main downside is that "plot last 10 years" becomes impossible.

As they say "appetite comes during eating", we have learned about these
downsides as we developed the new system. When we started, we did not
know much about javascript memory limits, cpu limits, etc. We did learn
a lot, though.

With the current code, we are limited to loading history data up to 50% of
the javascript memory limit. I know how to change the code to get up to 100%,
but I think it is not worth it: it still does not let us plot "last 10 years".

We think the solution to recovering "last 10 years" capability is to use
binned data (which the history system can already deliver to javascript).
With binned data, the data volume in Mbytes remains constant, javascript
memory use has an upper-bound (we never use more memory than X Mbytes)
and data movement over the network is reduced.

Another way to look at this - a typical display has only 1000-4000 horizontal pixels,
so it cannot physically display a bigger number of data points (no more
than 1 data point per pixel). So why load 1000000 data points when we can
only plot 1000-4000 of them?

So all the infrastructure for plotting binned data is already there,
but the javascript code still needs to be written. I think the biggest
challenge will be in blending or combining binned and unbinned data
on the same plot or in seamlessly switching the plot between binned and
unbinned data.

K.O.
2029 | 27 Nov 2020 | Konstantin Olchanski | Forum | History plot consuming too much memory
>
> With the current code, we are limited to loading history data up to 50% of
> the javascript memory limit.
>

The javascript memory limit itself seems to be a moving target. (google javascript 
memory limit, and good luck!).

Historically, javascript did not have any memory or cpu use limits, but with
the rise of abusive web sites, bitcoin miners, etc, I see browsers clamp down
on allowed/allocated CPU use (inactive tabs are throttled down). memory use
is already clamped down severely, on a 64 GB computer, a browser tab
can only allocate a handful of GBs.

This throttling of browser tabs is already intrusive enough that we need
to be careful in programming midas web pages. For example, throttled events
do not fire at the same rate or in the same order as in active tabs.

One logical conclusion of these restrictions could be that, eventually,
google-chrome permits only just enough cpu and memory to run gmail.

K.O.
2030 | 27 Nov 2020 | Konstantin Olchanski | Forum | History plot consuming too much memory
> 
> Are you sure that the delay comes from the browser or actually from mhttpd
> digging through GBytes of history data?
>

I think we will need to address this question "head-on". The history plot
will need to display the following information:

"time to load data from disk: N seconds, time to transfer data to javascript: M 
seconds, time to make the plot: Q seconds".

The second and third items are already available, the first one will need
to be computed in mhttpd and passed to javascript.

K.O.
2031 | 27 Nov 2020 | Konstantin Olchanski | Forum | History plot consuming too much memory
>
> Taking this down a tangent, I have a mild concern that a user could temporarily 
> flood our gigabit network if we do have faster disks to read the history data.
>

By my measurements, right now our javascript code can reach 30-50-70% of Gige ethernet
bandwidth, so, no, we cannot flood the network just by making history plots.

(we cannot reach 100% because javascript code is not multithreaded,
it cycles through "request new data" and "decode json, make plot" states,
and the network is idle in this second state).

>
> Have there been any plans or thoughts on limiting the bandwidth users can pull from 
> mhttpd?
>

10gige networking is here (and 5 and 2.5 Gige, too). I would not worry too much
about saturating 1gige network interfaces.

>
> I do not see this as a critical item as I can plan the future network 
> infrastructure at the same time as the next system upgrade (putting critical data 
> taking traffic on a separate physical network).
>

10gige network between all computers, everything on SSD ZFS arrays, except
bulk data on ZFS HDD arrays (only for cost reasons $$$/TB).

K.O.
2032 | 27 Nov 2020 | Konstantin Olchanski | Info | Equipment "common" settings in ODB
> Today I addressed a topic which bugged me since long time.

Right. No easy subject. For me, too, this has been a problem in MIDAS for a long time.

> Now, on each start of the frontend program, the values from equipment[] are written to 
> the ODB. They are still "live". If one changes them when the frontend is 
> running, that change takes effect immediately. But on the next restart of the 
> frontend, the old values from equipment[] are put back there.

There is a downside to this behaviour.

If some values in equipment/common are "live" and the user is expected to change them,
the user will be unpleasantly surprised when their changes magically disappear (after reboot,
after frontend crash, after run restart if experiment requires restarting some frontends
before starting a new run).

This change will also break some experiments that rely on things like specifying
event buffer names through ODB. But experiments can adapt and specify buffer names
through command line switch instead of ODB.

This new way also makes the "live" Common/Period unusable. Sure, I can speed up or slow
down a frontend even during the run, but if my change does not "stick", what good is it?

Personally, I think there is no easy solution for all these troubles.

I would advocate the following approach:

- think of MIDAS as a "mature" system,
- treasure backward compatibility
- (if we must break backward compatibility to introduce a new "must have" improvement, so be it)
- document how things work. if it is clearly written down what different fields in "common" do, fewer people 
"get burned" by unexpected or illogical things. (and any non-trivial system has plenty of those).

Going back to ODB equipment/common, my experience with midas and odb tells me
that one should avoid mixing together ODB entries set by user and ODB entries set by code.

For example, separating them as equipment/settings and equipment/variables works well. Mixing
them as in equipment/common and sequencer/state causes trouble.

So perhaps we should split Equipment/common into two pieces, user settable fields like
"Period" and "event buffer name" would move to equipment/settings or whatever.

This will open the discussion of which items in equipment/common should be user settable,
and some people would want event buffer specified in the code to prevail, while other
people would want the name from odb to prevail, and both are valid but conflicting preferences.

Or we could bite the bullet and say: equipment/common is controlled by the frontend code,
the user should not change it (and mark it read-only in ODB).

For all the pain this may cause, at least this will make it self-consistent.

Per this proposal, in addition to Stefan's change, the hotlink on equipment/common goes away,
"period" is no longer "live" and the whole subdirectory is made "read-only".

K.O.
2033 | 27 Nov 2020 | Konstantin Olchanski | Forum | Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL'
> 
> The header file used to define the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
> 
> 
> #ifdef LINUX
> #define CAEN_BYTE   	unsigned char
> #define CAEN_BOOL   	int
> #else
> #define CAEN_BYTE   	byte
> #define CAEN_BOOL   	VARIANT_BOOL
> #endif
> 

Complain to CAEN.

The year is 2020 and they should use standard C/C++ data types from stdint.h (uint32_t, etc).
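
Until they do, one possible local workaround (an untested sketch) is to force the
LINUX branch of their header before including it:

// untested sketch: select the LINUX branch of CAENVMEtypes.h so that
// CAEN_BOOL becomes int instead of the Windows-only VARIANT_BOOL
#ifndef LINUX
#define LINUX 1
#endif
#include "CAENVMEtypes.h"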

K.O.
2034 | 27 Nov 2020 | Konstantin Olchanski | Suggestion | cmake build fixes
Hi, Alexandr, thank you for making improvements to MIDAS. I have some questions
about your suggestions:

> there are several problems with current cmake build files in midas:
> - not all systems have cuda libs in /usr/local/cuda
> - not all cmake version like when redefining vars

we do not see these problems with the normal cmake on our current linux systems,
centos-7 and -8, Ubuntu LTS 18.04, 20.04.

so you have something different? can you be a bit more specific:
which version of cmake and which OS do you see these troubles with?

> - c++ standard not matching the one used to build ROOT
> - ROOTSYS is not needed to find ROOT (it is enough to have root in PATH)

Again, ROOT tangles with the build of MIDAS.

MIDAS does not use ROOT. As a convenience to the users, we have a "ROOT output" driver
in mlogger and we build a special executable rmlogger with ROOT. Only this special
executable should be linked with ROOT and compiled with ROOT-specific flags.

The rest of the MIDAS build should not be affected by presence or absence of ROOT.

One would have to read old messages on this forum to understand this situation.

> 
> I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
> which tries to fix some of the problems.
> Tests and comments are welcome.
>

I look at the diffs:

- CUDA detection is changed to "find_package(CUDA)". This code was added by Joseph and Ben, and there 
must be a reason why they did not use find_package(CUDA). They will have to sign off on this change.

- ROOT-related logic assumes that all of MIDAS will be built "the ROOT way": CFLAGS are changed, the C++ 
standard is changed, etc. This assumption is wrong: only rmlogger and rmana should be built "with ROOT".

If you want to follow through on this, I suggest that you split the pull request into two,
one pull request for the CUDA changes and one pull request for the ROOT changes. Also rework
your ROOT changes as I explained above (but also read all ROOT-related messages on this forum).

K.O.
2035 | 27 Nov 2020 | Konstantin Olchanski | Forum | Invalid name "Analyzer/Tests"
https://bitbucket.org/tmidas/midas/issues/298/invalid-odb-names-in-example-midas
K.O.
2037 | 27 Nov 2020 | Konstantin Olchanski | Info | Equipment "common" settings in ODB
Yes, I think this will work.

For old mfe.c frontends, a global variable set to "do it the new way" should be okay;
new experiments will have it the new way. Old experiments will be forced to add a one-line definition
of this global variable (otherwise mfe.o will not link), and at that time they get to choose "new way" or "old way".

For the new TMFE c++ frontend, this will work naturally when they create the Equipment Common object;
in the object constructor, you can see how it explicitly honors or overwrites the ODB common entries.

The TMFE frontend does not do a live "period", so there should be no issue with that.

Should I open a bitbucket issue "update TMFE frontend to new Equipment/Common scheme", to make sure
I do not forget about it?

K.O.
2042 | 30 Nov 2020 | Konstantin Olchanski | Info | Equipment "common" settings in ODB
> One more change: 
> 
> After using the new code for some hours, we realized that the "enabled" flag should not come from the frontend code, 
> but always be defined by the ODB. So if you quickly have to disable some equipment because the associated hardware is 
> off, you want to change this flag only in the ODB and not have to recompile the frontend. So we exclude that flag from 
> being set by the frontend. It is anyhow special, because one sees all disabled equipment in the main midas status page, 
> so one knows what's on and what's off.
> 
> Please comment here if you think that change causes problem. Anyhow it's working now for the enabled flag as before 
> all these changes.
> 

Good catch. I still think this is fundamentally impossible to "get right". But good, you
are now in the same boat with me. The documentation will read: "if flag is TRUE, these data fields
are read from ODB, if flag is FALSE, those other fields are read from ODB". I will have to check
how this will work out for the TMFE C++ frontend (I think both mfe.c and TMFE frontends should
work "the same").

I think we have at least one month to play with this, I do not think we can do the next release
of midas until January.

K.O.
2043 | 30 Nov 2020 | Konstantin Olchanski | Suggestion | ODBSET wildcards with array keys in Sequencer files
> I totally agree that we should have a consistent formatting for array index expansion.
> I had a look to the mjsonrpc code and I found the function parse_array_index_list(...) which does this job.

Yes, it is good to review this stuff. I think the json-rpc call should accept more array index patterns:

a[*] - whole array (even though this is unnatural in javascript: we do not say "let a[*] = b[*]", we say "let a = b")
a[5-] - from the 5th element to the end (for the case where we do not know the length)
a[-10] - from the first element to element 10 (this is the same as a[0-10], but needed for consistency with the previous case).

K.O.

> I have a similar function (adapted from previous code) in odb.cxx called strarrayindex(...) that is designed for the same "consistency" purposes between odbedit and sequencer.
> 
> Let me put few points that I noticed:
> - mjsonrpc has a very different way to write the full array (no indexes given) while currently sequencer requires "[*]" to do the same (otherwise it only changes the first value of the array)
> - currently the sequencer and the underlying ODB calls use two indexes (that are the same if you want to write only one key) so we will need a serious rewriting to allow something like "ODBSET array[1,3,5]"
> - if I correctly understood the code, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing, for example if you are writing an array of n parameters on a DAQ 
> board you will call the hotlink on that key n times
> - in addition to that the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how it should parse something like "[$a,1]" or "[$a*]"?
> 
> For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencers or having a difference between the two implementations.
> For the others I guess we can find a way out, however that's a major modification so I will put it on my todo list when I can find some free time.
> In any case I would propose to merge the two functions, so we have only to maintain a single implementation of the parsing.
> 
> I guess it's a good moment to brainstorm about that, let me know what you think
> 
> Marco
> 
> 
> > > The following all fail with "Cannot find ODB key "<key>""
> > > 
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> > > 
> > 
> > It would be cool if ODB pattern matching in the sequencer
> > were consistent with the ODB pattern matching in the json-rpc
> > interface for web pages:
> > 
> > https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
> > 
> > K.O.
2044 | 30 Nov 2020 | Konstantin Olchanski | Suggestion | ODBSET wildcards with array keys in Sequencer files
> 
> we are considering to make the range selection uniform among json, sequencer and 
> odbedit "set" command. Having multiple ranges like [1,4-5] will be quite some work, so 
> my question is did you just implement it on the json side because it was easy, or are 
> there experiments who really need it? Wouldn't it be enough to have
> 
> [*]
> [n]
> [n-m]
> 

It has been a long time, but most likely I designed the interface to work this
way to permit maximum flexibility for writing into an array using just one
rpc operation.

The generalized form is:
[range,range,range...]

where range is:
array index or
index1-index2 or
index2-index1 (write in reverse order)

This is all documented here:
https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax

I think it is too late to change it.

>
> This way we always have only one db_set_data() value behind that. Any set of indices 
> we have to split into several db_set_data()
> 

Sounds good. I think it is easy to have a common implementation:

One would need the following functions:
parse range selection from string into std::vector<int> of array indices (we already have it)
call db_set_data_range() (this is easy to add).
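
To make this concrete, here is a sketch of such a range parser (illustrative only;
the real parse_array_index_list() in mjsonrpc differs). It turns "1,4-7,10-8" into
{1,4,5,6,7,10,9,8}; error handling is omitted:

#include <sstream>
#include <string>
#include <vector>

std::vector<int> ParseArrayIndexList(const std::string& s)
{
   std::vector<int> result;
   std::stringstream ss(s);
   std::string range;
   while (std::getline(ss, range, ',')) {
      size_t dash = range.find('-');
      if (dash == std::string::npos) {
         result.push_back(std::stoi(range)); // single index
      } else {
         int first = std::stoi(range.substr(0, dash));
         int last  = std::stoi(range.substr(dash + 1));
         if (first <= last)
            for (int i = first; i <= last; i++) // forward range
               result.push_back(i);
         else
            for (int i = first; i >= last; i--) // reverse range
               result.push_back(i);
      }
   }
   return result;
}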

>
> trouble by triggering a hot link on each access.
>

There is no escaping this trouble. Half the experiments want notification
"per array", the other half, "per array element". We cannot choose one or the other for them,
we have to provide a way for the user to say how they want it.

P.S. With the existing ODB calls, you cannot do [n-m] ranges. You can do the whole
array or one element at a time, as sketched below.
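
For illustration, "one element at a time" means one db_set_data_index() call per
element, each one potentially firing a hot link (a sketch, assuming hKey is the
handle of an existing float array):

#include "midas.h"

// sketch: write elements 2..4 of a float array, one ODB call
// (and potentially one hot link notification) per element
void set_range(HNDLE hDB, HNDLE hKey)
{
   for (int i = 2; i <= 4; i++) {
      float v = 0;
      db_set_data_index(hDB, hKey, &v, sizeof(v), i, TID_FLOAT);
   }
}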

K.O.