Midas DAQ System
ID     Date          Author                 Topic        Subject
  2991   21 Mar 2025   Konstantin Olchanski   Forum   LabView-Midas interface
> > Hello,
> > 
> > Does anyone have experience with writing a MIDAS frontends to communicate with a device that operates using LabView (e.g. superconducting magnets, cryostats etc.). Any information or experience regarding this would be highly appreciated.
> > 
> > thanks,
> > Zaher
> 
> We do have a superconducting magnet from Cryogenic, UK, which comes with a LabView control program on a Windows PC. I did the only reasonable thing with this: trash it in the waste basket. Do NOT use LabView for anything which should run more than 24h in a row. Too many bad experiences with LabView control programs
> for separators at PSI and other devices. Instead of the Windows PC, we use MSCB devices and Raspberry Pis to communicate with the power supply directly, which has been proven to be much more stable (running for years without crashes). I'm happy to share our code with you.
>

Our parallel experience with the CERN ALPHA anti-hydrogen experiment: they have developed a whole LabView empire
to control the cryogenics, the magnets, the positron source, the anti-proton trap, the anti-hydrogen trap, etc.

At some point there was a wall of monitors in the counting room - each LabView computer controlled one or two things -
so there were very many computers, and each had to have a monitor (and mouse and keyboard).

All the data from this LabView empire is logged to the MIDAS history via felabview and feGEM, and they use
the MIDAS history to look at and monitor almost everything. Control is done via LabView and LabView-based
FPGA sequencers (National Instruments PXI hardware, $$$$$).

This has worked well enough to publish several papers in Nature.

But not 100%:

1) difficulties with LabView source control (cannot be trivially managed by git, I guess)
2) unending fight against Microsoft and CERN IT trying to reboot the computers at the wrong time
3) more recently, forced Microsoft updates require trashing perfectly good machines and buying new ones

At TRIUMF there is very little LabView. All experiments use MIDAS and EPICS for most things.

Based on this experience, I agree with Stefan: today's sweet spot is Raspberry Pi machines with USB-attached
gizmos to control stuff. On the software side, drive the mess with MIDAS and custom web pages.

K.O.
  2992   21 Mar 2025   Konstantin Olchanski   Bug Fix   bitbucket builds fixed
> > bitbucket automatic builds
> 
> Unfortunately we will break the automatic build each time a program outputs one different character, which might even happen if we add a line of code and
> a cm_msg() gets produced with a different line number. Is there a standard way to update testexpt.example (like "make testexpt" or so)? Should we trigger
> the update of testexpt.example before each commit via a hook?
> 

Actually line numbers are not logged by messages printed from "make test", so moving code around does not break the test.

Changing what programs output does break the test and this is intentional - somebody must look and confirm
that the program output was changed on purpose or because a bug was introduced (or fixed).

Most "make test" things work this way - run programs, compare output to what is expected. Discrepancies are flagged for human examination.

K.O.
  2993   21 Mar 2025   Konstantin Olchanski   Suggestion   improved find_package behaviour for Midas
> > > currently to link Midas to project one has to do several steps ...
> > this information is incorrect. please read https://daq00.triumf.ca/elog-midas/Midas/2258
> 
> I admit that I did not see your post about targets import
> via `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
> before implementing changes to cmake scripts.
>
> But in this respect the way you propose to do it via `include` should still work.
>

I proposed nothing, you did the proposing. I spent many hours trying to understand cmake (mission 
impossible!) and many more hours to implement the previously existing package scheme based
on the cmake "EXPORT" function.

> Note however that the `include(...)` way is very unusual as one has to know exactly
> where `...-targets.cmake` is located, and the standard way in cmake is via `find_package`
> (similar to how e.g. ROOT, Geant4, etc. are found and linked).

Very difficult to cut-and-paste "include($ENV{MIDASSYS}/lib/midas-targets.cmake)".

You cannot simplify out $ENV{MIDASSYS} because the computer cannot read your mind about which of the 10 copies
of midas you want to use, from which user account, on which day.

The argument about "very unusual" I would buy; I am not a cmake expert and I do not know which package-finding
method is in favour today.

>
> Though I again admit that maybe the namespace change was a bit too much as it may
> have broken previous users of `include($ENV{MIDASSYS}/lib/midas-targets.cmake)`
> 

I believe it did break at least one experiment: after updating MIDAS to the latest version,
the analyzer would not build.

Speaking of which, did you implement your new scheme for the manalyzer so that it works
in standalone mode (without MIDAS)?

If you did not, we now have two schemes: your new scheme just for MIDAS and my old scheme
for manalyzer *and* MIDAS. xkcd 927.

> 
> - the shortcoming of what was before is the usage of the non-standard `include(...)`
>

You should have started by posting a message spelling it out: Konstantin implemented
a scheme that uses the cmake "export" function to find midas, mfe and manalyzer,
it is very nice and works ok, but it is non-standard/obsoleted/obscure/frowned-upon/
unpopular/I-do-not-like-it/I-did-not-invent-it, and I propose implementing a new scheme
based on find_package().

>
> - one shortcoming I see in the new implementation is the usage of the `midas::` namespace
>   (mentioned above) that may have broken some setups
> 

If you think that your changes will break other people's code, you should explicitly
say so in a message to this forum and ideally provide instructions on fixing it,
i.e. "in your makefile, please replace "midas" with "midas::midas"".

> 
> - `find_package` is the standard and recommended way of finding packages
>

Do you have a reference for this? When I look at cmake documentation, I do not see
any specific recommendation on creating packages and finding them. I do see
other people's code for finding packages and often spend hours fighting
them because said methods are designed to work only on the developer's laptop.

P.S. Did anybody ask Ben to update the MidasWiki documentation with the new find_package() information?

K.O.
  2994   21 Mar 2025   Konstantin Olchanski   Suggestion   BINARY INCOMPATIBLE CHANGE: New alarm count added
> > > ALL MIDAS CLIENTS GET RE-COMPILED after the new code is applied.
>
> - I did clearly announce this change in the forum.
>

I missed the announcement and I was surprised to discover this change.

This is why I changed the title from "useful improvement" to "ACHTUNG!!!"

> - The extension is not there "just for fun" but needed by several experiments which experience spurious alarms 
> triggered by some noisy signals.

Agreed, a useful improvement.

To prevent this kind of trouble in the future, I think I should finish my crusade against db_get_record().

> - If there are frontend programs which have not been re-compiled for ten years, I think it is very unlikely that these
> experiments need the latest cutting edge version of midas. They can safely stick to your tag midas-2024-12-a and not
> experience the issue of different /Alarm trees.

Yes. Now that I have added the tag and posted a more visible notice, people who have to run frontends built against older midas
(because reasons) can breathe out.

Also please note that "Konstantin != all experiments at TRIUMF".

K.O.
  2998   25 Mar 2025   Konstantin Olchanski   Bug Report   midas equipment "format"
> In the PSI muSR laboratory, we are running about 140 slow control devices across six instruments using Format FIXED.
> Could you please wait a bit with removing support for it so that we can assess if/how this will affect us?

Roger. I believe that as long as these data do not go into the MIDAS event data stream, you should not see any difference from
changing "FIXED" to "MIDAS". I hope you can test it and report how it works for you. If you see that things change in the ODB
or in the history, perhaps we can implement some kind of workaround to make the transition transparent.

K.O.
  2999   25 Mar 2025   Konstantin Olchanski   Bug Report   manalyzer module init order problem
> Andrea Capra reported a problem with manalyzer module initialization order.

Permanent solution is now implemented.

In the analyzer module constructor (TARunObject), please set the value of fModuleOrder (default value is 0).

Modules with smaller fModuleOrder will run first, modules with bigger fModuleOrder will run last.

Please use the value range between -1 (built-in EventDump module) and 9999 (built-in InteractiveModule).

Modules with equal fModuleOrder will run in the same order as they were registered (std::stable_sort).

Example manalyzer modules and documentation README.md have been updated.
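
A minimal sketch of a module constructor using this (the class name is made up for illustration, and the module factory registration that a real module also needs is omitted):

#include "manalyzer.h"

class ExampleModule: public TARunObject
{
public:
   ExampleModule(TARunInfo* runinfo) : TARunObject(runinfo)
   {
      // run after the default-order (0) modules, but before the built-in InteractiveModule (9999)
      fModuleOrder = 100;
   }
};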

Linker command line and GCC attribute methods should also work, but I thought it better to provide explicit 
programmatic control of module ordering.

P.S. The C++ tmfe frontend module design is newer and the problem of module (equipment) ordering is solved by
constructing them explicitly in main().

manalyzer commit:

commit 6ca130808cd05ead734450391bf6defc8335c04a (HEAD -> master, origin/master, origin/HEAD)
Author: Konstantin Olchanski <olchansk@daq00.triumf.ca>
Date:   Tue Mar 25 15:53:13 2025 -0700

    implement TARunObject::fModuleOrder to specify required ordering of analyzer modules

K.O.
  3000   25 Mar 2025   Konstantin Olchanski   Forum   TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
> The question was about the TMFeRpcHandlerInterface, not the TARunObject interface.  Derived classes of TARunObject do indeed work as expected in our 
> environment.  We have worked around the issue by using an implementation of TARunObject as well as the (separate) implementation of 
> TMFeRpcHandlerInterface.

Then I do not understand the question. TMFeRpcHandlerInterface stuff is only used when running online and connected to MIDAS. How does it come into the
picture when you analyze a data file offline? ProcessMidasOnlineTmfe() does not run, and the RpcHandler object is not constructed.

Maybe if you point me to your source code, I can see what you are doing?

K.O.
  3001   25 Mar 2025   Konstantin Olchanski   Suggestion   improved find_package behaviour for Midas
> https://cmake.org/cmake/help/latest/guide/using-dependencies/index.html#guide:Using%20Dependencies%20Guide

Thank you for providing a link to the latest cmake find_package() guide.

I notice that this documentation was added in cmake 3.24, released circa Nov 2022,
and does not exist in older versions. (It is easy to see in the git history the last
time I touched any cmake stuff in midas.)

I still see no documentation on "this is how you write a package that other people can import
using find_package()". I only see documentation on how to use find_package() on packages
written by somebody who somehow already knows how to do it.

> In the original message to this thread i posted reference to PR
> (https://bitbucket.org/tmidas/midas/pull-requests/48)

This pull request was railroaded through during the holidays without any
discussion on this forum. I was not given an opportunity to comment on it;
it was pushed and merged faster than I could blink.

Bottom line: I voted against using cmake and was overruled. To me this cmake stuff
is only a source of wasted time and bad feelings.

If midas is required to use cmake, we should have somebody on the team who at least
understands it and, if not loves it, at least does not hate it.

K.O.
  3002   25 Mar 2025   Konstantin Olchanski   Bug Report   please fix compiler warning
Confirming warning is fixed. Thanks! K.O.

> > Unnamed person who added this clever bit of c++ coding, please fix this compiler warning. Stock g++ on Ubuntu LTS 24.04. Thanks in advance!
> > 
> > /home/olchansk/git/midas/src/system.cxx: In function ‘std::string ss_execs(const char*)’:
> > /home/olchansk/git/midas/src/system.cxx:2256:43: warning: ignoring attributes on template argument ‘int (*)(FILE*)’ [-Wignored-attributes]
> >  2256 |    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(cmd, "r"), pclose);
> > 
> > K.O.
> 
> Replace the code with:
> 
>    auto pclose_deleter = [](FILE* f) { pclose(f); };
>    auto pipe = std::unique_ptr<FILE, decltype(pclose_deleter)>(
>       popen(cmd, "r"),
>       pclose_deleter
>    );
> 
> 
> Hope this is now warning-free.
> 
> Stefan
  3003   25 Mar 2025   Konstantin Olchanski   Bug Report   please fix mscb compiler warning
> I hopefully fixed the warning (narrowing down from size_t to int). Please double check with your compiler.

Nope, it turns out the complaint is about the read() size argument; they really, really want it to be
size_t.

Looks like the correct sequence is:

size_t size = file_size_somehow();
char* buf = (char*)malloc(size+1);
ssize_t rd = read(fd, buf, size);
if (rd < 0) { complain(); return; } // read() error
if ((size_t)rd != size) { complain(); return; } // cast to size_t is safe after we know rd >= 0
buf[rd] = 0; // make sure data is NUL terminated

What a mess. I want my UNIX V7 and K&R C back!

Perhaps I should add to system.cxx: ss_read_file(const char* filename, std::vector<char> &data);
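
Something like this, perhaps (a rough sketch only, reading in chunks to sidestep the size_t/ssize_t dance; the return type and error handling are my guesses, not committed code):

#include <vector>
#include <fcntl.h>
#include <unistd.h>

bool ss_read_file(const char* filename, std::vector<char>& data)
{
   int fd = open(filename, O_RDONLY);
   if (fd < 0)
      return false;
   data.clear();
   char buf[4096];
   while (true) {
      ssize_t rd = read(fd, buf, sizeof(buf));
      if (rd < 0) { close(fd); return false; } // read() error
      if (rd == 0) break;                      // EOF
      data.insert(data.end(), buf, buf + rd);  // append this chunk
   }
   close(fd);
   return true;
}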

Fixed it in mscb.cxx, odbedit.cxx, mhttpd.cxx and msequencer.cxx, committed, pushed.

U-24 builds without warnings. fwew...

K.O.
  3004   25 Mar 2025   Konstantin Olchanski   Bug Report   Default write cache size for new equipments breaks compatibility with older equipments
>  > All this is kind of reasonable, as only two settings of write cache size are useful: 0 to 
> > disable it, and 10 Mbytes to limit semaphore locking rate to reasonable value for all event 
> > rate and size values practical on current computers.
> 
> Indeed KO is correct that only 0 and 10MB make sense, and we cannot mix it. Having the cache setting in the equipment table is 
> cumbersome. If you have 10 slow control equipment (cache size zero), you need to add many zeros at the end of 10 equipment 
> definitions in the frontend. 
> 
> I would rather implement a function or variable similar to fEqConfWriteCacheSize in the tmfe framework also in the mfe.cxx
> framework, then we only need to add one line like
> 
> gEqConfWriteCacheSize = 0;
> 
> in the frontend.cxx file and this will be used for all equipments of that frontend. If nobody complains, I will do that in April when I'm 
> back from Japan.

Cache size is per-buffer. If different equipments write into different event buffers, it should be possible to set different cache sizes.

Perhaps have:

set_write_cache_size("SYSTEM", 0);
set_write_cache_size("BUF1", bigsize);

with an internal std::map<std::string,size_t> holding the write cache size for each named buffer.
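
A minimal sketch of what that could look like (function and variable names are hypothetical; nothing like this is implemented yet):

#include <map>
#include <string>

// hypothetical per-buffer write cache size table
static std::map<std::string, size_t> gWriteCacheSize;

void set_write_cache_size(const char* buffer_name, size_t size)
{
   gWriteCacheSize[buffer_name] = size;
}

// looked up when the frontend opens the buffer, falling back to the 10 Mbyte default
size_t get_write_cache_size(const char* buffer_name)
{
   auto it = gWriteCacheSize.find(buffer_name);
   if (it != gWriteCacheSize.end())
      return it->second;
   return 10*1024*1024;
}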

K.O.
  3006   28 Mar 2025   Konstantin Olchanski   Forum   TMFeRpcHandlerInterface::HandleEndRun when running offline on a Midas file
I do not understand what you are doing. If you are offline, there is no TMFE singleton instance,
there is nothing for TMFeRpcHandlerInterface to attach to, and there is nobody to call the TMFeRpcHandlerInterface methods.

Maybe what you are asking for is a mode where you analyze data from a file, but you want your analysis code
to think that it is online and that it is analyzing live data. This requires creating a fake TMFE singleton, attaching
TMFeRpcHandlerInterface to this fake TMFE singleton and using ProcessMidasOnlineTmfe(), driven by all this fake stuff:
a fake OpenBuffer() that actually opens a file, a fake ReceiveEvent() that actually reads from a file, fake callbacks
for begin and end run, etc.

That's a lot of work, but for what purpose? What is it about the existing offline and online modes that you do not like,
and how would all this fake stuff make it better for you?

P.S. This is the 3rd version of my reply. I wrote and deleted 2 versions. I think I completely do not understand
what you are doing and you completely do not understand what I am saying. Communication is not happening.

P.P.S. It is simplest if you show me your code (email, elog); I am quite good at reading code and divining what
people are trying to do. You do not have to show me any of your secret secret stuff.

K.O.

> This was exactly the question: should I expect it to run?  There's no point in the HandleBinaryRpc method offline, but there's an argument that the HandleBeginRun/HandleEndRun methods have a use.
> I have the answer and we have a workaround, thanks.
> 
> > then I do not understand the question. TMFeRpcHandlerInterface stuff is only used when running online and connected to MIDAS. How does it come into the 
> > picture when you analyze a data file offline? ProcessMidasOnlineTmfe() does not run, the RpcHandler object is not constructed.
> > 
> > maybe if you point me to your source code, I can see what you are doing?
> > 
> > K.O.
  3007   28 Mar 2025   Konstantin Olchanski   Info   mjsroot added
I need to look at histograms inside a ROOT file, but all the old ways of doing this no longer work. (In theory I can scp the ROOT file to
the computer I am sitting in front of, but this assumes I have a working ROOT there. Anyhow, it is pointless to fight this; all modern
packages are written to only work on the developer's laptop.)

- root new TBrowser starts a web server, tries to open firefox (and fails)
- root --web=off new TBrowser using ssh X11 tunnel no longer works, ROOT X11 graphics refresh is broken
- macos root binary kit is built without X11 support, root --web=off does not work at all
- root7 recommended "rootssh" prints an error message (and fails)

What does work well is JSROOT which we use to look at manalyzer live histograms (through apache and mhttpd web proxies).

So I wrote mjsroot.exe. It opens a ROOT file and starts JSROOT to look at it (plus a bit of dancing around to make it actually work):

mjsroot.exe -R8082 root_output_files/output00371.root

To actually see the histograms:

a) if you are sitting in front of the same computer, open http://localhost:8082
b) if you are somewhere else, start an ssh tunnel: ssh daq13 -L8082:localhost:8082, open http://localhost:8082
c) if daq13 is running mhttpd, set up an http proxy:
   set ODB /webserver/proxy/mjsroot to http://localhost:8082
   open https://daq13.triumf.ca/proxy/mjsroot/
   also set ODB /alias/mjsroot to "/proxy/mjsroot/"
   reload the MIDAS status page, observe "mjsroot" listed on the left-hand side, open it.

K.O.
 
  3008   28 Mar 2025   Konstantin Olchanski   Bug Fix   midas cmake update
MIDAS git tag midas-2025-01-a introduced an incompatible change to "include(midas-targets.cmake)". Instead of "midas" one now has to
say "midas::midas", as updated below. K.O.

> 
> #
> # CMakeLists.txt for alpha-g frontends
> #
> 
> cmake_minimum_required(VERSION 3.12)
> project(agdaq_frontends)
> 
> include($ENV{MIDASSYS}/lib/midas-targets.cmake)
> 
> add_compile_options("-O2")
> add_compile_options("-g")
> #add_compile_options("-std=c++11")
> add_compile_options(-Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function)
> add_compile_options("-DTMFE_REV0")
> add_compile_options("-DOS_LINUX")
> 
> add_executable(feevb feevb.cxx TsSync.cxx)
> target_link_libraries(feevb midas::midas)
> 
> add_executable(fectrl fectrl.cxx GrifComm.cxx EsperComm.cxx JsonTo.cxx KOtcp.cxx $ENV{MIDASSYS}/src/tmfe_rev0.cxx)
> target_link_libraries(fectrl midas::midas)
> 
> #end
  3009   28 Mar 2025   Konstantin Olchanski   Suggestion   improved find_package behaviour for Midas
I figured out the breakage, added a git tag to identify where the cmake incompatible change was made (roughly) 
and posted a note on how to fix it. Please reimburse me for the 2 hours I had to spend on this instead of doing 
useful work. K.O.
  3010   28 Mar 2025   Konstantin Olchanski   Bug Fix   manalyzer -R8082 --jsroot
When processing MIDAS files offline, JSROOT did not work: -Rxxx worked and the http
connection would open, but it would not serve any histograms. This should now be
fixed.

In addition, after processing all input MIDAS files, manalyzer would normally
exit and JSROOT would abruptly stop. To look at the final results one had to open the
ROOT files using some other method (roody, TBrowser, mjsroot, etc.).

I have now added a command line switch "--jsroot": if supplied, after processing all
input MIDAS files, manalyzer will keep running in JSROOT server mode (same as
mjsroot).

"manalyzer -R8082 --jsroot run*.mid.lz4" now does something useful: open 
http://localhost:8082 (or ssh tunnel or mhttpd proxy per my mjsroot message) and 
watch histograms fill in real time, after analysis finishes, keep looking at the 
final results until bored. stop manalyzer using Ctrl-C. (we should add a "Stop 
JSROOT" botton to the JSROOT main page).

MIDAS commit 1d0d6448c3ec4ffd225b8d2030fe13e379fcd007

K.O.
  3011   30 Mar 2025   Konstantin Olchanski   Bug Fix   manalyzer improvements
updated manalyzer:

- similar to the --jsroot switch, in online mode the ROOT output file now remains open after the run is stopped. Previously, after the run was
stopped, all histograms etc. would disappear from JSROOT, making it hard to look at the full collected and analyzed data.

- there was a buglet in the multithreading code: if some module cannot analyze flow events as fast as we can read data from disk,
the flow event queue of the first module thread would grow without bound, potentially consuming lots of RAM. This is
because the queue size check for the first module thread was disabled to avoid a deadlock. I have now added the queue size check to the
main event loop (both offline mode and online mode) and this problem should now be fixed.

- also adjusted the default queue size from 100 to 1000 and queue-full wait sleep time from 100 us to 10 us.

- another buglet was in the flow event processing. Per the README, module EndRun() should not generate flow events (instead, they
should be generated in PreEndRun()). Previously this was not enforced; now there is an error message about this and the offending
flow events are deleted (they were not being processed anyway).

K.O.
  3017   01 Apr 2025   Konstantin Olchanski   Bug Fix   ODB and event buffer - release semaphore before abort() and core dump
There is a long-standing problem with ODB and event buffers: if they detect an
internal data inconsistency and cannot continue running, they call abort() to
dump core and stop.

The problem is that in some code paths they do this while holding the ODB or event
buffer semaphore. (The Linux kernel automatically releases SYSV semaphores after the
core dump is finished and the program holding them has stopped.)

If the core dump takes longer than 10 seconds (for whatever reason, but we see this
often enough), all other programs that wait for ODB or event buffer access will
also time out and crash (with a core dump). The result is a core dump storm; at
the end all MIDAS programs have crashed. (Luckily recovery is easy: simply
restart everything.)

Now I realize that in many situations we do not need to hold the semaphore while
dumping core - the content of the ODB and event buffer shared memories is not
important for debugging the crash - and it is safe to release the semaphore
before calling abort().
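
The pattern is roughly this (illustrative only; a POSIX semaphore stands in for the SYSV semaphore that ODB and the event buffers actually use, and the function name is made up):

#include <stdlib.h>
#include <semaphore.h>

// on detecting an unrecoverable internal inconsistency:
void fatal_error_release_lock_first(sem_t* sem)
{
   // the shared memory content is not needed to debug this crash,
   // so give up the lock before dumping core; other clients keep running
   sem_post(sem);
   abort();
}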

This is now implemented for ODB and event buffers. Hopefully core dump storms 
will not happen again.

commit 96369c29deba1752fd3d25bed53e6594773d7e1a
release ODB semaphore before calling abort() to dump core. if core dump takes 
longer than 10 sec all other midas programs will timeout and crash.

commit 2506406813f1e7581572f0d5721d3761b7c8e8dd
unlock event buffer before calling abort() in bm_validate_client_index_locked(), 
refactor bm_get_my_client_locked()


K.O.
  3018   01 Apr 2025   Konstantin Olchanski   Bug Report   ODB corruption
We see ODB corruption crashes in the DS20k vertical slice MIDAS instance.

The crash is in memset() called by db_delete_key1() called by cm_connect_experiment().

I looked at the source code and I see that ODB pkey and hkey validation is absent
from most iterators, so it is possible for a "bad" pkey to cause corruption. Many
other places in the ODB code use db_get_pkey() and db_validate_hkey() to prevent
invalid data from causing further corruption and breakage.

Also, db_delete_key1() needs to be refactored and renamed to db_delete_key_wlocked().

I will not do this immediately today, but hopefully next week or so.

The stack trace is attached; observe how free_data() was called on a completely invalid pkey:
bad pkey->type, bad pkey sizes, etc.

#0  __memset_avx512_unaligned_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:250
250	../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such file or directory.
(gdb) bt
#0  __memset_avx512_unaligned_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:250
#1  0x00005ad4102b4217 in memset (__len=<optimized out>, __ch=0, __dest=<optimized out>) at /usr/include/x86_64-linux-
gnu/bits/string_fortified.h:59
#2  free_data (pheader=pheader@entry=0x75aaea4f4000, address=0x75aaed4cea50, size=<optimized out>, caller=caller@entry=0x5ad4102ffb6c 
"db_delete_key1") at /home/dsdaqdev/packages_common/midas/src/odb.cxx:513
#3  0x00005ad4102b6a5b in free_data (caller=0x5ad4102ffb6c "db_delete_key1", size=<optimized out>, address=<optimized out>, 
pheader=0x75aaea4f4000) at /home/dsdaqdev/packages_common/midas/src/odb.cxx:453
#4  db_delete_key1 (hDB=1, hKey=<optimized out>, level=<optimized out>, follow_links=0) at 
/home/dsdaqdev/packages_common/midas/src/odb.cxx:3789
#5  0x00005ad4102b6979 in db_delete_key1 (hDB=1, hKey=288672, level=0, follow_links=0) at 
/home/dsdaqdev/packages_common/midas/src/odb.cxx:3731
#6  0x00005ad4102cc923 in db_create_record (hDB=hDB@entry=1, hKey=hKey@entry=0, orig_key_name=orig_key_name@entry=0x7ffd75987280 
"/Programs/ODBEdit", init_str=<optimized out>) at /home/dsdaqdev/packages_common/midas/src/odb.cxx:12916
#7  0x00005ad4102cca73 in db_create_record (hDB=hDB@entry=1, hKey=hKey@entry=0, orig_key_name=orig_key_name@entry=0x7ffd75987280 
"/Programs/ODBEdit", init_str=<optimized out>) at /home/dsdaqdev/packages_common/midas/src/odb.cxx:12942
#8  0x00005ad4102a00ba in cm_set_client_info (hDB=1, hKeyClient=0x7ffd75987420, host_name=0x5ad412262ee0 "dsdaqgw.triumf.ca", 
client_name=0x7ffd759874c0 "ODBEdit", hw_type=<optimized out>, password=<optimized out>, watchdog_timeout=<optimized out>)
    at /usr/include/c++/11/bits/basic_string.h:194
#9  0x00005ad4102a902b in cm_connect_experiment1 (host_name=<optimized out>, host_name@entry=0x7ffd759876c0 "", 
default_exp_name=default_exp_name@entry=0x7ffd759876a0 "vslice", client_name=client_name@entry=0x5ad4102f28fa "ODBEdit", 
func=func@entry=0x0, 
    odb_size=odb_size@entry=1048576, watchdog_timeout=<optimized out>, watchdog_timeout@entry=10000) at 
/usr/include/c++/11/bits/basic_string.h:194
#10 0x00005ad41027e58d in main (argc=3, argv=0x7ffd759881e8) at /home/dsdaqdev/packages_common/midas/progs/odbedit.cxx:3025
(gdb) up
...
#4  db_delete_key1 (hDB=1, hKey=<optimized out>, level=<optimized out>, follow_links=0) at 
/home/dsdaqdev/packages_common/midas/src/odb.cxx:3789
3789	            free_data(pheader, (char *) pheader + pkey->data, pkey->total_size, "db_delete_key1");
(gdb) p pkey
$1 = (KEY *) 0x75aaea53b400
(gdb) p *pkey
$2 = {type = 1684370529, num_values = 0, name = '\000' <repeats 16 times>, "xQ\375\002\004\000\000\000\004\000\000\000\a\000\000", data = 0, 
total_size = 290944, item_size = 1743544378, access_mode = 0, notify_count = 0, next_key = 15, parent_keylist = 1, 
  last_written = 1953785965}
(gdb) 

K.O.
  3019   01 Apr 2025   Konstantin Olchanski   Suggestion   Sequencer ODBSET feature requests
> ODBSET "/Path/value[1,3,5]"
> ODBSET "/Path/value[1-5,7-9]"

We support this array index syntax in several places,
specifically in the JavaScript ODB get and set mjsonrpc RPCs.
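
For illustration, the supported form simply expands to a plain list of indices, along these lines (a standalone C++ sketch, not the actual MIDAS parser):

#include <string>
#include <vector>
#include <sstream>

// expand an index list such as "1,3,5-9" into {1,3,5,6,7,8,9}
std::vector<int> expand_index_list(const std::string& s)
{
   std::vector<int> v;
   std::stringstream ss(s);
   std::string item;
   while (std::getline(ss, item, ',')) {
      size_t dash = item.find('-');
      if (dash == std::string::npos) {
         v.push_back(std::stoi(item));           // single index, e.g. "3"
      } else {
         int first = std::stoi(item.substr(0, dash));
         int last  = std::stoi(item.substr(dash + 1));
         for (int i = first; i <= last; i++)     // inclusive range, e.g. "5-9"
            v.push_back(i);
      }
   }
   return v;
}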

> SET GOODCHANNELS, "1-5,7,9"; ODBSET "/Path/value[$GOODCHANNELS]"
> SET BADCHANNELS, "6,8"; ODBSET "/Path/value[!$BADCHANNELS]"
> ODBSET "/Path/value[0-100, except $BADCHANNELS]"

This is very clever syntax, but I have not seen any programming
language actually implement it (not even perl).

There must be a good reason why nobody does this; probably we should not do it either.

But as Stefan said (and it is my opinion too), the route of extending the MIDAS sequencer
language until it becomes a superset of python, perl, tcl, bash, javascript
and algol is not a sustainable approach. I once looked at using Lua for this,
but I think basing it on a full-featured programming language like python
is better.

K.O.