ID | Date | Author | Topic | Subject
2617
|
06 Oct 2023 |
Konstantin Olchanski | Info | default midas history switched to "FILE" and "PerVariable" history | We are very happy with the "FILE" implementation of MIDAS history and it is time
to make it the default for new experiments. This history driver works best if
"per variable" history is alos enabled. (SQL history already only works in "per-
variable" mode). commit 676051b3024965bd8a04da112965a141d5f61a39
K.O. |
2618
|
06 Oct 2023 |
Konstantin Olchanski | Info | new history panel editor | the new history panel editor has been activated. it is meant to work the same as
the old editor, with some improvements to the history variables selection page.
this new version is written in html+javascript and it will be easier to improve,
update and maintain compared to the old version written in c++. the old history
panel editor is still usable and accessible by pressing the "edit in old editor"
button. please report any problems, quirks and suggested improvements in this thread or in
the bitbucket bug reports. K.O. |
2625
|
13 Nov 2023 |
Konstantin Olchanski | Forum | mlogger does not HAVE_ROOT | > I am setting up Midas (v2.1) for a new experiment. We want to save the data in the ROOT format. We installed ROOT from source (v6.28/06), and ROOTSYS is set. When we compile Midas, it says that it found ROOT. We set up a second logger channel where we set the filename to run%05d.root, the format to ROOT, and the output to ROOT. Nevertheless, when starting a run, the logger writes the error that "channel '1' requested ROOT output, but mlogger is built without HAVE_ROOT". From the CMake file, I would assume that it is set automatically if ROOT is found. Do you have any idea why the mlogger does not find ROOT or save the data in the ROOT format?
when you build midas using "make cmake", it prints information about packages that it finds (or does not). please post this here. it would be even more helpful if you post the whole output of "make cmake" (make cmake >& make.log, post make.log here as attachment).
historically, this problem has been a major annoyance over the years: mlogger would not find ROOT when needed, would find the wrong ROOT when not needed, or the ROOT at run time would be different from the ROOT at build time. "cmake" has been of no help in improving on this, it only made all debugging more difficult.
K.O. |
2627
|
14 Nov 2023 |
Konstantin Olchanski | Forum | mlogger does not HAVE_ROOT | > Finally, make sure you start "rmlogger" and not "mlogger". Only "rmlogger" contains the ROOT binding.
Stefan is right, I forgot about this. As a solution to our troubles, mlogger is now built without ROOT support; use rmlogger instead.
K.O. |
2661
|
27 Dec 2023 |
Konstantin Olchanski | Forum | MidasWiki updated to 1.39.6 | MidasWiki was updated to current mediawiki LTS 1.39.6 supported until Nov 2025,
see https://www.mediawiki.org/wiki/Version_lifecycle
as a downside, after this update, I see large amounts of "account request" spam,
something that did not exist before. I suspect the new mediawiki phones home to
subscribe itself to some "please spam me" list.
if you want a user account on MidasWiki, please email me or Stefan directly, we
will make it happen.
K.O. |
2662
|
29 Dec 2023 |
Konstantin Olchanski | Bug Report | Compilation error on RPi | > git pull
> git submodule update
confirmed, I just ran into this myself. I think "make" should warn about out-of-date
git submodules. Also check that the build git version is tagged with "-dirty".
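In the meantime, a quick manual check with plain git (nothing midas-specific):

git submodule status        # a '+' prefix marks a submodule checked out at a commit different from the one recorded
git describe --tags --dirty # the version string gets a "-dirty" suffix if the working tree has local modifications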
K.O. |
2663
|
02 Jan 2024 |
Konstantin Olchanski | Forum | midas.triumf.ca alias moved to daq00.triumf.ca | the DNS alias for midas.triumf.ca moved from old ladd00.triumf.ca to new
daq00.triumf.ca. same as before, it redirects to the MidasWiki and to the midas
forum (elog) that moved from ladd00 to daq00 quite some time ago. if you see any
anomalies in accessing them (broken links, bad https certificates), please report
them to this forum or to me directly at olchansk@triumf.ca. K.O. |
2666
|
03 Jan 2024 |
Konstantin Olchanski | Forum | midas.triumf.ca alias moved to daq00.triumf.ca | > I found the first issue: The link to
> https://midas.triumf.ca/MidasWiki/index.php/Custom_plots_with_mplot
fixed.
https://midas.triumf.ca/Custom_plots_with_mplot
also works.
I tried to get rid of the redirect to daq00 completely and make the whole MidasWiki show up
under midas.triumf.ca, but discovered/remembered that I cannot do this without changing
MidasWiki config [$wgServer = "https://daq00.triumf.ca";] which causes mediawiki to
redirect everything to daq00 (using the 301 "moved permanently" reply, ouch!). In theory,
if I change it to "https://midas.triumf.ca" it will redirect everything there instead,
but I am hesitant to make this change. It has been like this since forever and I have no
idea what else will break if I change it.
K.O. |
2687
|
28 Jan 2024 |
Konstantin Olchanski | Forum | number of entries in a given ODB subdirectory ? | Very good question. It exposes a very nasty problem: the race condition between "ls" and "rm". While you are
looping over directory entries, somebody else is completely permitted to remove one of the files (or add more
files), making the output of "ls" incorrect (it contains non-existent/removed files and does not contain newly
added files). Even the simple count of the number of files can be wrong.
Exactly the same problem exists in ODB. As you loop over directory entries, some other ODB client can remove or
add new entries.
To help with this, I considered adding a db_ls() function that would take the odb lock, atomically iterate over
a directory and return an std::vector<std::string> with the names of all entries. (The current odb iterator returns ODB
handles that may be invalid if the corresponding entry was removed while we were iterating). Unfortunately the
delete/add race condition remains, some returned entries may be invalid or missing.
For your specific application, you can swear that you will never add/delete files "at the wrong time", and you
will not see this problem until one of your users writes a script that uses odbedit to add/remove subdirectory
entries at exactly the wrong time. (You run your "ls" in the BeginRun() handler of your frontend, they run their
"rm" from theirs, so both run at the same time: a race condition.)
Closer to your question, I think it is simplest to always iterate over the subdirectory, collect names of all
entries, then work with them:
std::vector<std::string> names;
for (int i = 0; ; i++) {
   HNDLE hSubkey;
   if (db_enum_key(hDB, hKeyParent, i, &hSubkey) != DB_SUCCESS)
      break; // DB_NO_MORE_SUBKEYS: we have seen every entry
   KEY key;
   db_get_key(hDB, hSubkey, &key);
   names.push_back(key.name);
}
for (const std::string& name : names)
   work_on(name);
instead of:
size_t n = db_get_num_entries(); // hypothetical function, as proposed above
for (size_t i = 0; i < n; i++) {
   std::string name = "FPGA" + std::to_string(i);
   work_on(name);
}
K.O.
> Dear MIDAS experts,
>
> - I have a detector configuration with a variable number of hardware components - FPGA's receiving data
> from the detector. They are described in ODB using a set of keys ranging
> from "/Detector/FPGAs/FPGA00" .... to "/Detector/FPGAs/FPGA68".
> Each of "FPGAxx" corresponds to an ODB subdirectory containing parameters of a given FPGA.
>
> The number of FPGAs in the detector configuration is variable - [independent] commissioning
> of different detector subsystems involves different number of FPGAs.
>
> In the beginning of the data taking one needs to loop over all of "FPGAxx",
> parse the information there and initialize the corresponding FPGAs.
>
> The actual question sounds rather trivial - what is the best way to implement a loop over them?
>
> - it is certainly possible to have the number of FPGAs introduced as an additional configuration parameter,
> say, "/Detector/Number_of_FPGAs", and this is what I have resorted to right now.
>
> However, not only does that look ugly, but it also opens a way to make a mistake
> and have the Number_of_FPGAs, introduced separately, different from the actual number
> of FPGA's in the detector configuration.
>
> I therefore wonder if there could be a function, smth like
>
> int db_get_n_keys(HNDLE hdb, HNDLE hKeyParent)
>
> returning the number of ODB keys with a common parent, or, to put it simpler,
> a number of ODB entries in a given subdirectory.
>
> And if there were a better solution to the problem I'm dealing with, knowing it might be helpful
> for more than one person - configuring detector readout may require to deal with a variable number
> of very different parameters.
>
> -- many thanks, regards, Pasha |
2688
|
28 Jan 2024 |
Konstantin Olchanski | Bug Report | Warnings about ODB keys that haven't been touched for 10+ years | > I don't immediately see a reason for saying that if a DB key is older than 10 yrs, it may not be valid.
>
> However, it would be worth learning what was the logic behind choosing 10 yrs as a threshold.
> If 10 is just a more or less arbitrary number, changing 10 --> 100 seems to be the way to go.
Please run "git blame" to find out who added that check.
If I remember right, it was added to complain/correct dates in the future.
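For example (the search string and the file name are guesses, adjust as needed):

git grep -n "10 year" src/odb.cxx    # locate the line with the check
git log -S "10 year" -- src/odb.cxx  # find the commit that introduced it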
I think the oldest experiment at TRIUMF whose odb we can still load into current MIDAS is TWIST,
now about 25 years old. The purpose of loading the odb would be to test the history function,
to see if we can look at 10-15 year old histories. (TWIST history is in the latest FILE format,
so it will load).
I think this age check should be removed, but there must be *some* check for invalid/bogus timestamps. Or
not; we should check whether MIDAS cares about timestamps at all. If ODB functions never use/look at timestamps,
maybe we are okay with bogus timestamps. They may look funny in the odb editor, but that's it.
K.O. |
2689
|
28 Jan 2024 |
Konstantin Olchanski | Forum | History tags | > This part of the system has been designed by KO, so he should reply here.
That's right. Some of this stuff is historical gibberish that is no longer needed
for FILE and SQL histories.
/History/Events is needed to create a persistent mapping between history event names
and history event ids (at some point the history event id was the same as the equipment event
id, with the obvious problems when equipment event ids are duplicated, reused,
renamed or deleted).
/History/Tags was used by the history editor to speed up "give me all tag names
for this history event name". With the "MIDAS" history storage this required
reading a lot of data from disk. With the "FILE" history and cached ZFS SSD, disk
access is much cheaper and caching history event names and tags in odb is no
longer necessary.
/History/Tags should probably be removed (but check that nobody uses it first).
/History/Events has to remain as long as "MIDAS" history storage is still used.
K.O. |
2690
|
28 Jan 2024 |
Konstantin Olchanski | Forum | a scroll option for "add history variables" window? | > > Have you updated to the current midas version? This issue has been fixed a while ago.
>
> Hi Stefan, thanks a lot! I pulled from the head, and the scrolling works now. -- regards, Pasha
Right, I remember running into this problem, too.
If you have some ideas on how to better present 100500 history variables, please shout out!
K.O. |
2691
|
28 Jan 2024 |
Konstantin Olchanski | Forum | dump history FILE files | $ cat mhf_1697445335_20231016_run_transitions.dat
event name: [Run transitions], time [1697445335]
tag: tag: /DWORD 1 4 /timestamp
tag: tag: UINT32 1 4 State
tag: tag: UINT32 1 4 Run number
record size: 12, data offset: 1024
...
data is in fixed-length record format. from the file header, you read "record size" is 12 and data starts at offset 1024.
the 12 bytes of the data record are described by the tags:
4 bytes of timestamp (DWORD, unix time)
4 bytes of State (UINT32)
4 bytes of "Run number" (UINT32)
endianness is "local endian", which means "little endian", as we have no big-endian hardware anymore to test endian conversions.
file format is designed for reading using read() or mmap().
and you are right, mhdump does not work on these files. I guess I can write another utility that does what I just described and spews the numbers to stdout.
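Until then, a minimal reader is easy to write against the layout above. A sketch (not an official midas tool; the file name, offset and record layout are the ones from this example):

#include <cstdio>
#include <cstdint>

int main()
{
   // values below come from the file header shown above
   const long data_offset = 1024;
   struct Record { uint32_t timestamp; uint32_t state; uint32_t run_number; } r; // record size 12
   FILE* f = fopen("mhf_1697445335_20231016_run_transitions.dat", "rb");
   if (!f) { perror("fopen"); return 1; }
   fseek(f, data_offset, SEEK_SET);
   while (fread(&r, sizeof(r), 1, f) == 1) // data is local (little) endian, per above
      printf("%u %u %u\n", (unsigned)r.timestamp, (unsigned)r.state, (unsigned)r.run_number);
   fclose(f);
   return 0;
}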
K.O. |
2692
|
28 Jan 2024 |
Konstantin Olchanski | Forum | slow control frontends - how much do they sleep and how often their drivers are called? | > I have implemented a number of slow control frontends which are directed to update the
> history once in every 10 sec, and they do just that.
I suggest that you switch from the old mfe.c frontend framework to the new tmfe framework that was
designed to solve exactly this type of problem.
Look at .../midas/progs/tmfe_example*.cxx
You have a choice of:
- single-threaded frontend: most robust, no race conditions, but readout is interrupted during
begin/end of run.
- two-threaded frontend: your periodic equipments run in one thread, the midas loop and rpc run in a
different thread, and you have to handle locking yourself.
- you can run each of your equipments in its own thread without help from the framework; it is
obvious how to do it if you can program c++ threads: "new std::thread" to create/start a thread,
stop threads using a binary flag, thread->join() to reap them at the end (or the thread sanitizer will
complain). see the sketch below.
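A bare-bones sketch of that third option in plain c++ (no midas calls shown, just the thread lifecycle):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> gRunning{true};

void periodic_readout()
{
   while (gRunning) {
      // read your hardware, update odb, generate events, etc.
      std::this_thread::sleep_for(std::chrono::seconds(10));
   }
}

int main()
{
   std::thread* t = new std::thread(periodic_readout); // create/start the thread
   // ... the rest of the frontend runs here ...
   gRunning = false; // the binary flag tells the thread to stop
   t->join();        // reap it at the end, keeps the thread sanitizer happy
   delete t;
   return 0;
}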
K.O. |
2693
|
28 Jan 2024 |
Konstantin Olchanski | Forum | the logic of handling history variables ? | MIDAS history is very simple:
- from your frontend, you write your history data to ODB /eq/xxx/variables (see below)
- mlogger has a hotlink to all /eq/*/variables; it will "see" the new data and write it to the history file (see below)
- you should see the history file grow using "ls"
- the history web page in your browser sends a "give me more data" JSON-RPC request to mhttpd
- mhttpd looks at the history file, and if there is new data (the file got bigger) it sends it to the web page
- the web page shows the new data.
where things usually go wrong:
- mlogger only looks for new history variables on startup and on begin-of-run. if you add new stuff in your frontend, you
will not see it until you restart mlogger or start a new run.
- mlogger only looks at history data if corresponding "/eq/xxx/common/log history" is non-zero. for best effect, set it to
"1". (or "0" to turn history off).
- the history file is not growing: most likely mlogger does not "see" your new data
- the timestamps of entries in /eq/xxx/variables are not getting updated: most likely the frontend is not writing them, and there is no
new data for mlogger to "see" and write to file.
The frontend has several ways of writing to /eq/xxx/variables:
- write to ODB directly using the ODB API: db_set_data(), mvodb->Wx(), etc. This is the most foolproof method (a minimal
sketch is shown right after this list). Use it in conjunction with a printf() statement to make sure you actually do write
to ODB. Sometimes your frontend event loop fails to run, a bug/failure that has nothing to do with midas history.
- generate a midas event and set the per-equipment "write event to ODB" flag (RO_ODB for mfe.c frontends); the mfe/tmfe
framework will write the event data to ODB, each data bank will be written to /eq/xxx/variables/BANKNAME, and the data type
is taken from the event data bank definition.
This second method sometimes malfunctions; typical problems are a missing RO_ODB in the equipment table, or the equipment
table in ODB overwriting the value in the source code (this is confusing in mfe.c frontends).
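Coming back to the first method, a minimal sketch using db_set_value() (the equipment name "slow" and variable name "HV"
are just placeholders for whatever your frontend uses):

#include "midas.h"
#include <cstdio>

void write_my_variables()
{
   HNDLE hDB;
   cm_get_experiment_database(&hDB, NULL);

   double v[3] = { 1.1, 2.2, 3.3 }; // whatever you just read from your hardware
   int status = db_set_value(hDB, 0, "/Equipment/slow/Variables/HV",
                             v, sizeof(v), 3, TID_DOUBLE);
   printf("db_set_value() status %d\n", status); // confirm that we really did write to ODB
}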
The least likely failure is "/eq/xxx/common/log history" set to a bogus value. Normal values are 0=history disabled,
1=history enabled; other values are only needed if you do not want mlogger to record history as often as you generate it,
i.e. you update /eq/xxx/variables every 1/sec, but you want mlogger to only record it 1/minute.
I hope this helps.
P.S. I notice your equipment tables do not have RO_ODB, so if you use the 2nd method (write history via event data banks),
it will not work.
K.O.
> Dear MIDAS developers,
>
> I'm trying to understand handling of the history (slow control) variables in MIDAS,
> and it seems that the behavior I'm observing is somewhat counterintuitive.
> Most likely, I just do not understand the implemented logic.
>
> As it is rather difficult to report on the behavior of the interactive program,
> I'll describe what I'm doing and illustrate the report with the series of attached
> screenshots showing the history plots and the status of the run control at different
> consecutive points in time.
>
> Starting with the landscape:
>
> - I'm running MIDAS, git commit=30a03c4c (the latest, as of today).
>
> - I have built the midas/examples/slowcont frontend with the following modifications.
> (the diffs are enclosed below):
>
> 1) the frequency of the history updates is increased from 60sec/10sec to 6sec/1sec
> and, in hope of having continuous updates, I replaced (RO_RUNNING | RO_TRANSITIONS)
> with RO_ALWAYS.
>
> 2) for convenience of debugging, midas/drivers/nulldrv.cxx is replaced with its clone,
> which instead of returning zeroes in each channel, generates a sine curve:
>
> V(t) = 100*sin(t/60)+10*channel
>
> - an active channel in /Logger/History is chosen to be FILE
>
> - /History/LoggerHistoryChannel is also set to FILE
>
> - I'm running mlogger and modified, as described, 'scfe' frontend from midas/examples/slowcont
>
> - the attached history plots include three (0,4 and 7) HV:MEASURED channels
>
>
> Now, the observations:
>
> 1) the history plots are updated only when a new run starts, no matter how hard
> I'm trying to update them by clicking on various buttons.
>
> The attached screenshots show the timing sequence of the run control states
> (with the times printed) and the corresponding history plots.
>
> The "measured voltages" change only when the next run starts - the voltage graphs
> break only at the times corresponding to the vertical green lines.
>
> 2) No matter for how long I wait within the run, the history updates are not happening.
>
> 3) if the time difference between the two run starts gets too large,
> the plotted time dependence starts getting discontinuities
>
> 4) finally, if I switch the logging channel from FILE to MIDAS (activate the MIDAS
> channel in /Logger/History and set /History/LoggerHistoryChannel to MIDAS),
> the updates of the history plots simply stop.
>
> MIDAS feels like a great DAQ framework, so I would appreciate any suggestion on
> what I could be doing wrong. I'd also be happy to give a demo in real time
> (via ZOOM/SKYPE etc).
>
> -- much appreciate your time, thanks, regards, Pasha
>
> ------------------------------------------------------------------------------
> diff --git a/examples/slowcont/scfe.cxx b/examples/slowcont/scfe.cxx
> index 11f09042..c98d37e8 100644
> --- a/examples/slowcont/scfe.cxx
> +++ b/examples/slowcont/scfe.cxx
> @@ -24,9 +24,10 @@
> #include "mfe.h"
> #include "class/hv.h"
> #include "class/multi.h"
> -#include "device/nulldev.h"
> #include "bus/null.h"
>
> +#include "nulldev.h"
> +
> /*-- Globals -------------------------------------------------------*/
>
> /* The frontend name (client name) as seen by other MIDAS clients */
> @@ -74,11 +75,11 @@ EQUIPMENT equipment[] = {
> 0, /* event source */
> "FIXED", /* format */
> TRUE, /* enabled */
> - RO_RUNNING | RO_TRANSITIONS, /* read when running and on transitions */
> - 60000, /* read every 60 sec */
> + RO_ALWAYS, /* read when running and on transitions */
> + 6000, /* read every 6 sec */
> 0, /* stop run after this event limit */
> 0, /* number of sub events */
> - 10000, /* log history at most every ten seconds */
> + 1000, /* log history at most every one second */
> "", "", ""} ,
> cd_hv_read, /* readout routine */
> cd_hv, /* class driver main routine */
> @@ -93,8 +94,8 @@ EQUIPMENT equipment[] = {
> 0, /* event source */
> "FIXED", /* format */
> TRUE, /* enabled */
> - RO_RUNNING | RO_TRANSITIONS, /* read when running and on transitions */
> - 60000, /* read every 60 sec */
> + RO_ALWAYS, /* read when running and on transitions */
> + 6000, /* read every 6 sec */
> 0, /* stop run after this event limit */
> 0, /* number of sub events */
> 1, /* log history every event as often as it changes (max 1 Hz) */
> ------------------------------------------------------------------------------
> [test_001]$ diff ../midas/examples/slowcont/nulldev.cxx ../midas/drivers/device/nulldev.cxx
> 13d12
> < #include <math.h>
> 150,154c149,150
> < if (channel < info->num_channels) {
> < // *pvalue = info->array[channel];
> < time_t t = time(NULL);;
> < *pvalue = 100*sin(M_PI*t/60)+10*channel;
> < }
> ---
> > if (channel < info->num_channels)
> > *pvalue = info->array[channel];
> ------------------------------------------------------------------------------ |
2698
|
29 Jan 2024 |
Konstantin Olchanski | Forum | number of entries in a given ODB subdirectory ? | > https://chat.openai.com/share/d927c78d-9914-4413-ab5e-3b0e5d173132
>
> Please note that you never can be 100% sure that the code from a LLM is correct
yup, it's wrong all right. it should be looping until db_enum_key() returns "no more keys",
not from 0 to N. this is the same as iterating over unix filesystem directory entries: opendir(),
loop readdir() until it returns NULL (end of directory), closedir().
K.O. |
2699
|
29 Jan 2024 |
Konstantin Olchanski | Forum | a scroll option for "add history variables" window? | familiar situation, "too much data": you dice it or slice it, still too much. BTW, you can try to generate the history
plot ODB entries from your program instead of from the history plot editor. K.O. |
2707
|
12 Feb 2024 |
Konstantin Olchanski | Info | MIDAS and ROOT 6.30 | Starting around ROOT 6.30, there is a new dependency on nlohmann-json3-dev from https://github.com/nlohmann/json.
If you use a Ubuntu-22 ROOT binary kit from root.cern.ch, the MIDAS build will bomb with the error: Could not find a package configuration file provided by "nlohmann_json"
Per https://root.cern/install/dependencies/ install it:
apt install nlohmann-json3-dev
After this MIDAS builds ok.
K.O. |
2709
|
13 Feb 2024 |
Konstantin Olchanski | Forum | forbidden equipment names ? | > equipment names are 'mu2edaq09:DTC0' and 'mu2edaq09:DTC1'
I think all names permitted for ODB keys are allowed as equipment names: any valid UTF-8,
with the forbidden chars being "/" (ODB path separator) and "\0" (C string terminator). Maximum length
is 31 bytes (plus the "\0" string terminator). (Fixed-length 32-byte names with an implied terminator
are no longer permitted).
The ":" character is used in history plot definitions and we are likely eventually change that,
history event names used to be pairs of "equipment_name:tag_name" but these days with per-variable
history, they are triplets "equipment_name,variable_name,tag_name". The history plot editor
and the corresponding ODB entries need to be updated for this. Then, ":" will again be a valid
equipment name.
I think if you disable the history for your equipments, MIDAS will stop complaining about ":" in the name.
K.O. |
2710
|
13 Feb 2024 |
Konstantin Olchanski | Bug Fix | string --> int64 conversion in the python interface ? | > > The symptoms are consistent with a string --> int64 conversion not happening
> > where it is needed.
>
> Thanks for the report Pasha. Indeed I was missing a conversion in one place. Fixed now!
>
Are we running these tests as part of the nightly build on bitbucket? They would be part of
the "make test" target. Correct python dependencies may need to be added to the bitbucket OS
image in bitbucket-pipelines.yml. (This is a PITA to get right).
K.O. |