24 Sep 2020, Gennaro Tortone, Forum, subrun
|
Hi,
I was wondering if there is a "mechanism" to run an executable
file after each subrun is closed...
I need to convert .mid.lz4 subrun files to ROOT (TTree) files;
Thanks,
Gennaro |
01 Dec 2020, Stefan Ritt, Forum, subrun
|
There is no "mechanism" foreseen to be executed after each subrun. But you could
run a shell script after each run which loops over all subruns and converts them
one after the other.
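For example, a minimal sketch of such an end-of-run script (assumptions: "mid2root" is a
hypothetical converter, the run number arrives as the first argument, e.g. if you hook the
script up via the "/Programs/Execute on stop run" ODB key; adjust the file name pattern to
your logger settings):

#!/bin/sh
# convert all subrun files of the given run to ROOT
RUN=$(printf "%05d" "$1")
for f in /data/run${RUN}sub*.mid.lz4; do
   mid2root "$f" "${f%.mid.lz4}.root"   # hypothetical converter
done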
Stefan
> Hi,
>
> I was wondering if there is a "mechanism" to run an executable
> file after each subrun is closed...
>
> I need to convert .mid.lz4 subrun files to ROOT (TTree) files;
>
> Thanks,
> Gennaro |
01 Dec 2020, Ben Smith, Forum, subrun
|
We use the lazylogger for something similar to this. You can specify the path to a custom script, and it will be run for each midas file that gets written:
https://midas.triumf.ca/MidasWiki/index.php/Lazylogger#Using_a_script
This means that you don't have to wait until the end of the run to start processing.
If the ROOT conversion is going to be slow, but you have a batch system available, you could use the lazylogger script to submit a job to the batch system for each file.
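For illustration, a lazylogger script that submits a batch job could look like this (a sketch
only: I am assuming the file name is passed as the first argument - check the wiki page above
for the exact interface - and "mid2root" again stands in for your converter):

#!/bin/sh
# called by lazylogger for each file it handles
FILE="$1"
# submit one conversion job per file (slurm shown as an example)
sbatch --wrap="mid2root $FILE ${FILE%.mid.lz4}.root"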
>
> > Hi,
> >
> > I was wondering if there is a "mechanism" to run an executable
> > file after each subrun is closed...
> >
> > I need to convert .mid.lz4 subrun files to ROOT (TTree) files;
> >
> > Thanks,
> > Gennaro |
30 Nov 2020, Konstantin Olchanski, Info, more wisdom from linux kernel people
|
As you may know, I am a big fan of two software projects - the linux kernel and ROOT. The linux kernel is one of
the few software projects "done right". ROOT is where normal people try to "get it right" with a real-world level
of success. I use both daily and I try to apply their ways and methods to MIDAS as much as I can.
So just in time for our discussion of array indexes, a talk by gregkh shows
up on slashdot. The title is "how to keep your users happy". (Nobody
ever wants to be nasty to their users, but do read his talk).
https://git.sr.ht/~gregkh/presentation-application_summit/tree/main/keep_users_happy.pdf
The talk refers to some older stuff, still relevant, of course. In case you miss the links
in the pdf file, here they are:
https://ozlabs.org/~rusty/index.cgi/tech/2008-03-30.html
https://ozlabs.org/~rusty/index.cgi/tech/2008-04-01.html
https://ozlabs.org/~rusty/ols-2003-keynote/img0.html (click on "continue" to see next page)
K.O. |
24 Nov 2020, Amy Roberts, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
I'm interested in using the matching feature for ODBSET explained on
https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
array, like:
COMMENT "Ground the detectors"
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
Currently I get an error when I try to run this script. Is this expected? Would it
be possible to implement matching for array values?
Thanks! |
25 Nov 2020, Marco Francesconi, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
Hi,
I guess the issue is in the "[?]" part of the command; the indexing is handled differently from the ODB path and does not
support "?".
Are you trying to set only the first 9 channels?
Could you try with "[*]" or "[0-9]" instead?
Marco
> I'm interested in using the matching feature for ODBSET explained on
> https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
> array, like:
>
> COMMENT "Ground the detectors"
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
>
> Currently I get an error when I try to run this script. Is this expected? Would it
> be possible to implement matching for array values?
>
> Thanks! |
25 Nov 2020, Amy Roberts, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
The following all fail with "Cannot find ODB key "<key>""
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> Hi,
> I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not
> support "?".
> Are you trying to set only the first 9 channels?
> Could you try with "[*]" or "[0-9]" instead?
>
> Marco
>
> > I'm interested in using the matching feature for ODBSET explained on
> > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
> > array, like:
> >
> > COMMENT "Ground the detectors"
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> >
> > Currently I get an error when I try to run this script. Is this expected? Would it
> > be possible to implement matching for array values?
> >
> > Thanks! |
25 Nov 2020, Marco Francesconi, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
I created some keys in my ODB to try to match yours.
The ODBSET commands you wrote are all working fine (of course with different results), except for "/Detectors/Det*/Settings/Charge/Bias (V)*", which I will have to
look into.
In any case the error message I'm getting is "could not match any key" and not the one you are reporting.
Now I'm a bit puzzled:
Are you sure your ODB contains those keys?
Are you testing the ODBSET inside a more complex sequencer or on its own?
Maybe I can try to reproduce it using your ODB setup.
Could you send an ODB dump of the "/Detectors" folder using the "save" command of odbedit ("cd /Detectors" and then "save detector.odb")?
Best,
Marco
> The following all fail with "Cannot find ODB key "<key>""
>
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
>
>
> > Hi,
> > I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not
> > support "?".
> > Are you trying to set only the first 9 channels?
> > Could you try with "[*]" or "[0-9]" instead?
> >
> > Marco
> >
> > > I'm interested in using the matching feature for ODBSET explained on
> > > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
> > > array, like:
> > >
> > > COMMENT "Ground the detectors"
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > >
> > > Currently I get an error when I try to run this script. Is this expected? Would it
> > > be possible to implement matching for array values?
> > >
> > > Thanks! |
25 Nov 2020, Amy Roberts, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
I think the issue may be the version of MIDAS I'm using. Mine is current as of February 4, 2020.
But since then there have been changes to the sequencer code, specifically parts that handle indexing.
I'll try this out with an updated version of MIDAS and report back if there are still any issues after updating.
> I created some keys in my ODB to try to match yours.
> The ODBSET commands you wrote are all working fine (of course with different results), except only for the "/Detectors/Det*/Settings/Charge/Bias (V)*" which I will have to
> look into.
> In any case the error message I'm getting is "could not match any key" and not the one you are reporting.
>
> Now I'm a bit puzzled:
> Are you sure your ODB contains those keys?
> Are you testing the ODBSET inside a more complex sequencer or on its own?
>
> Maybe I can try to reproduce it using your ODB setup.
> Could you send an ODB dump of the "/Detectors" folder using the "save" command of odbedit ("cd /Detectors" and then "save detector.odb")?
>
> Best,
>
> Marco
>
>
> > The following all fail with "Cannot find ODB key "<key>""
> >
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> >
> >
> > > Hi,
> > > I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not
> > > support "?".
> > > Are you trying to set only the first 9 channels?
> > > Could you try with "[*]" or "[0-9]" instead?
> > >
> > > Marco
> > >
> > > > I'm interested in using the matching feature for ODBSET explained on
> > > > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
> > > > array, like:
> > > >
> > > > COMMENT "Ground the detectors"
> > > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > > >
> > > > Currently I get an error when I try to run this script. Is this expected? Would it
> > > > be possible to implement matching for array values?
> > > >
> > > > Thanks! |
27 Nov 2020, Konstantin Olchanski, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
> The following all fail with "Cannot find ODB key "<key>""
>
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
>
It would be cool if ODB pattern matching in the sequencer
were consistent with the ODB pattern matching in the json-rpc
interface for web pages:
https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
K.O. |
30 Nov 2020, Marco Francesconi, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
I totally agree that we should have a consistent format for array index expansion.
I had a look at the mjsonrpc code and I found the function parse_array_index_list(...) which does this job.
I have a similar function (adapted from previous code) in odb.cxx called strarrayindex(...) that is designed for the same "consistency" purposes between odbedit and the sequencer.
Let me list a few points that I noticed:
- mjsonrpc has a very different way to write the full array (no indexes given), while currently the sequencer requires "[*]" to do the same (otherwise it only changes the first value of the array)
- currently the sequencer and the underlying ODB calls use two indexes (which are the same if you want to write only one key), so we will need a serious rewrite to allow something like "ODBSET array[1,3,5]"
- if I understood the code correctly, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing: for example, if you are writing an array of n parameters on a DAQ
board you will call the hotlink on that key n times
- in addition to that, the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how should it parse something like "[$a,1]" or "[$a*]"?
For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencers or having a difference between the two implementations.
For the others I guess we can find a way out; however, that's a major modification, so I will put it on my todo list for when I can find some free time.
In any case I would propose to merge the two functions, so we only have to maintain a single implementation of the parsing.
I guess it's a good moment to brainstorm about that, let me know what you think
Marco
> > The following all fail with "Cannot find ODB key "<key>""
> >
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> >
>
> It would be cool if ODB pattern matching in the sequencer
> were consistent with the ODB pattern matching in the json-rpc
> interface for web pages:
>
> https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
>
> K.O. |
30 Nov 2020, Konstantin Olchanski, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
> I totally agree that we should have a consistent formatting for array index expansion.
> I had a look to the mjsonrpc code and I found the function parse_array_index_list(...) which does this job.
Yes, it is good to review this stuff. I think the json-rpc call should accept more array index patterns:
a[*] - whole array (even though it is an unnatural use in javascript: we do not say "let a[*] = b[*]", we say "let a = b")
a[5-] - from the 5th element to the end (for the case where we do not know the length)
a[-10] - from the first element to element 10 (this is the same as a[0-10], but needed for consistency with the previous case).
K.O.
> I have a similar function (adapted form previous code) in odb.cxx called strarrayindex(...) that is designed for the same "consistency" purposes between odbedit and sequencer.
>
> Let me put few points that I noticed:
> - mjsonrpc has a very different way to write the full array (no indexes given) while currently sequencer requires "[*]" to do the same (otherwise it only changes the first value of the array)
> - currently the sequencer and the underlying ODB calls use two indexes (that are the same if you want to write only one key) so we will need a serious rewriting to allow something like "ODBSET array[1,3,5]"
> - if I correctly understood the code, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing, for example if you are writing an array of n parameters on a DAQ
> board you will call the hotlink on that key n times
> - in addition to that the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how it should parse something like "[$a,1]" or "[$a*]"?
>
> For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencers or having a difference between the two implementations.
> For the others I guess we can find a way out, however that's a major modification so I will put it on my todo list when I can find some free time.
> In any case I would propose to merge the two functions, so we have only to maintain a single implementation of the parsing.
>
> I guess it's a good moment to brainstorm about that, let me know what you think
>
> Marco
>
>
> > > The following all fail with "Cannot find ODB key "<key>""
> > >
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> > >
> >
> > It would be cool if ODB pattern matching in the sequencer
> > were consistent with the ODB pattern matching in the json-rpc
> > interface for web pages:
> >
> > https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
> >
> > K.O. |
30 Nov 2020, Stefan Ritt, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
Hi Konstantin,
we are considering making the range selection uniform among json, the sequencer and the
odbedit "set" command. Having multiple ranges like [1,4-5] will be quite some work, so
my question is: did you just implement it on the json side because it was easy, or are
there experiments that really need it? Wouldn't it be enough to have
[*]
[n]
[n-m]
This way we always have only one db_set_data() call behind each write. Any set of indices
we would have to split into several db_set_data() calls, which especially for the front-end
configuration can cause trouble by triggering a hot link on each access.
Stefan |
30 Nov 2020, Konstantin Olchanski, Suggestion, ODBSET wildcards with array keys in Sequencer files
|
>
> we are considering to make the range selection uniform among json, sequencer and
> odbedit "set" command. Having multiple ranges like [1,4-5] will be quite some work, so
> my question is did you just implement it on the json side because it was easy, or are
> there experiments who really need it? Wouldn't it be enough to have
>
> [*]
> [n]
> [n-m]
>
It has been a long time, but most likely I designed the interface to work this
way to permit maximum flexibility for writing into an array using just one
rpc operation.
The generalized form is:
[range,range,range...]
where range is:
array index or
index1-index2 or
index2-index1 (write in reverse order)
This is all documented here:
https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
I think it is too late to change it.
>
> This way we always have only one db_set_data() value behind that. Any set of indices
> we have to split into several db_set_data()
>
Sounds good. I think it is easy to have a common implementation.
One would need the following functions:
- parse the range selection from a string into an std::vector<int> of array indices (we already have it)
- call db_set_data_range() (this is easy to add).
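Something like this for the shared parser (just a sketch of the idea, not the actual midas
code; the open-ended ranges proposed above, like "5-", would still need to be added):

#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// parse "1,4-7,10" into {1,4,5,6,7,10}; a reversed range like "7-4" yields {7,6,5,4}
std::vector<int> parse_index_ranges(const std::string& s)
{
   std::vector<int> v;
   std::stringstream ss(s);
   std::string tok;
   while (std::getline(ss, tok, ',')) {
      size_t dash = tok.find('-');
      if (dash == std::string::npos) {
         v.push_back(atoi(tok.c_str()));
      } else {
         int a = atoi(tok.substr(0, dash).c_str());
         int b = atoi(tok.substr(dash + 1).c_str());
         if (a <= b) for (int i = a; i <= b; i++) v.push_back(i);
         else        for (int i = a; i >= b; i--) v.push_back(i); // reverse order
      }
   }
   return v;
}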
>
> trouble by triggering a hot link on each access.
>
There is no escaping this trouble. Half the experiments want notification
"per array", the other half "per array element". We cannot choose one or the other for them;
we have to provide a way for the user to say how they want it.
P.S. With the existing ODB calls, you cannot do [n-m] ranges. You can write the whole array or
one element at a time.
K.O. |
17 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB
|
Today I addressed a topic which has bugged me for a long time. The ODB contains
settings under /Equipment/<name>/Common which are a "mirror" of the equipment[]
settings in a frontend (using the mfe.cxx framework). If the "Common" entry in
the ODB is not present (fresh experiment), the equipment[] settings from the
frontend are copied to the ODB. But if it exists, it takes precedence over the
equipment[] entries, which is wrong in my opinion. For example, if you change some
settings in equipment[] (like the logging period of the history), then recompile
and restart the frontend, the old values in the ODB are kept and your
modification in the frontend code has no effect.
Starting with commit c3017c6c on Nov. 17th 2020 I reversed the precedence: Now, on
each start of the frontend program, the values from equipment[] are written to
the ODB. They are still "live". If one changes them while the frontend is
running, that change takes effect immediately. But on the next restart of the
frontend, the old values from equipment[] are put back there.
I have fallen into this trap too many times, and I hope the modification helps
everybody. If there are however experiments which rely on the fact that the
common settings in the ODB are NOT overwritten by the frontend, please let me
know and I can put a flag "EQUIPMENT_FE_PRECEDENCE = FALSE" somewhere to restore
the old behaviour.
Stefan |
20 Nov 2020, Pierre-Andre Amaudruz, Info, Equipment "common" settings in ODB
|
Indeed this "mirror" of the equipment settings in the ODB can cause frustration, in
particular when we think the ODB is empty but it is not.
On the other hand, over time the settings are adjusted to a particular
configuration, touched or not by the individual run preset parameters. Later, if
a bug or code correction requires multiple restarts of the fe, on every start of
the application you lose the latest configuration. This can be frustrating as
well, until you force a post-setting or put the specific parameters in the fe
code.
BTW I believe we originally went for the ODB priority for that specific reason.
I would be in favour of having a general flag (FALSE) in /experiment which would
define this global behaviour.
PAA
> Today I addressed a topic which bugged me since long time. The ODB contains
> settings under /Equipment/<name>/Common which are a "mirror" of the equipment[]
> setting in a frontend (using the mfe.cxx framework). If the "Common" entry in
> the ODB is not present (fresh experiment), the equipment[] settings from the
> frontend are copied to the ODB. But if it exists, it takes precedence over the
> equipment[] entries, which is wrong in my opinion. Like if you change some
> settings in equipment[] (like the logging period of the history), then recompile
> and restart the frontend, the old values in the ODB are kept and your
> modification in the frontend code has no effect.
>
> Starting on commit c3017c6c on Nov. 17th 2020 I reversed the precedence: Now, on
> each start of the frontend program, the values from equipment[] are written to
> the ODB. They are still "live". If one changes them when the frontend is
> running, that change takes effect immediately. But on the next restart of the
> frontend, the old values from equipment[] is put back there.
>
> I fell too many times into this trap, and I hope the modification helps
> everybody. If there are however experiments which rely on the fact that the
> common settings in the ODB are NOT overwritten by the frontend, please let me
> know and I can put a flag "EQUIPMENT_FE_PRECEDENCE = FALSE" somewhere to restore
> the old behaviour.
>
> Stefan |
27 Nov 2020, Konstantin Olchanski, Info, Equipment "common" settings in ODB
|
> Today I addressed a topic which bugged me since long time.
Right. No easy subject. For me, too, this has been a problem in MIDAS for a long time.
> Now, on each start of the frontend program, the values from equipment[] are written to
> the ODB. They are still "live". If one changes them when the frontend is
> running, that change takes effect immediately. But on the next restart of the
> frontend, the old values from equipment[] is put back there.
There is a downside to this behaviour.
If some values in equipment/common are "live" and the user is expected to change them,
the user will be unpleasantly surprised when their changes magically disappear (after a reboot,
after a frontend crash, after a run restart if the experiment requires restarting some frontends
before starting a new run).
This change will also break some experiments that rely on things like specifying
event buffer names through the ODB. But experiments can adapt and specify buffer names
through a command line switch instead of the ODB.
This new way also makes the "live" Common/Period unusable. Sure, I can speed up or slow
down a frontend even during the run, but if my change does not "stick", what good is it?
Personally, I think there is no easy solution for all these troubles.
I would advocate the following approach:
- think of MIDAS as a "mature" system,
- treasure backward compatibility
- (if we must break backward compatibility to introduce a new "must have" improvement, so be it)
- document how things work. If it is clearly written down what the different fields in "common" do, fewer people
"get burned" by unexpected or illogical things (and any non-trivial system has plenty of those).
Going back to ODB equipment/common, my experience with midas and odb tells me
that one should avoid mixing together ODB entries set by the user and ODB entries set by code.
For example, separating them as equipment/settings and equipment/variables works well. Mixing
them as in equipment/common and sequencer/state causes trouble.
So perhaps we should split Equipment/common into two pieces: user-settable fields like
"Period" and "event buffer name" would move to equipment/settings or whatever.
This will open the discussion of which items in equipment/common should be user settable,
and some people would want event buffer specified in the code to prevail, while other
people would want the name from odb to prevail, and both are valid but conflicting preferences.
Or we could bite the bullet and say, equipment/common is controlled by the frontend code,
the user should not change it. (and mark it read-only in ODB).
For all the pain this may cause, at least this will make it self-consistent.
Per this proposal, in addition to Stefan's change, the hotlink on equipment/common goes away,
"period" is no longer "live" and the whole subdirectory is made "read-only".
K.O. |
27 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB
|
Ok, so what about the following proposal:
- I change back the mfe.cxx code to behave like before (ODB has precedence and does not get overwritten when the
front-end restarts)
- I add a global flag
BOOL equipment_common_overwrite;
and pre-set it to FALSE;
- So if nothing is changed the flag stays false and ODB keeps precedence
- If a frontend wants to overwrite equipment/common on each start, the user sets
BOOL equipment_common_overwrite = TRUE;
near the equipment[] structure in the front-end code.
- If the flag is true, the mfe.cxx init code copies the equipment[] structure to the ODB on each frontend start
I believe this way we can keep backward compatibility and add the new way with minimal effort. The only downside
is that all existing frontends have to add at least "BOOL equipment_common_overwrite = FALSE;" in their
code.
I know global variables are evil, but this way the user can just add the line above to the equipment[] array, so
one sees it when one edits the equipment[] array, giving motivation to change it as needed. So the code would be
BOOL equipment_common_overwrite = TRUE;
EQUIPMENT equipment[] = {
....
}
An alternative way would be to add a function
set_equipment_common_overwrite(TRUE);
into the frontend_init() code. That's somewhat cleaner (it still needs an internal global variable), but it has to go
into frontend_init(), so it won't be at the same place as the EQUIPMENT list in the frontend.
Thoughts?
Best,
Stefan |
27 Nov 2020, Konstantin Olchanski, Info, Equipment "common" settings in ODB
|
Yes, I think this will work.
For old mfe.c frontends, a global variable set to "do it the new way" should be okay;
new experiments will have it the new way. Old experiments will be forced to add a one-line definition
of this global variable (otherwise mfe.o will not link), and at that time they get to choose "new way" or "old way".
For the new TMFE c++ frontend, this will work naturally when they create the Equipment Common object;
in the object constructor, you can see how it explicitly honors or overwrites the ODB common entries.
The TMFE frontend does not do a live "period", so there should be no issue with that.
Should I open a bitbucket issue "update TMFE frontend to new Equipment/Common scheme", to make sure
I do not forget about it?
K.O. |
30 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB
|
Ok, I implemented it the following way:
- Added a boolean flag "equipment_common_overwrite", which must be contained in EACH frontend, preferably just
before the EQUIPMENT structure, such as:
BOOL equipment_common_overwrite = TRUE;
EQUIPMENT equipment[] = {
...
};
- If that flag is TRUE, then the contents of the "equipment" structure are copied to the ODB on each start of the
front-end
- If the flag is FALSE, then the ODB values are kept on the start of the front-end
The setting of the flag now depends on the philosophy of the experiment. Some experiments say that everything
needed should be in the front-end code, so when it starts everything gets set correctly. They don't change the
values in the ODB, but in the frontend code, which then goes into their repository. These experiments should set
the flag to TRUE. Other experiments just need some default values from the frontend code, and then fine-tune
things by changing values in the ODB. These experiments should set this flag to FALSE.
*****
Please note that EVERY frontend now needs this flag, so all of you have to add it to all of your front-ends,
otherwise the front-end will not compile! I could not figure out how this could be done without this
requirement, since you can define a global variable only once.
*****
Stefan |
30 Nov 2020, Stefan Ritt, Info, Equipment "common" settings in ODB
|
One more change:
After using the new code for some hours, we realized that the "enabled" flag should not come from the frontend code,
but always be defined by the ODB. So if you quickly have to disable some equipment because the associated hardware is
off, you want to change this flag only in the ODB and not have to recompile the frontend. So we exclude that flag from
being set by the frontend. It is special anyhow, because one sees all disabled equipment on the main midas status page,
so one knows what's on and what's off.
Please comment here if you think that change causes problems. Anyhow, the enabled flag now works as it did before
all these changes.
Stefan |
30 Nov 2020, Konstantin Olchanski, Info, Equipment "common" settings in ODB
|
> One more change:
>
> After using the new code for some hours, we realized that the "enabled" flag should not come from the frontend code,
> but always be defined by the ODB. So if you quickly have to disable some equipment because the associated hardware is
> off, you want to change this flag only in the ODB and not have to recompile the frontend. So we exclude that flag from
> being set by the frontend. It is anyhow special, because one sees all disable equipment in the main midas status page,
> so one knows what's on and what's off.
>
> Please comment here if you think that change causes problem. Anyhow it's working now for the enabled flag as before
> all these changes.
>
Good catch. I still think this is fundamentally impossible to "get right". But good, you
are now in the same boat as me. The documentation will read: "if the flag is TRUE, these data fields
are read from the ODB; if the flag is FALSE, those other fields are read from the ODB". I will have to check
how this will work out for the TMFE C++ frontend (I think both mfe.c and TMFE frontends should
work "the same").
I think we have at least one month to play with this; I do not think we can do the next release
of midas until January.
K.O. |
06 Nov 2020, Alexandr Kozlinskiy, Suggestion, cmake build fixes
|
hi,
there are several problems with the current cmake build files in midas:
- not all systems have the cuda libs in /usr/local/cuda
- not all cmake versions like redefining vars
(i.e. redefining ROOT_CXX_FLAGS)
- the c++ standard does not match the one used to build ROOT
- ROOTSYS is not needed to find ROOT (it is enough to have root in PATH)
I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
which tries to fix some of these problems.
Tests and comments are welcome. |
27 Nov 2020, Konstantin Olchanski, Suggestion, cmake build fixes
|
Hi, Alexandr, thank you for making improvements to MIDAS. I have some questions
about your suggestions:
> there are several problems with current cmake build files in midas:
> - not all systems have cuda libs in /usr/local/cuda
> - not all cmake version like when redefining vars
we do not see these problems with the normal cmake on our current linux systems
(centos-7 and -8, Ubuntu LTS 18.04, 20.04).
So you have something different? Can you be a bit more specific about
which version of cmake and which OS you have that show these troubles?
> - c++ standard not matching the one used to build ROOT
> - ROOTSYS is not needed to find ROOT (it is enough to have root in PATH)
Again ROOT tangles with the build of MIDAS.
MIDAS does not use ROOT. As a convenience to the users, we have a "ROOT output" driver
in mlogger and we build a special executable rmlogger with ROOT. Only this special
executable should be linked with ROOT and compiled with ROOT-specific flags.
The rest of the MIDAS build should not be affected by the presence or absence of ROOT.
One would have to read old messages on this forum to understand this situation.
>
> I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
> which tries to fix some of the problems.
> Tests and comments are welcome.
>
I looked at the diffs:
- CUDA detection is changed to "find_package(CUDA)". This code was added by Joseph and Ben, and there
must be a reason why they did not use find_package(CUDA). They will have to sign off on this change.
- ROOT-related logic assumes that all of MIDAS will be built "the ROOT way". CFLAGS are changed, the C++
standard is changed, etc. This assumption is wrong: only rmlogger and rmana should be built "with ROOT".
If you want to follow through on this, I suggest that you split the pull request into two,
one pull request for the CUDA changes and one pull request for the ROOT changes. Also rework
your ROOT changes as I explained above (but also read all ROOT-related messages on this forum).
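To sketch what I mean (target and file names here are illustrative, not the actual midas
cmake code):

find_package(ROOT QUIET)
if (ROOT_FOUND)
   add_executable(rmlogger src/mlogger.cxx)
   # ROOT flags and libraries apply to this target only;
   # the rest of MIDAS is built without them
   target_compile_definitions(rmlogger PRIVATE HAVE_ROOT)
   target_include_directories(rmlogger PRIVATE ${ROOT_INCLUDE_DIRS})
   target_link_libraries(rmlogger PRIVATE ${ROOT_LIBRARIES})
endif()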
K.O. |
05 Nov 2020, Isaac Labrie Boulay, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL'
|
Hi everyone,
I have been building an experiment using the v1718 CAEN interface to talk to my modules and I am using the CAENVMElib Linux Library (2.50). I've managed to deal with data type issues by including additional libraries in my driver code, but there is one type error that persists:
In file included from /usr/include/CAENVMElib.h:27:0,
from include/v1718.h:25,
from v1718.c:26:
/usr/include/CAENVMEtypes.h:323:9: error: unknown type name ‘VARIANT_BOOL’
CAEN_BOOL cvDS0; /* Data Strobe 0 signal */
The header file used to define the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
#ifdef LINUX
#define CAEN_BYTE unsigned char
#define CAEN_BOOL int
#else
#define CAEN_BYTE byte
#define CAEN_BOOL VARIANT_BOOL
#endif
Has anyone ever run into that problem when setting up an experiment using the CAEN standard?
Thanks for your help.
Isaac |
05 Nov 2020, Pierre-Andre Amaudruz, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL'
|
Hi,
You're building under Linux. You want to define LINUX and skip VARIANT_BOOL altogether.
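For example, add the define to the compile flags in your frontend Makefile:

CFLAGS += -DLINUX

or put it in v1718.h before the CAEN include:

#define LINUX
#include <CAENVMElib.h>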
PAA
> Hi everyone,
>
> I have been building an experiment using the v1718 CAEN interface to talk to my modules and I am using the CAENVMElib Linux Library (2.50). I've managed to deal with data type issues by including additional libraries to my driver code but there is one type error
that persists:
>
>
> In file included from /usr/include/CAENVMElib.h:27:0,
> from include/v1718.h:25,
> from v1718.c:26:
> /usr/include/CAENVMEtypes.h:323:9: error: unknown type name ‘VARIANT_BOOL’
> CAEN_BOOL cvDS0; /* Data Strobe 0 signal */
>
>
> The header file used to defined the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
>
>
> #ifdef LINUX
> #define CAEN_BYTE unsigned char
> #define CAEN_BOOL int
> #else
> #define CAEN_BYTE byte
> #define CAEN_BOOL VARIANT_BOOL
> #endif
>
>
> Has anyone ever ran into that problem when setting up an experiment using the CAEN standard?
>
> Thanks for your help.
>
> Isaac |
06 Nov 2020, Isaac Labrie Boulay, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL'
|
Yes, you are right. That fixed it and my frontend is compiling.
Thanks Pierre-Andre.
Isaac
> Hi,
>
> You're building under Linux like. You want to define the LINUX and skip the VARIANT_BOOL all together.
> PAA
>
> > Hi everyone,
> >
> > I have been building an experiment using the v1718 CAEN interface to talk to my modules and I am using the CAENVMElib Linux Library (2.50). I've managed to deal with data type issues by including additional libraries to my driver code but there is one type error
> that persists:
> >
> >
> > In file included from /usr/include/CAENVMElib.h:27:0,
> > from include/v1718.h:25,
> > from v1718.c:26:
> > /usr/include/CAENVMEtypes.h:323:9: error: unknown type name ‘VARIANT_BOOL’
> > CAEN_BOOL cvDS0; /* Data Strobe 0 signal */
> >
> >
> > The header file used to defined the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
> >
> >
> > #ifdef LINUX
> > #define CAEN_BYTE unsigned char
> > #define CAEN_BOOL int
> > #else
> > #define CAEN_BYTE byte
> > #define CAEN_BOOL VARIANT_BOOL
> > #endif
> >
> >
> > Has anyone ever ran into that problem when setting up an experiment using the CAEN standard?
> >
> > Thanks for your help.
> >
> > Isaac |
27 Nov 2020, Konstantin Olchanski, Forum, Building an experiment using CAEN VME interface - unknown type name 'VARIANT_BOOL'
|
>
> The header file used to defined the CAEN types (CAENVMEtypes.h) defines 'CAEN_BOOL' like this:
>
>
> #ifdef LINUX
> #define CAEN_BYTE unsigned char
> #define CAEN_BOOL int
> #else
> #define CAEN_BYTE byte
> #define CAEN_BOOL VARIANT_BOOL
> #endif
>
Complain to CAEN.
The year is 2020 and they should use standard C/C++ data types from stdint.h (uint32_t, etc).
K.O. |
19 Nov 2020, Joseph McKenna, Forum, History plot consuming too much memory
|
A user reported an issue that if they were to plot some history data from
2019 (a range of one day), the plot would spend ~4 minutes loading then
crash the browser tab. This seems to affect chrome (under default settings)
and not firefox.
I can reproduce the issue: "Data Being Loaded" shows, then the page and
canvas load, then all variables get a correct "last data" timestamp, then
the 'Updating data ...' status shows... then the tab crashes (chrome).
It seems that the browser is loading all data until the present day (maybe 4
GB of data in this case). In chrome the tab then crashes. In firefox, I do
not suffer the same crash, but I can see the single tab is using ~3.5 GB of
RAM.
Tested with midas-2020-08-a up until the HEAD of develop.
I could propose the user use firefox, or increase the memory limit in
chrome; however, are there plans to limit the data loaded when specifically
plotting between two dates? |
19 Nov 2020, Stefan Ritt, Forum, History plot consuming too much memory
|
The history code is right now programmed in such a way that when you request
an old time window, all data from that window until the present date
gets loaded. When we implemented that, this worked fine for data ranges of
several years, with a delay of just a few seconds. Of course one could load only
that specific window, but when the user then scrolls right, one has to
append new data to the "right side" of the array stored in the browser. If the
user jumps to another location, then the browser has to keep track of which
windows are loaded and which are not, making the history code much more
complicated. Therefore I'm only willing to spend a few days of solid work
if this really becomes a problem.
Are you sure that the delay comes from the browser or actually from mhttpd
digging through GBytes of history data? I realized that you need solid state
disks to get a really quick response.
Stefan |
20 Nov 2020, Joseph McKenna, Forum, History plot consuming too much memory
|
Poking at the behavior of this, it's fairly clear the slow response is from the data
being loaded off an HDD; when we upgrade this system we will allocate enough SSD
storage for the histories.
Using Firefox has resolved this issue for the user's project here.
Taking this down a tangent, I have a mild concern that a user could temporarily
flood our gigabit network if we do have faster disks to read the history data. Have
there been any plans or thoughts on limiting the bandwidth users can pull from
mhttpd? I do not see this as a critical item as I can plan the future network
infrastructure at the same time as the next system upgrade (putting critical data
taking traffic on a separate physical network).
> Of course one can only
> load that specific window, but when the user then scrolls right, one has to
> append new data to the "right side" of the array stored in the browser. If the
> user jumps to another location, then the browser has to keep track of which
> windows are loaded and which windows not, making the history code much more
> complicated. Therefore I'm only willing to spend a few days of solid work
> if this really becomes a problem.
For now the user here has retrieved all the data they need, and I can direct others
towards mhist in the near future. Being able to load just a specific window would be
very useful in the future, but I understand how it would be a spike in complexity. |
20 Nov 2020, Stefan Ritt, Forum, History plot consuming too much memory
|
> Taking this down a tangent, I have a mild concern that a user could temporarily
> flood our gigabit network if we do have faster disks to read the history data. Have
> there been any plans or thoughts on limiting the bandwidth users can pull from
> mhttpd?
I guess this will not be network-limited but CPU-limited in the mhttpd process. But I'm
not 100% sure, it depends on the actual hardware. But even if we improve the history
retrieval to "window only", the user could request all data from 2010 to 2020. So one
would need some code which estimates the amount of data and then tells the user "do you really
want that?". But still, a novice user can simply click "yes" without much of a thought. So
in conclusion I believe proper user training is better than software limits. Like the
other guy: "I did 'rm -rf /', and now nothing works any more, can you help?".
Stefan |
27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory
|
>
> Taking this down a tangent, I have a mild concern that a user could temporarily
> flood our gigabit network if we do have faster disks to read the history data.
>
By my measurements, right now our javascript code can reach 30-50-70% of Gige ethernet
bandwidth, so, no, we cannot flood the network just by making history plots.
(we cannot reach 100% because javascript code is not multithreaded,
it cycles through "request new data" and "decode javascript, make plot" states,
and the network is idle in this second state).
>
> Have there been any plans or thoughts on limiting the bandwidth users can pull from
> mhttpd?
>
10gige networking is here (and 5 and 2.5 Gige, too). I would not worry too much
about saturating 1gige network interfaces.
>
> I do not see this as a critical item as I can plan the future network
> infrastructure at the same time as the next system upgrade (putting critical data
> taking traffic on a separate physical network).
>
10gige network between all computers, everything on SSD ZFS arrays, except
bulk data on ZFS HDD arrays (only for cost reasons $$$/TB).
K.O. |
27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory
|
>
> Are you sure that the delay comes from the browser or actually from mhttpd
> digging through GBytes of history data?
>
I think we will need to address this question "head-on". The history plot
will need to display the following information:
"time to load data from disk: N seconds, time to transfer data to javascript: M
seconds, time to make the plot: Q seconds".
The second and third items are already available; the first one will need
to be computed in mhttpd and passed to javascript.
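The javascript side of such a measurement could look like this (a sketch only;
"makeHistoryPlot" is a stand-in for the actual plotting code, and the rpc method and
params are those already used by the history pages):

let t0 = performance.now();
mjsonrpc_call("hs_read_arraybuffer", params).then(function (rpc) {
   let t1 = performance.now();   // data arrived from mhttpd
   makeHistoryPlot(rpc);         // decode and draw
   let t2 = performance.now();
   console.log("transfer: " + (t1-t0).toFixed(0) + " ms, plot: " + (t2-t1).toFixed(0) + " ms");
});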
K.O. |
27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory
|
>
> Tested with midas-2020-08-a up until the HEAD of develop
>
Just so you know, it took myself and Stefan quite a bit of effort
to improve memory and data handling in the new history plots
to be able to plot 1 year of data without bogging down too much. I got
to learn the google-chrome javascript cpu profiler, the memory profiler
and the intricacies of the javascript shift() and unshift() operators.
Before midas-2020-08-a, pressing the zoom-out button, you would never
reach the javascript memory limit: the code would go into "100% cpu use"
and the browser tab would become progressively unresponsive well before
running out of memory. With the original code, our alpha-g history plots
could go back a few weeks at most; with the current code, we can go back
about 11 months. Compare the old "C" history plots, which can
do "last 10 years", no problem.
Loading all the history data into the browser is a design choice.
It has benefits and downsides.
The main benefit is that looking at immediate live data is much easier.
The main downside is that "plot last 10 years" becomes impossible.
As they say "appetite comes during eating", we have learned about these
downsides as we developed the new system. When we started, we did not
know much about javascript memory limits, cpu limits, etc. We did learn
a lot, though.
With the current code, we are limited to loading history data up to 50% of
the javascript memory limit. I know how to change the code to get up to 100%,
but I think it is not worth it: it still does not get us to plotting "last 10 years".
We think the solution to recovering "last 10 years" capability is to use
binned data (which the history system can already deliver to javascript).
With binned data, the data volume in Mbytes remains constant, javascript
memory use has an upper-bound (we never use more memory than X Mbytes)
and data movement over the network is reduced.
Another way to look at this - a typical display has only 1000-4000 pixels across,
so it cannot physically display a bigger number of data points (no more
than 1 data point per pixel). So why load 1000000 data points when we can only
plot 1000-4000 of them?
So all the infrastructure for plotting binned data is already there,
but the javascript code still needs to be written. I think the biggest
challenge will be in blending or combining binned and unbinned data
on the same plot, or in seamlessly switching the plot between binned and
unbinned data.
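To illustrate the idea (midas computes the bins server-side; this is just the concept,
keeping min and max per bin so spikes are not lost when drawing):

// reduce (t[], v[]) to at most nbins points
function binData(t, v, nbins) {
   let tmin = t[0], tmax = t[t.length - 1];
   let dt = (tmax - tmin) / nbins;
   let out = { t: [], min: [], max: [] };
   for (let i = 0, b = 0; b < nbins; b++) {
      let tend = tmin + (b + 1) * dt;
      let lo = Infinity, hi = -Infinity, n = 0;
      for (; i < t.length && (t[i] < tend || b === nbins - 1); i++, n++) {
         if (v[i] < lo) lo = v[i];
         if (v[i] > hi) hi = v[i];
      }
      if (n > 0) { out.t.push(tend - dt/2); out.min.push(lo); out.max.push(hi); }
   }
   return out;
}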
K.O. |
27 Nov 2020, Konstantin Olchanski, Forum, History plot consuming too much memory
|
>
> With the current code, we are limited to loading history data up to 50% of
> the javascript memory limit.
>
The javascript memory limit itself seems to be a moving target (google "javascript
memory limit", and good luck!).
Historically, javascript did not have any memory or cpu use limits, but with
the rise of abusive web sites, bitcoin miners, etc, I see browsers clamp down
on allowed/allocated CPU use (inactive tabs are throttled down). Memory use
is already clamped down severely: on a 64 GB computer, a browser tab
can only allocate a handful of GBs.
This throttling of browser tabs is already intrusive enough that we need
to be careful in programming midas web pages. For example, throttled events
do not fire at the same rate or in the same order as in active tabs.
One logical conclusion of these restrictions could be that, eventually,
google-chrome permits only just enough cpu and memory to run gmail.
K.O. |
13 Oct 2020, Soichiro Kuribayashi, Info, About remote control of front end part of MIDAS on chip
|
Hello!
My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University.
I'm a T2K collaborator working on Super FGD, which is a new detector in ND280.
I'm a beginner with MIDAS and I've just started to develop the DAQ software with
MIDAS for Super FGD.
For the DAQ of Super FGD, we will remotely run the front end part of MIDAS on ZYNQ,
which is a system on chip.
For this remote control of the front end part with mserver, we have to mount the home
directory of the DAQ PC (CentOS 8) on that of the Linux on ZYNQ.
So I wonder if we should use NFS (Network File System) + NIS (Network Information
Service) + autofs for the mounting. Is that correct?
If you have any information or any suggestion for the remote control on chip,
please let me know.
Best regards,
Soichiro |
13 Oct 2020, Konstantin Olchanski, Info, About remote control of front end part of MIDAS on chip
|
> My name is Soichiro Kuribayashi and I am a Ph.D. student at Kyoto University.
> I'm a T2K collaborator and working for Super FGD which is new detector in ND280.
Hi! I did much of the DAQ software for the original FGD. I hope I can help.
> For the DAQ of Super FGD, we will run remotely front end part of MIDAS on ZYNQ
> which is system on chip.
This would be the same as the existing FGD. Inside the FGD DCC is a Virtex4 FPGA
with a 300MHz PPC CPU running Linux from a CompactFlash card (Kentaro-san did this
part). On this linux system runs the FGD DCC midas frontend. It connects
to the FGD midas instance using the mserver. This frontend executable is
copied to the DCC using "scp"; there is no common nfs-mounted home directory.
> For this remote control of front end part with mserver, we have to mount home
> directory of DAQ PC(Cent OS8) on that of Linux on ZYNQ.
> So I wonder if we should use NFS(Network file system) + NIS(Network information
> service) + autofs for the mounting. Is it correct?
Since you have a bigger SOC and you can run pretty much a complete linux,
I do recommend that you go this route. During development it is very convenient
to have common home directories on the main machine and on the frontend fpga
machines.
But this is not necessary. The midas mserver connection does not require a
common (nfs-mounted) home directory; you can copy the files to the frontend
fpga using scp and rsync, and you can use the gdb "remote debugger" function.
I can also suggest that on your frontend SOC/FPGA machine, you boot linux
using the "nfs-root" method. This way, the local flash memory only
contains a boot loader (and maybe the linux kernel image, depending on
bootloader limitations). The rest of the linux rootfs can be on your
central development machine. This way management of flash cards,
confusion with different contents of local flash and need to make backups
of frontend machines is much reduced.
If you use a fast SSD and ZFS with deduplication, you will also have good
performance gain (NFS over 1gige network to server with fast SSD works
so much better compared to the very slow SD/MMC/NAND flash).
I can point you to some of my documentation on how we do this.
>
> If you have any information or any suggestion for the remote control on chip,
> please let me know.
>
I would say you are on a good track. For early development on just one board,
pretty much any way you do it will work, but once you start scaling up
beyond 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
home directories, NFS-root booted linux, etc.
And of course you may want to study the existing ND280/FGD DAQ. I hope you
have access to the running system at Jparc. If not, I have a copy of
pretty much everything (except for running hardware, it is stored in the basement,
dead) and I can give you access.
P.S. This reminds me that the cascade software from ND280 (the key part
for connecting the FGD, the TPC, the slow controls etc. into one experiment)
was never merged into the midas repository. I opened a ticket for this, so
now we will not forget again:
https://bitbucket.org/tmidas/midas/issues/291/import-cascase-frontend-from-t2k-
nd280-fgd
K.O. |
13 Oct 2020, Soichiro Kuribayashi, Info, About remote control of front end part of MIDAS on chip
|
Dear Konstantin,
Thank you very much for your reply and detailed information.
I would appreciate if you could help us.
> I can also suggest that on your frontend SOC/FPGA machine, you boot linux
> using the "nfs-root" method. This way, the local flash memory only
> contains a boot loader (and maybe the linux kernel image, depending on
> bootloader limitations). The rest of the linux rootfs can be on your
> central development machine. This way management of flash cards,
> confusion with different contents of local flash and need to make backups
> of frontend machines is much reduced.
As you said, we can run a complete Linux (Ubuntu 16) on ZYNQ and I'm using a common NFS
system now. However, I didn't know about the "nfs-root" method which you mentioned, and this method
seems to be a reasonable way to share just the linux rootfs.
First of all, I will try this method on a simpler system.
> If you use a fast SSD and ZFS with deduplication, you will also have good
> performance gain (NFS over 1gige network to server with fast SSD works
> so much better compared to the very slow SD/MMC/NAND flash).
>
> I can point you to some of my documentation how we do this.
I'm concerned about such performance and I have roughly checked the performance with common NFS
over a gige network and my DAQ PC (data transfer rate ~ O(10) MByte/sec). However, I
didn't know about ZFS, or how we can gain performance with a fast SSD and ZFS.
Please point me to your documentation on how to do it if possible.
> I would say you are on a good track. For early development on just one board,
> pretty much any way you do it will work, but once you start scaling up
> beyound 3-4-5 frontends, you will start seeing benefits from common NFS-mounted
> home directories, NFS-root booted linux, etc.
I'm developing with just one board and a common NFS mount now. I'm looking forward to
seeing such benefits when I use multiple frontends.
> And of course you may want to study the existing ND280/FGD DAQ. I hope you
> have access to the running system at Jparc. If not, I have a copy of
> pretty much everything (except for running hardware, it is stored in the basement,
> dead) and I can give you access.
I don't have access to the system at Jparc, but Nick has told us where the FGD DAQ code is.
Is the URL below everything for the FGD DAQ?
https://git.t2k.org/hastings/fgddaq/-/tree/master
Best regards,
Soichiro |
20 Oct 2020, Stefan Ritt, Info, About remote control of front end part of MIDAS on chip
|
We also use a Zynq chip and boot in the following order:
1. SD Card
a. First Stage Bootloader
b. PL Firmware
c. UBOOT
2. NFS over Ethernet
a. Linux kernel
b. RootFS
c. Mounting home directories
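For reference, the kernel arguments for the NFS part then look roughly like this in UBOOT
(server IP and export path are placeholders for your setup):

setenv bootargs console=ttyPS0,115200 root=/dev/nfs rw \
    nfsroot=192.168.1.1:/export/zynq-rootfs,tcp ip=dhcp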
If you need details I can bring you in contact with the person who actually implemented that.
Best,
Stefan |
21 Oct 2020, Soichiro Kuribayashi, Info, About remote control of front end part of MIDAS on chip
|
Dear Stefan,
Thank you very much for your help.
I have already contacted someone who has used ZYNQ in that order and it's working fine for now.
But I'll let you know if something goes wrong.
Best regards,
Soichiro |
29 Sep 2020, Amy Roberts, Forum, using python client to start and stop run
|
I'm using a python client to start and stop runs, and the following code *appears*
to set the MIDAS state to "Run"
client.odb_set("/Runinfo/State", 3)
However, it doesn't seem to do other things associated with a run, like start
accumulating events.
Is there a different way I should start the run from the python client?
Thanks! |
29 Sep 2020, Ben Smith, Forum, using python client to start and stop run
|
The ODB variable "/Runinfo/State" is a symptom of starting/stopping a run, rather than the cause.
In C++, one uses `cm_transition()` to start/stop runs.
In python code you can use the `start_run()` and `stop_run()` functions from `midas.client`: https://bitbucket.org/tmidas/midas/src/00ff089a836100186e9b26b9ca92623e672f0030/python/midas/client.py#lines-793:808 |
06 Oct 2020, Konstantin Olchanski, Forum, using python client to start and stop run
|
> The ODB variable "/Runinfo/State" is a symptom of starting/stopping a run, rather than the cause.
>
> In C++, one uses `cm_transition()` to start/stop runs.
>
> In python code you can use the `start_run()` and `stop_run()` functions from `midas.client`: https://bitbucket.org/tmidas/midas/src/00ff089a836100186e9b26b9ca92623e672f0030/python/midas/client.py#lines-793:808
one can also run an external command: "mtransition START" and "mtransition STOP"
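and for completeness, a minimal sketch of the python route Ben mentions above (client and
experiment names are placeholders):

import midas.client

client = midas.client.MidasClient("my_client", expt_name="my_expt")
client.start_run()   # equivalent of cm_transition(TR_START, ...) in C++
# ... run proceeds ...
client.stop_run()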
K.O. |
02 Sep 2020, Ruslan Podviianiuk, Forum, Transition status message
|
Hello,
I got an error after the start of a run and it would be good to show this error (or
errors) in the UI that I am developing. I see this error in the Transition
directory (please see the attached file). Is it possible to read the status
message and error messages from the Transition directory using jsonrpc? If yes,
could you please explain to me how to do this.
Thank you.
Ruslan |
02 Sep 2020, Ben Smith, Forum, Transition status message
|
The information you want is in the ODB:
* "/System/Transition/status" is the overall integer status code.
* "/System/Transition/error" is the overall error message string.
There is also per-client status information in the ODB:
* "/System/Transition/Clients/<client_name>/status"
* "/System/Transition/Clients/<client_name>/error" |
02 Sep 2020, Ruslan Podviianiuk, Forum, Transition status message
|
> The information you want is in the ODB:
> * "/System/Transition/status" is the overall integer status code.
> * "/System/Transition/error" is the overall error message string.
>
> There is also per-client status information in the ODB:
> * "/System/Transition/Clients/<client_name>/status"
> * "/System/Transition/Clients/<client_name>/error"
Thank you so much, Ben! |
08 Sep 2020, Konstantin Olchanski, Forum, Transition status message
|
> > The information you want is in the ODB:
> > * "/System/Transition/status" is the overall integer status code.
> > * "/System/Transition/error" is the overall error message string.
> >
> > There is also per-client status information in the ODB:
> > * "/System/Transition/Clients/<client_name>/status"
> > * "/System/Transition/Clients/<client_name>/error"
You can also use web page .../resources/transition.html as an example of how
to read transition (and other) data from ODB into your own web page. example.html
may also be helpful.
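For example, a minimal snippet using the mjsonrpc_db_get_values() helper from the midas
javascript library (paths as Ben listed above):

mjsonrpc_db_get_values(["/System/Transition/status", "/System/Transition/error"])
   .then(function (rpc) {
      let status = rpc.result.data[0];   // integer status code
      let error  = rpc.result.data[1];   // error message string
      console.log("transition status:", status, error);
   })
   .catch(function (error) { mjsonrpc_error_alert(error); });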
K.O. |
08 Sep 2020, Ruslan Podviianiuk, Forum, Transition status message
|
> > > The information you want is in the ODB:
> > > * "/System/Transition/status" is the overall integer status code.
> > > * "/System/Transition/error" is the overall error message string.
> > >
> > > There is also per-client status information in the ODB:
> > > * "/System/Transition/Clients/<client_name>/status"
> > > * "/System/Transition/Clients/<client_name>/error"
>
> You can also use web page .../resources/transition.html as an example of how
> to read transition (and other) data from ODB into your own web page. example.html
> may also be helpful.
>
> K.O.
Thank you Konstantin!
Ruslan |
08 Sep 2020, Zaher Salman, Forum, json parser error
|
I am getting the following error alert in a custom page whenever a run starts
json parser exception: SyntaxError: Unexpected token < in JSON at position 985, batch request: method: "db_get_values", params: [object Object], id: 1598691925697 method: "get_alarms", params: null, id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697
Does anyone know why and what causes this? This does not affect anything and things seem to continue running fine.
thanks. |
08 Sep 2020, Konstantin Olchanski, Forum, json parser error
|
> I am getting the following error alert in a custom page whenever a run starts
> json parser exception: SyntaxError: Unexpected token < in JSON at position 985, batch request: method: "db_get_values", params: [object Object], id: 1598691925697 method: "get_alarms", params: null, id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697 method: "cm_msg_retrieve", params: [object Object], id: 1598691925697
> Does anyone know why and what causes this? This does not affect anything and things seem to continue running fine.
this is bug #242, https://bitbucket.org/tmidas/midas/issues/242/mjsonrpc-calls-should-return-valid-utf8
we read stuff from midas.log and push it to the web browser. we have seen this stuff
contain arbitrary binary data (both intentionally written into midas.log by cm_msg() and
file content corruption/truncation from computer crashes), the json decoder in the web browser
does not like that stuff - it is invalid utf-8 unicode - and throws an exception.
since we cannot ensure content of midas.log (and other files on disk) are always valid utf-8,
we have to sanitize it before sending it to the browser.
right now I am not sure of the best way to do this sanitizing. we do have a function to check
for valid utf-8 unicode, perhaps it should be extended to replace invalid unicode with spaces
or Xes or "?" or whatever, I am open to suggestions and ideas.
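The eventual fix will have to live in the MIDAS C/C++ code, but the replacement idea is easy to sketch in python, where the codec machinery does the work:
```python
# Sketch: turn arbitrary bytes into valid UTF-8 by replacing invalid
# sequences (here with U+FFFD; a custom error handler could emit "?").
def sanitize_utf8(raw: bytes) -> str:
    return raw.decode("utf-8", errors="replace")

print(sanitize_utf8(b"log line with binary junk \xff\xfe"))
```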
BTW, this is a recent change to how strings generally work. C NUL-terminated strings are
permitted to contain arbitrary binary data (except for NUL char, of course). C++ std::string
are permitted to contain arbitrary binary data. but javascript strings are only permitted
to contain valid unicode, and the json standard was recently amended to require that json
strings are valid utf-8 unicode. So there is a disconnect between C/C++ code written in the
last 50 years where strings can contain binary data and the javascript world requiring
valid utf-8 unicode pretty much everywhere.
K.O. |
21 Aug 2020, Ruslan Podviianiuk, Forum, time information
|
Hello,
I have a few questions about time information:
1. Is it possible to get "Running time" using, for example, jsonrpc? (please see
the attached file)
2. Is it possible to configure "Start time" and "Stop time" with time zone? For
example when I start a new run, value of "Start time" key is automatically changed
to "Fri Aug 21 12:38:36 2020" without time zone.
Thank you. |
24 Aug 2020, Stefan Ritt, Forum, time information
|
> 1. Is it possible to get "Running time" using, for example, jsonrpc? (please see
> the attached file)
You have in the ODB "/Runinfo/Start time binary" which is measured in seconds since
1970. By subtracting this from the current time, you get the running time.
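For example, with the python client this could be sketched as follows (the client name "pyruntime" is a hypothetical choice, and the result is only meaningful while a run is in progress):
```python
# Sketch: compute the running time from the binary start time.
import time
import midas.client

client = midas.client.MidasClient("pyruntime")
start = client.odb_get("/Runinfo/Start time binary")
print("Running time: %.0f s" % (time.time() - start))
client.disconnect()
```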
> 2. Is it possible to configure "Start time" and "Stop time" with time zone? For
> example when I start a new run, value of "Start time" key is automatically changed
> to "Fri Aug 21 12:38:36 2020" without time zone.
"Start time binary" and "Stop time binary" are in seconds since the 1970 in UTC, so no
time zone involved there. The ASCII versions of the start/stop time are derived from
the binary time using the server's local time zone. If you want to display them in a
different time zone, you have to create a custom page and convert it to another time
zone using JavaScript like
var d = new Date(start_time_binary * 1000); // JavaScript Date expects milliseconds, the ODB value is in seconds
Stefan |
25 Aug 2020, Ruslan Podviianiuk, Forum, time information
|
Thank you, Stefan
Ruslan
> > 1. Is it possible to get "Running time" using, for example, jsonrpc? (please see
> > the attached file)
>
> You have in the ODB "/Runinfo/Start time binary" which is measured in seconds since
> 1970. By subtracting this from the current time, you get the running time.
>
> > 2. Is it possible to configure "Start time" and "Stop time" with time zone? For
> > example when I start a new run, value of "Start time" key is automatically changed
> > to "Fri Aug 21 12:38:36 2020" without time zone.
>
> "Start time binary" and "Stop time binary" are in seconds since the 1970 in UTC, so no
> time zone involved there. The ASCII versions of the start/stop time are derived from
> the binary time using the server's local time zone. If you want to display them in a
> different time zone, you have to create a custom page and convert it to another time
> zone using JavaScript like
>
> var d = new Date(start_time_binary * 1000); // JavaScript Date expects milliseconds, the ODB value is in seconds
>
> Stefan |
24 Aug 2020, Konstantin Olchanski, Release, midas-2020-08
|
midas-2020-08-a is here.
new features and notable updates since midas-2020-03:
- new C++ ODB interface odbxx.h
- image history
- much improved history plots
- new sequencer pages
- UTF-8 clean ODB (complains if any TID_STRING is invalid UTF-8)
- mhttpd update to mongoose 6.16 with much improved multithreading
- mhttpd update to use MBEDTLS in preference to problematic OpenSSL
- MidasConfig.cmake contributed by Mathieu Guigue
plans for next development: major update of mlogger to simplify channel
configuration in odb, improvements to mhttpd multithreading, new history plot
configuration page, more c++ification.
To obtain this release, either checkout the top of branch release/midas-2020-08
(recommended) or checkout the tag midas-2020-08-a.
K.O. |
28 Aug 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices
|
Hello experts,
I have been writing a SC frontend for a power supply. I have used the model
where the frontend can be started with the "-i n" option so that each fe can
control a different supply. During the development/testing of the program I
would normally only run a single instance with "-i 1". However when I started
a second instance with "-i 2" I found problems with the history plots that
were being made for the original "-i 1" instance. The variable being plotted
seemed to randomly jump between the value from the "-i 1" instance and
the "-i 2" instance. confirmed that the "correct" values exist for each
frontend in the odb under /Equipment/Foo01/Variables and
/Equipment/Foo02/Variables
This is also not just a plotting artifact since I was also
able to see the two different values by running mhist.
I saw this behaviour using midas-2019-03 and also the head of the development
branch (686e4de2b55023b0d1936c60bcf4767c5e6caac0 from just under 48 hours ago).
I was able to reproduce this with a stripped down frontend that just
sets a variable that is equal to its frontend_index. Please find the code
and Makefile attached. Presumably I've done something wrong in my
implementation that hopefully a more experienced person can spot quite
quickly, but please let me know if any more information is needed.
I have seen this behaviour on both Debian 10 and on a CentOS 7 Singularity
image running on top of Debian 10.
Thanks,
Nick.
P.S. I made the topic of this post "Forum" and not "Bug Report" since I
expect the root of this problem is somewhere between the keyboard and chair. |
28 Aug 2019, Stefan Ritt, Forum, History plot problems for frontend with multiple indices
|
My first question would be: why are you using several frontends at all? That makes things more
complicated than needed. In the normal FE framework, you can define either several equipment
served by one frontend, or even one equipment linked to several devices. In the MEG experiment
we have one slow control frontend controlling ~100 devices without problem. In the old days there
was a problem that some slow devices could throttle the readout, but since the invention of multi-
threaded slow control equipment, each device gets its own thread so they don't block each other.
Stefan |
28 Aug 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices
|
Hi Stefan,
thanks for your quick reply.
> My first question would be: why are you using several frontends at all?
Because I was following the model used for many of the frontends for the ND280 FGD.
> That makes things more
> complicated than needed. In the normal FE framework, you can define either several equipment
> served by one frontend, or even one equipment linked to several devices. In the MEG experiment
> we have one slow control frontend controlling ~100 devices without problem. In the old days there
> was a problem that some slow devices could throttle the readout, but since the invention of multi-
> threaded slow control equipment, each device gets its own thread so they don't block each other.
Perhaps things have changed in the 10 years since the FGD SC code was written. I can do it
differently, but doing it that way seemed natural since around 90% of the frontend code that I
have seen does it that way.
Nick. |
28 Aug 2019, lcp, Forum, History plot problems for frontend with multiple indices
|
hi,
> > That makes things more
> > complicated than needed. In the normal FE framework, you can define either several equipment
> > served by one frontend, or even one equipment linked to several devices. In the MEG experiment
> > we have one slow control frontend controlling ~100 devices without problem. In the old days there
> > was a problem that some slow devices could throttle the readout, but since the invention of multi-
> > threaded slow control equipment, each device gets its own thread so they don't block each other.
>
I agree with Stefan that it's probably better to run a multi-threaded setup than individual frontends.
The only place I've ever used the frontend index on startup is when I was testing and building
an eventbuilder.
https://midas.triumf.ca/MidasWiki/index.php/Event_Builder#Example
This might explain why your history is swapping between frontends, as in the event builder it gets
reconstructed.
Maybe this helps...
LCP
> Perhaps things have changed in the 10 years since the FGD SC code was written. I can do it
> differently, but doing it that way seemed natural since around 90% of the frontend code that I
> have seen does it that way. |
16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices
|
> it's probably better to run a multi-threaded setup, than individual frontends.
I recommend against using multiple threads if at all possible and unless absolutely required.
Only for one reason: multithreaded c++ programs are notoriously hard to debug.
In addition, one has to face several classes of bugs absent in single-threaded applications:
a) which thread "owns" which object
b) locking of all shared data
c) huge overheads from locking at high data rates (a performance bug)
d) correct locking order, dead locks, live locks
e) incomprehensible core dumps and stack traces
f) race conditions
To control 2 power supplies, run 2 frontend programs, 1 per power supply.
To control 64 frontend cards, run 1 frontend with many threads: 64 (per device) + 1 (main thread) + 1 (RPC handler) + 1
(watchdog) + 1 (common event generator/data transmitter) + 1 (odb/web page status update). You *will* bump into each
and every one of the problems (a) to (f) above.
K.O. |
16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices
|
> My first question would be: why are you using several frontends at all? That makes things more
> complicated than needed. In the normal FE framework, you can define either several equipment
> served by one frontend, or even one equipment linked to several devices.
I am the culprit here, as I wrote the original code for T2K/ND280 that Nick is looking at.
At the time, we needed to control multiple units of identical equipment. Most of these equipments
needed to be controlled independently from each other, so we could not/did not want to use
one single frontend executable to control all of them at the same time. For example, for equipment
not in use, we can stop the corresponding frontend. In case of trouble, we can restart
the corresponding frontend without disrupting the frontends for the other equipments.
The successful operation of the T2K/ND280 experiment is sufficient defence for the validity of this approach.
One lesson learned was that the MIDAS frontend framework did not make it easy to have multiple identical frontends
for controlling multiple identical equipments. (typical use is control of 2-3 Wiener power supplies, 1-2-3 UPS
devices, etc). At the time (and today), only the "-i NNN" flag is available to tell the frontend "who am I?". To make it
work, one has to use the hard-to-read "%02d" stuff in the equipment name, and there are other complications. For my
"next generation" of frontends, I tried to specialize the frontend executables at compile time using C/C++
preprocessor defines (-Dwiener01, -Dwiener02, etc), this worked better, but still not super happy.
My current solution as implemented by the tmfe frontend framework is to give the user full control
over the command line arguments (mfe.c did not permit any "user arguments" and did not allow
access to argc/argv) and full control over the equipment names (mfe.c equipment names are fixed at compile time).
K.O. |
29 Aug 2019, Ben Smith, Forum, History plot problems for frontend with multiple indices
|
Hi Nick,
I confirm that this issue appears when using the MIDAS history driver. The issue does not appear when using the MYSQL history driver.
One solution is to give each frontend instance a different Event ID (see example code below for doing this in frontend_init). The history system did still seem to be confused by the existing FeDummy equipments/events even after making this change. However, after changing EQ_NAME from FeDummy to FeDum (i.e. starting from a clean state history-wise) things behaved normally.
I will note that some experiments definitely have a need for the "-i" method, especially those that run on distributed clusters.
Ben
```
INT frontend_init()
{
sprintf(eq_name, "%s%02d", EQ_NAME, get_frontend_index());
// Ensure each FE gets a different Event ID in the history system (951, 952 etc)
char keyname[128];
HNDLE hkey;
int status;
sprintf(keyname, "/Equipment/%s/Common/Event ID", eq_name);
status = db_find_key(hDB, 0, keyname, &hkey);
if (status != DB_SUCCESS) abort();
WORD new_ev_id = 950 + get_frontend_index();
status = db_set_data_index(hDB, hkey, &new_ev_id, 2, 0, TID_WORD);
if (status != DB_SUCCESS) abort();
return SUCCESS;
}
``` |
01 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices
|
Hi Ben,
thanks for your reply. I can confirm that your suggested workaround does indeed
make the problem disappear.
I guess this issue hasn't been seen at T2K since we use MYSQL for the history.
Thanks,
Nick. |
16 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices
|
> thanks for your reply. I can confirm that your suggested workaround does indeed
> make the problem disappear.
> I guess this issue hasn't been seen at T2K since we use MYSQL for the history.
I think you found the source of the problem, confused event id assignments. To confirm,
can you email me (or post here) the output of odbedit "ls -l /History/Events".
If that's the problem, you can avoid it completely by switching to a history storage method
that does not rely on magic mapping between equipment names and numeric event id's:
try the "FILE" method (set odb /Logger/History/FILE/Active to "y", restart the logger) or
the "MYSQL" method (you will need to setup a mysql database). You tell mhttpd and mhist which
history data to read by setting ODB /History/LoggerHistoryChannel to one of the channel names
from /logger/history/, restart mhttpd. (mhttpd and mhist used to print a message "reading history
data from channel XXX", but somebody removed this message).
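For example, enabling the FILE method from the command line could look like this (a sketch only; the channel name "FILE" is an assumption, check the names under /logger/history/, and restart mlogger and mhttpd afterwards):
odbedit -c 'set "/Logger/History/FILE/Active" y'
odbedit -c 'set "/History/LoggerHistoryChannel" FILE'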
K.O. |
16 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices
|
Hi Konstantin,
thanks for your reply.
> > thanks for your reply. I can confirm that your suggested workaround does indeed
> > make the problem disappear.
> > I guess this issue hasn't been seen at T2K since we use MYSQL for the history.
>
> I think you found the source of the problem, confused event id assignments. To confirm,
> can you email me (or post here) the output of odbedit "ls -l /History/Events".
Sorry, do you want this for after I've applied the fix suggested by Ben or with the original code
that I posted.
With the original code it only shows one fe even though both are running:
[local:e666:S]History>ls -l /History/Events
Key name Type #Val Size Last Opn Mode Value
---------------------------------------------------------------------------
1 STRING 1 10 2m 0 RWD FeDummy02
0 STRING 1 16 2m 0 RWD Run transitions
[local:e666:S]History> scl
Name Host
mhttpd localhost
fedummy01 localhost
fedummy02 localhost
ODBEdit localhost
Logger localhost
[local:e666:S]History>ls "/History/Display/Default/Dummy/
Timescale 1h
Zero ylow n
Show run markers y
Show values y
Sort Vars n
Log axis n
Minimum 0
Maximum 0
Variables
FeDummy01:Data
FeDummy02:Data
Label
Colour
#00AAFF
#FF9000
Factor
0
0
Offset
0
0
Buttons
10m
1h
3h
12h
24h
3d
7d
Formula
Show old vars n
> If that's the problem, you can avoid it completely by switching to a history storage method
> that does not rely on magic mapping between equipment names and numeric event id's:
> try the "FILE" method (set odb /Logger/History/FILE/Active to "y", restart the logger) or
> the "MYSQL" method (you will need to setup a mysql database). You tell mhttpd and mhist which
> history data to read by setting ODB /History/LoggerHistoryChannel to one of the channel names
> from /logger/history/, restart mhttpd. (mhttpd and mhist used to print a message "reading history
> data from channel XXX", but somebody removed this message).
Using the original code I posted and switching from MIDAS history to FILE history did not seem to
change the random behaviour in the history plots.
Regards,
Nick. |
17 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices
|
> [local:e666:S]History>ls -l /History/Events
> Key name Type #Val Size Last Opn Mode Value
> ---------------------------------------------------------------------------
> 1 STRING 1 10 2m 0 RWD FeDummy02
> 0 STRING 1 16 2m 0 RWD Run transitions
Something is very broken. There should be more entries here, at least
there should be entries for "FeDummy01" and usually there is also an entry
for "FeDummy" because one invariably runs fedummy without "-i" at least once.
The fact that changing from "midas" storage to "file" storage makes no difference
also indicates that something is very broken.
I want to debug this.
Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat" files
from the "file" storage.
K.O. |
18 Sep 2019, Nick Hastings, Forum, History plot problems for frontend with multiple indices
|
Hi Konstantin,
> > [local:e666:S]History>ls -l /History/Events
> > Key name Type #Val Size Last Opn Mode Value
> > ---------------------------------------------------------------------------
> > 1 STRING 1 10 2m 0 RWD FeDummy02
> > 0 STRING 1 16 2m 0 RWD Run transitions
>
> Something is very broken. There should be more entries here, at least
> there should be entries for "FeDummy01" and usually there is also an entry
> for "FeDummy" because one invariably runs fedummy without "-i" at least once.
This is a fresh experiment that I started just to test this issue, which is why there are not many
entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
> The fact that changing from "midas" storage to "file" storage makes no difference
> also indicates that something is very broken.
>
> I want to debug this.
>
> Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat"
> files from the "file" storage.)
When I started this experiment yesterday(?) I disabled the Midas history when I enabled the file
history. Just now I re-enabled the Midas history, so they are currently both active.
% ls -l work/online/{*.hst,mhf*.dat}
-rw-r--r-- 1 hastings hastings 14996 Sep 17 10:21 work/online/190917.hst
-rw-r--r-- 1 hastings hastings 3292 Sep 18 16:29 work/online/190918.hst
-rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
-rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
-rw-r--r-- 1 hastings hastings 166 Sep 17 10:17 work/online/mhf_1568683062_20190917_run_transitions.dat
And again, just as a sanity check:
% odbedit -c 'ls -l /History/Events'
Key name Type #Val Size Last Opn Mode Value
---------------------------------------------------------------------------
1 STRING 1 10 1m 0 RWD FeDummy02
0 STRING 1 16 1m 0 RWD Run transitions
Regards,
Nick. |
27 Sep 2019, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices
|
We should fix this for midas-2019-10.
https://bitbucket.org/tmidas/midas/issues/193/confusion-in-history-event-ids
K.O.
> Hi Konstantin,
>
> > > [local:e666:S]History>ls -l /History/Events
> > > Key name Type #Val Size Last Opn Mode Value
> > > ---------------------------------------------------------------------------
> > > 1 STRING 1 10 2m 0 RWD FeDummy02
> > > 0 STRING 1 16 2m 0 RWD Run transitions
> >
> > Something is very broken. There should be more entries here, at least
> > there should be entries for "FeDummy01" and usually there is also an entry
> > for "FeDummy" because one invariably runs fedummy without "-i" at least once.
>
> This is a fresh experiment that I started just to test this issue, which is why there are not many
> entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
>
> > The fact that changing from "midas" storage to "file" storage makes no difference
> > also indicates that something is very broken.
> >
> > I want to debug this.
> >
> > Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> > with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat"
> > files from the "file" storage.)
>
> When I started this experiment yesterday(?) I disabled the Midas history when I enabled the file
> history. Just now I re-enabled the Midas history, so they are currently both active.
>
> % ls -l work/online/{*.hst,mhf*.dat}
> -rw-r--r-- 1 hastings hastings 14996 Sep 17 10:21 work/online/190917.hst
> -rw-r--r-- 1 hastings hastings 3292 Sep 18 16:29 work/online/190918.hst
> -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
> -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
> -rw-r--r-- 1 hastings hastings 166 Sep 17 10:17 work/online/mhf_1568683062_20190917_run_transitions.dat
>
> And again, just as a sanity check:
>
> % odbedit -c 'ls -l /History/Events'
> Key name Type #Val Size Last Opn Mode Value
> ---------------------------------------------------------------------------
> 1 STRING 1 10 1m 0 RWD FeDummy02
> 0 STRING 1 16 1m 0 RWD Run transitions
>
> Regards,
>
> Nick. |
24 Aug 2020, Konstantin Olchanski, Forum, History plot problems for frontend with multiple indices
|
This turned out to be a tricky problem. I am adding a warning about it in mlogger. This should go into midas-
2020-07. Closing bug #193. K.O.
> We should fix this for midas-2019-10.
>
> https://bitbucket.org/tmidas/midas/issues/193/confusion-in-history-event-ids
>
> K.O.
>
> > Hi Konstantin,
> >
> > > > [local:e666:S]History>ls -l /History/Events
> > > > Key name Type #Val Size Last Opn Mode Value
> > > > ---------------------------------------------------------------------------
> > > > 1 STRING 1 10 2m 0 RWD FeDummy02
> > > > 0 STRING 1 16 2m 0 RWD Run transitions
> > >
> > > Something is very broken. There should be more entries here, at least
> > > there should be entries for "FeDummy01" and usually there is also an entry
> > > for "FeDummy" because one invariably runs fedummy without "-i" at least once.
> >
> > This is a fresh experiment that I started just to test this issue, which is why there are not many
> > entries in /History/Events. I agree though that we should expect to see a FeDummy01 entry.
> >
> > > The fact that changing from "midas" storage to "file" storage makes no difference
> > > also indicates that something is very broken.
> > >
> > > I want to debug this.
> > >
> > > Since you tried the "file" storage, can you send me the output of "ls -l mhf*.dat" in the directory
> > > with the history files? (it should have the "*.hst" files from the "midas" storage and "mhf*.dat"
> > > files from the "file" storage.)
> >
> > When I started this experiment yesterday(?) I disabled the Midas history when I enabled the file
> > history. Just now I re-enabled the Midas history, so they are currently both active.
> >
> > % ls -l work/online/{*.hst,mhf*.dat}
> > -rw-r--r-- 1 hastings hastings 14996 Sep 17 10:21 work/online/190917.hst
> > -rw-r--r-- 1 hastings hastings 3292 Sep 18 16:29 work/online/190918.hst
> > -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy01.dat
> > -rw-r--r-- 1 hastings hastings 867288 Sep 18 16:29 work/online/mhf_1568683062_20190917_fedummy02.dat
> > -rw-r--r-- 1 hastings hastings 166 Sep 17 10:17 work/online/mhf_1568683062_20190917_run_transitions.dat
> >
> > And again, just as a sanity check:
> >
> > % odbedit -c 'ls -l /History/Events'
> > Key name Type #Val Size Last Opn Mode Value
> > ---------------------------------------------------------------------------
> > 1 STRING 1 10 1m 0 RWD FeDummy02
> > 0 STRING 1 16 1m 0 RWD Run transitions
> >
> > Regards,
> >
> > Nick. |
12 Aug 2020, Yan Liu, Suggestion, adding db_get_mode to check access mode for keys
|
Hello,
I am wondering if there is a function that checks the access mode for a key? I
found the db_set_mode() function that allows me to set the access mode for a key,
but failed to find its counterpart get function.
Thanks in advance,
Yan |
13 Aug 2020, Stefan Ritt, Suggestion, adding db_get_mode to check access mode for keys
|
> Hello,
>
> I am wondering if there is a function that checks the access mode for a key? I
> found the db_set_mode() function that allows me to set the access mode for a key,
> but failed to find its counterpart get function.
>
> Thanks in advance,
> Yan
KEY k;
db_get_key(hDB, handle, &k);
std::cout << k.access_mode << std::endl;
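// note: access_mode is a bit mask; individual bits can be tested with the MODE_* flags from midas.h, e.g. (k.access_mode & MODE_WRITE)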
/Stefan |
13 Aug 2020, Yan Liu, Suggestion, adding db_get_mode to check access mode for keys
|
Thank you!
Yan
> > Hello,
> >
> > I am wondering if there is a function that checks the access mode for a key? I
> > found the db_set_mode() function that allows me to set the access mode for a key,
> > but failed to find its counterpart get function.
> >
> > Thanks in advance,
> > Yan
>
>
> KEY k;
> db_get_key(hDB, handle, &k);
> std::cout << k.access_mode << std::endl;
>
> /Stefan |
07 Aug 2020, Konstantin Olchanski, Info, update of MYSQL history documentation
|
I updated the documentation for setting up a MYSQL (MariaDB) database for
recording MIDAS history: https://midas.triumf.ca/MidasWiki/index.php/History_System#Write_MYSQL-history_events
One thing to note: the "writer" user must have the "INDEX" permission, otherwise
many things will not work correctly.
Included are the instructions for importing existing *.hst history files into the
SQL database: mh2sql --mysql mysql_writer.txt *.hst
Let me know if there is interest in adding support for writing into Postgres SQL
database. We used to support both MySQL and Postgres through the ODBC library,
but in the new code, each database has to be supported through its native API.
There is code for SQLITE, MYSQL, but no code for Postgres, although it is not too
hard to add.
K.O. |
15 Jul 2020, Stefan Ritt, Info, Minimal CMakeLists.txt for your midas front-end
|
Since a few people asked me, here is a "minimal" CMakeLists.txt file for a user-written front-end
program "myfe":
---------------------------
cmake_minimum_required(VERSION 3.0)
project(myfe)
# Check for MIDASSYS environment variable
if (NOT DEFINED ENV{MIDASSYS})
message(SEND_ERROR "MIDASSYS environment variable not defined.")
endif()
set(CMAKE_CXX_STANDARD 11)
set(MIDASSYS $ENV{MIDASSYS})
if (${CMAKE_SYSTEM_NAME} MATCHES Linux)
set(LIBS -lpthread -lutil -lrt)
endif()
add_executable(myfe myfe.cxx)
target_include_directories(myfe PRIVATE ${MIDASSYS}/include)
target_link_libraries(myfe ${MIDASSYS}/lib/libmfe.a ${MIDASSYS}/lib/libmidas.a ${LIBS}) |
28 Jun 2020, Konstantin Olchanski, Info, Makefile update
|
I reworked the MIDAS Makefile to simplify things and to remove redundancy with functions
provided by cmake.
When you say "make", the list of options is printed.
The first and main options are "make cmake" and "make cclean" to run the cmake build.
This is my recommended way to build midas - the output of "make cmake" was tuned to provide
the information needed to debug build problems (all compiler commands, command line switches
and file paths are reported). (normal "cmake VERBOSE=1" is tuned for debugging of cmake and
for maximum obfuscation of problems building the actual project).
Build options are implemented through cmake variables:
options that can be added to "make cmake":
NO_LOCAL_ROUTINES=1 NO_CURL=1
NO_ROOT=1 NO_ODBC=1 NO_SQLITE=1 NO_MYSQL=1 NO_SSL=1 NO_MBEDTLS=1
NO_EXPORT_COMPILE_COMMANDS=1
for example "make cmake NO_ROOT=1" to disable auto-detection of ROOT.
Two more make targets create reduced builds of midas:
"make mini" builds a subset of midas suitable for building frontend programs. Big programs
like mlogger and mhttpd are excluded, optional components like CURL or SQLITE are not needed.
"make remoteonly" builds a subset of midas suitable for building remotely connected
frontends. Big parts of midas are excluded, many system-dependent functions are excluded,
etc. This is intended for embedded applications, such as fpga, uclinux, etc.
But wait, there is more. Here is the full list:
daqubuntu:midas$ make
Usage:
make cmake --- full build of midas
make cclean --- remove everything build by make cmake
options that can be added to "make cmake":
NO_LOCAL_ROUTINES=1 NO_CURL=1
NO_ROOT=1 NO_ODBC=1 NO_SQLITE=1 NO_MYSQL=1 NO_SSL=1 NO_MBEDTLS=1
NO_EXPORT_COMPILE_COMMANDS=1
make dox --- run doxygen, results are in ./html/index.html
make cleandox --- remove doxygen output
make htmllint --- run html check on resources/*.html
make test --- run midas self test
make mbedtls --- enable mhttpd support for https via the mbedtls https library
make update_mbedtls --- update mbedtls to latest version
make clean_mbedtls --- remove mbedtls from this midas build
make mtcpproxy --- build the https proxy to forward root-only port 443 to mhttpd https
port 8443
make mini --- minimal build, results are in linux/{bin,lib}
make cleanmini --- remove everything build by make mini
make remoteonly --- minimal build, remote connection only, results are in linux-
remoteonly/{bin,lib}
make cleanremoteonly --- remove everything build by make remoteonly
make linux32 --- minimal x86 -m32 build, results are in linux-m32/{bin,lib}
make clean32 --- remove everything built by make linux32
make linux64 --- minimal x86 -m64 build, results are in linux-m64/{bin,lib}
make clean64 --- remove everything built by make linux64
make linuxarm --- minimal ARM cross-build, results are in linux-arm/{bin,lib}
make cleanarm --- remove everything built by make linuxarm
make clean --- run all 'clean' commands
daqubuntu:midas$
K.O. |
15 Jul 2020, Stefan Ritt, Info, Makefile update
|
Please note that you can also compile midas in the standard cmake way with
$ mkdir build
$ cd build
$ cmake ..
$ make install
in the root midas directory. You might have to use "cmake3" on some systems.
Stefan |
28 Jun 2020, Konstantin Olchanski, Info, mhttpd https support openssl -> mbedtls
|
For password protection of midas web pages, https is required; good old http
with passwords transmitted in-the-clear is no longer considered secure. The latest
recommendation is to run mhttpd behind an industry-standard https proxy, for
example apache httpd. These proxies provide built-in password protection and
have integration with certbot to provide automatic renewal of https
certificates.
That said, for a long time now mhttpd provides native https support through the
mongoose web server library and the openssl cryptography library.
Unfortunately, for years now, we have been running into trouble with the midas
build process bombing out due to inconsistent versions and locations of system-
provided and user-installed openssl libraries. Despite our best efforts (and
through the switch to cmake!) these problems keep coming back and coming back.
Luckily, latest versions of mongoose support the mbedtls cryptography library. I
have tested it and it works well enough for me to switch the MIDAS default build
from "openssl if found" to "mbedtls if-asked-for-by-user".
Starting with commit e7b02f9, cmake builds do not look for and do not try to use
openssl. mhttpd is built without support for https. This is consistent with the
recommendation to run it behind an apache httpd password protected https proxy.
To enable https support using mbedtls, run "make mbedtls". This will "git clone"
the mbedtls library and add it to the midas build. mhttpd will be built with
https support enabled.
To disable mbedtls support, use "make cmake NO_MBEDTLS=1" or run "make
clean_mbedtls" (this will remove the mbedtls sources from the midas build).
To restore previous use of openssl, set the cmake variable "USE_OPENSSL".
In my test, mhttpd with https through mbedtls and a letsencrypt certificate gains
a score of "A" from SSLlabs (very good).
(you have to use progs/mtcproxy to run this test - SSLlabs only probes port 443,
and mtcproxy will forward it to mhttpd port 8443. To build, run "make
mtcpproxy".)
References:
https://github.com/cesanta/mongoose
https://github.com/ARMmbed/mbedtls
K.O. |
28 Jun 2020, Konstantin Olchanski, Info, mhttpd https support openssl -> mbedtls
|
To add: using https with either openssl or mbedtls requires obtaining an https certificate. This can be self-
signed, or signed by a higher authority, or issued by the "let's encrypt" project.
mhttpd is looking for this certificate in the file ssl_cert.pem.
If this file does not exist, mhttpd will print the instructions for creating it using openssl (self-signed) or
using certbot (an instantly and automatically issued let's encrypt certificate).
The certbot route is recommended:
1) (as root) setup certbot (i.e. see my CentOS and Ubuntu instructions on DAQWiki)
2) (as root) copy /etc/letsencrypt/live/<your-domain>/fullchain.pem and privkey.pem to $MIDASSYS
3) cat fullchain.pem privkey.pem > ssl_cert.pem
4) start mhttpd, watch the first few lines it prints to confirm it found the right certificate file.
The only missing piece for using this in production is the lack of integration
with certbot automatic certificate renewal:
- a script has to run for steps (2) and (3) above (see the sketch below)
- mhttpd has to tell openssl/mbedtls to reload the certificate file (alternative is to automatically restart
mhttpd, bad!).
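A minimal script for steps (2) and (3) might look like this (a sketch only; "<your-domain>" is a placeholder for the certificate directory name, and the mhttpd reload problem remains):
cat /etc/letsencrypt/live/<your-domain>/fullchain.pem /etc/letsencrypt/live/<your-domain>/privkey.pem > $MIDASSYS/ssl_cert.pem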
As an alternative, we can wait for the mongoose web server library and for the mbedtls crypto library to "grow"
certbot-style automatic certificate renewal features. (unavoidable, in my view).
K.O. |
24 Jun 2020, Stefan Ritt, Info, New image history system available
|
I'm happy to report that the Corona Lockdown in Europe also had some positive side
effects: finally I found time to implement an image history system in midas,
something I had wanted to do for many years but never found time for.
The idea is that you can incorporate any network-connected WebCam into the midas
history system. You specify an update interval (like one minute) and the logger
fetches regularly images from that webcam. The images are stored as raw files in
the midas history directory, and can be retrieved via the web browser similarly to
the "normal" history. Attached is an image from the MEG Experiment at PSI to give
you some idea.
The cool thing now is that you can go "backwards" in time and browse all stored
images. The buttons at each image allow you to step backward, forward, and play a
movie of images, forward or backward. You can query for a certain date/time and
download a specific image to your local disk. You can even synchronize all time
axes, drag left and right on each image to see your experiment from different
cameras at the same time stamps. You see a blue ribbon below each image which shows
time stamps for which an image is available.
Initially, only the most recent image is loaded to speed up loading time. As soon
as you click on the image or one of the arrow buttons, previous images are loaded
progressively, which you can see in the ribbon bar becoming blue. For slow internet
connections this can take some time. For typical webcams and one minute update
period you get typically a few GB per week.
To make this happen, you define a new ODB subtree
/History/Images/<name>/
  Name           Name of the camera
  Enabled        Boolean to enable readout of the camera
  URL            URL to fetch an image from the camera
  Period         Time period in seconds to fetch a new image
  Storage hours  Number of hours to store the images (0 for infinite)
  Extension      Image file extension, usually ".jpg" or ".png"
  Timescale      Initial horizontal time scale (like "8h")
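As a rough sketch, such a subtree could be created with the python client (the camera name "MyCam", the AXIS-style URL and the key types chosen here are assumptions, so compare against the defaults the logger creates):
```python
# Sketch: configure one image history camera; key names follow the list above.
import midas.client

client = midas.client.MidasClient("camsetup")
base = "/History/Images/MyCam"
client.odb_set(base + "/Name", "MyCam")
client.odb_set(base + "/Enabled", True)
client.odb_set(base + "/URL", "http://mycam/axis-cgi/jpg/image.cgi")
client.odb_set(base + "/Period", 60)         # fetch one image per minute
client.odb_set(base + "/Storage hours", 0)   # 0 = keep images forever
client.odb_set(base + "/Extension", ".jpg")
client.odb_set(base + "/Timescale", "8h")
client.disconnect()
```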
The tricky part is to obtain the URL from your camera. For some cameras you can get
it from the manual, for others you have to "hack": display an image in your browser
using the camera's internal web interface, inspect the source code of that web page,
and you get the URL. For the AXIS cameras I use, the URL is typically
http://<name>/axis-cgi/jpg/image.cgi
For the Netatmo cameras I have at home (which I used during development in my home
office), the procedure is more complicated, but you can google it. The logger is
now linked against the CURL library to fetch images, so it also supports https://.
If libcurl is not installed on your system, the image history functionality will be
disabled.
I have tested the system for a few days now and it seems stable, which however does not
mean that it is bug-free. So please report back any issue. The change is committed
to the current develop branch.
I hope this extension helps all those people who are forced to do more remote
monitoring of experiments during these times.
Best,
Stefan |
|