    Reply  31 Jan 2025, Pavel Murat, Forum, converting non-MIDAS slow control data into MIDAS history format ? 
I think I found an answer to my question: a user-controlled event header does have a time stamp: 

https://daq00.triumf.ca/MidasWiki/index.php/Event_Structure#Event_Header
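For reference, the header in question is defined in midas.h roughly as follows (comments mine):

   typedef struct {
      short int event_id;      /* event type, assigned by the frontend   */
      short int trigger_mask;  /* hardware trigger mask                  */
      DWORD     serial_number; /* serial number, starting from 1         */
      DWORD     time_stamp;    /* time of production of the event        */
      DWORD     data_size;     /* size of the event data, without header */
   } EVENT_HEADER;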

-- apologies for the spam, regards, Pasha
    Reply  01 Feb 2025, Pavel Murat, Bug Report, MIDAS history system not using the event timestamps ? test_frontend.py val_nightly.csv
> I have a time series of slow control measurements in an ASCII format - 
> data records in a format (run_number, time, temperature, voltage1, ..., voltageN), 
> and, if possible, would like to convert them into a MIDAS history format. 
> 
> Making MIDAS events out of that data is easy, but is it possible to preserve 
> the time stamps?  - Logically, this boils down to whether it is possible to have  
> the event time set by a user frontend

It looks like the original question was not as naive as I expected and may be pointing to a subtle bug. 
I have implemented a python frontend - essentially a clone of 

https://bitbucket.org/tmidas/midas/src/develop/python/midas/frontend.py

reading the old slow control data and setting event.header.timestamp to dates from the year 2022. 
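For reference, the same header manipulation in a C++ frontend would look roughly like this 
(a sketch only - my actual test used the python frontend above; bank filling, data_size 
bookkeeping and event sending are elided):

   #include "midas.h"

   void compose_historical_event(char* buf, DWORD historical_time, DWORD serial)
   {
      EVENT_HEADER* pheader = (EVENT_HEADER*) buf;

      /* bm_compose_event() stamps the header with the current time ... */
      bm_compose_event(pheader, 1 /*event_id*/, 0 /*trigger_mask*/, 0, serial);

      /* ... so overwrite it with the time of the old measurement, e.g. from 2022 */
      pheader->time_stamp = historical_time;

      bk_init32(pheader + 1);
      /* create banks here, then fix up pheader->data_size and send as usual */
   }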

When I run MIDAS and read the "old slow control events", one event every 10 seconds, 
the MIDAS Event Dump utility shows the data with the correct event timestamps, from 2022. 

However, the history plots show the event parameters with timestamps from Feb 01 2025, with adjacent 
data points separated by 10 sec.

Is it possible that the history system uses its own timestamp setting instead of using timestamps from the event headers? 
- Under normal circumstances, the two should be very close, and that could've kept the issue hidden... 

-- thanks, regards, Pasha

UPDATE:  I attached the frontend code and the input data file it reads. The data file should reside in the local directory.
- the frontend code doesn't have everything fully automated for the test:
  -- an integer field "/Mu2e/Offline/Ops/LastTime" needs to be created manually
  -- the history plots need to be declared manually
    Reply  01 Apr 2025, Pavel Murat, Bug Report, MIDAS history system not using the event timestamps ? 
Dear MIDAS experts, 

I confirm that when writing out history files corresponding to the slow control event data, 
the MIDAS history system timestamps the data not with the event time coming from the event data, 
but with the current time determined by the program - 

https://bitbucket.org/tmidas/midas/src/293d27fad0c87c80c4ed7b94b5c40ba1e150bea4/progs/mlogger.cxx#lines-5321

where 'now' is defined as  

time_t now = time(NULL);
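Schematically, the change I have in mind would be something like this (a sketch only, 
not the actual mlogger code beyond the quoted line, and assuming the event header is 
available there as 'pheader'):

   time_t now = time(NULL);                    // current behaviour: wall-clock time
   time_t evt = (time_t) pheader->time_stamp;  // proposed: the time from the EVENT_HEADER
   // ... and pass 'evt' instead of 'now' to hs_write_event()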

I'm looking for a way to timestamp the history data with the event time - that is important 
for HEP applications outside the DAQ domain. Yes, the MIDAS infrastructure is very well suited for that; 
there could be a number of such applications, and experiments could significantly benefit from them.

So I'm wondering whether this implementation is a deliberate design choice or whether it could be changed. 

The change itself and especially its validation may require a non-negligible amount of work - I'd be happy to contribute.

Any insight much appreciated. 

-- thanks, regards, Pasha
    Reply  01 Apr 2025, Pavel Murat, Suggestion, Sequencer ODBSET feature requests 
> I once looked at using LUA for this,
> but I think basing off an full featured programming language like python
> is better.

if it came to a vote, my vote would go to Lua: it would allow one to do everything needed, 
with far fewer external dependencies and much less temptation to over-use the interpreter. 
The CMS experience was very instructive in this respect... 

-- my 2c, regards, Pasha
    Reply  02 Apr 2025, Pavel Murat, Bug Report, MIDAS history system not using the event timestamps ? 
> You can always include your "true" data timestamp as the first value in your data.

Are you saying that if the first data word of a history event were a timestamp, 
the MIDAS history system, when plotting the time dependencies, would use that timestamp 
instead of the mlogger timestamp?  

If that is true, what tells MIDAS that the first data word is the timestamp? 

I couldn't find a discussion of that on the page describing the history system - 

 https://daq00.triumf.ca/MidasWiki/index.php/History_System#Frontend_history_event

- perhaps I should be looking at a different page?

-- thanks again, regards, Pasha 
Entry  29 Apr 2025, Pavel Murat, Bug Report, ODBXX : ODB links in the path names ? 
Dear MIDAS experts,

does the ODBXX interface to the ODB currently support ODB links in the path names? From what I see 
so far, it fails to do so, but I could be doing something wrong... 
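For concreteness, here is a stripped-down version of what I'm trying; "/Alias/Settings" 
is just an illustration, assumed to be an ODB link pointing at "/Equipment/MyEq/Settings":

   #include "midas.h"
   #include "odbxx.h"

   int main() {
      cm_connect_experiment(NULL, NULL, "test", NULL);

      midas::odb direct  ("/Equipment/MyEq/Settings"); // works
      midas::odb via_link("/Alias/Settings");          // fails for me

      cm_disconnect_experiment();
      return 0;
   }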

-- thanks, regards, Pasha
    Reply  30 Apr 2025, Pavel Murat, Info, New ODB++ API 
It is a very convenient interface! Does it support ODB links in the path names? -- thanks, regards, Pasha
Entry  24 May 2025, Pavel Murat, Info, ROOT scripting for MIDAS seems to work pretty much out of the box log.txt
Dear All, 

I'm pretty sure many know this already; however, I found this feature by accident 
and want to share it with those who don't know about it yet - it seems very useful. 

- it looks like one can use ROOT scripting with rootcling, call any function 
  defined in midas.h from the interactive ROOT prompt, and access the ODB seemingly 
  WITHOUT DOING anything special 

- more surprisingly, that also works for odbxx, with one minor exception in handling 
  the 64-bit types - the proof is in the attachment. The script test_odbxx.C loaded 
  interactively is Stefan's

 https://bitbucket.org/tmidas/midas/src/develop/examples/odbxx/odbxx_test.cxx 

with one minor change - the line 
 
   o["Int64 Key"] = -1LL;

is replaced with

   int64_t x = -1LL;
   o["Int64 Key"] = x;

- apparently the interpreter has its limitations. 

My rootlogon.C file doesn't load any libraries; it only defines the appropriate 
include paths. So it seems that everything works pretty much out of the box. 
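In case someone wants to reproduce this, a rootlogon.C along these lines is all it takes 
(the exact include paths are an assumption - adjust them to your installation; $MIDASSYS 
is assumed to point to the MIDAS source tree):

   // rootlogon.C : no libraries, only include paths
   {
      gInterpreter->AddIncludePath(gSystem->ExpandPathName("$MIDASSYS/include"));
      gInterpreter->AddIncludePath(gSystem->ExpandPathName("$MIDASSYS/mxml"));
   }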

One issue has surfaced, however. All of that worked even though my experiment 
is named "test_025", while the example specifies experiment="test". 
Is it possible that only the first 4 characters are being compared? 

-- regards, Pasha
Entry  25 May 2025, Pavel Murat, Bug Report, subdirectory ordering in ODB browser ? panel_map.json panel_map.png
Dear MIDAS experts, 

I'm running into a minor but annoying issue with the subdirectory name ordering by the ODB browser. 
I have a straw-man hash map which includes ODB subdirectories named "000", "010", ... "300", 
and I have yet to succeed in having them displayed in a "natural" order: the subdirectories with names 
starting with "0" always show up at the bottom of the list - see the attached .png file. 

Neither interactive re-ordering nor manual ordering of the items in the input .json file helps. 

I have also attached a .json file which can be loaded with odbedit to reproduce the issue. 

Although I'm using a relatively recent commit - 'db1819ac', about 20 days old - is it possible 
that this issue has already been sorted out?

-- many thanks, regards, Pasha  
Entry  27 May 2025, Pavel Murat, Suggestion, handling of 2+ line-long messages  
Dear MIDAS experts, 

currently, the MIDAS messaging system is optimized for one-line messages, 
so the content of 2+ line messages shows up in the message log in reverse order, 
with the first line at the bottom of the message. 

I wonder if printing the message content in reverse order, starting from the last line, 
would make sense? That wouldn't affect one-line messages, but could make longer 
messages more useful. 

--thanks, regards, Pasha
Entry  09 Dec 2003, Paul Knowles, , db_close_record non-local/non-return 
Hi All,

I have found a weird one:

The following code executes on the frontend machine in the
frontend_exit() routine, and connects to the odb running on
another separate machine:
...
     cm_msg(MINFO,__func__, "line %d", __LINE__);

     cm_get_experiment_database(&hdb, NULL);

     cm_msg(MINFO,__func__, "line %d", __LINE__);
     status = db_find_key(hdb, 0, "/Experiment/Run Parameters", &hkey);
     cm_msg(MINFO,__func__, "line %d, hkey=%d, status=%d",
            __LINE__, hkey, status);
     checkstat("db_find_key returned status %d", status);
     cm_msg(MINFO,__func__, "line %d", __LINE__);
     status = db_close_record(hdb, hkey);

     /* NOTREACHED!! the above call to db_close_record
        doesn't return!
      */
     cm_msg(MINFO,__func__, "line %d, status=%d", __LINE__, status);
     checkstat("db_close_record returned status %d", status);

checkstat is a macro that does the following:
#define checkstat(format, arg...)                \
do {                                             \
   if (status != DB_SUCCESS) {                   \
      cm_msg(MERROR, __func__, format, ## arg);  \
      return FE_ERR_ODB;                         \
   }                                             \
} while (0)

The key exists, and the status of the search is 1
(i.e., DB_SUCCESS), and the rest of the code tries to run.  What gets
really weird is that the db_close_record _doesn't_ _return_.
The code following the NOTREACHED comment just doesn't get
called.  I get the message from the __LINE__ just in front
of the call, but not the message afterwards (cm_msg and printf 
were tried).  Somehow db_close_record is causing a non-local 
exit or signal or something. No error message is printed and the 
frontend continues to exit with exit code 0.  But, since the rest
of my frontend_exit/odb closing doesn't happen, the odb is left in
a lost state requiring a cleanup.  If I comment out the calls to 
db_close_record, the rest of my frontend_exit runs normally 
and the cm_disconnect_experiment() in mfe.c eventually closes my 
open records correctly (I expect, anyway) and this is the present 
workaround I am using.  The terror I have is that several of my 
hotlinked callback routines will call the close_record routine 
when resetting illegal values.  No end of hilarity will result there...

I was using the same code in the frontend under 1.9.2 and
have only recently upgraded to the 1.9.3-? tarball from PAA; 
there were no problems using the 1.9.2 code: this is a 1.9.3
issue.

I have localized the weirdness to what I think is the RPC interface.
Running the nullfrontend (no camac access) on the same machine that 
hosts the ODB, I can make the problem appear and disappear in the 
following way:
(odb is local on machine ``monet'')

nullfe -h monet -e acqmonad     : db_close_record will get lost

nullfe -e acqmonad              : db_close_record works as expected.

I've tried also with the patch for the 256 byte odb string bug since
many of the open records have strings of that length, but that isn't
it. The only substantial-looking change to mserver from 1.9.2 to 1.9.3
is the SIGPIPE ignore and that doesn't look like a good candidate either.
Could it be that some of the 
   #ifdef LOCAL_ROUTINES
blocks that got moved about in odb.c and other files 
are causing the remote call to get confused?

Clearly the answer is to just use stable and happy 1.9.2, but the 
people for whom I am working now really want to use ROOT for
an analyzer...


cheers,
.p.

Paul Knowles.                   phone: 41 26 300 90 64
email: Paul.Knowles@unifr.ch      Fax: 41 26 300 97 47
finger me at pexppc33.unifr.ch for more contact information
    Reply  18 Dec 2003, Paul Knowles, , Poll about default indent style  
Hi Stefan,

> once and forever, I am considering using the "indent" program which comes 
> with every linux installation. Running indent regularly on all our code 
> ensures a consistent look.

I think this can be called a Good Thing.

> The "-kr" style does the standard K&R style, 
> but used tabs (which is not good), and does a 4-column 
> indention which is I think too much. So I would propose 
> following flags:
>        indent -kr -nut -i2 -di8 -bad <filename.c>

(some of this is a repeat from an earlier mail to SR):
You might also want a -l90 for a longer line length than 75
characters.  K&R style with indentation from 5 to 8 spaces
is a good indicator of complexity: as soon as 40 characters
of code wind up unreadably squashed to the right of the
screen, you have to refactor to have fewer indentation
levels.  This means you wind up rolling up the inner parts
of deeply nested conditionals or loops as separate
functions, making the whole code easier to understand.

I think that setting -i2 is ``going around the problem'' 
of deep nesting.  If you really need to keep the indentation 
tabs less than 4 (8 is ideal) because your code is falling off the 
right edge of the screen, you are indented too deeply.  Why do 
I say that?  There is the famous ``7±1'' idea that you can hold
in your head only 7 ideas (give or take one) at any time.  I'm not 
that smart and I top out at about 5:  So for example, a conditional 
in a loop  in a conditional in a switch is about as deep a level 
of nesting as  I can easily understand (remember that I also have 
to hold the line I'm working on as well): that's 4 levels, plus one for the
function itself and we are at 40 characters away from the right edge
of the screen using -i8 and have some 40 characters available for writing code
(how often is a line of code really longer than about 40 characters?).
On top of that, the indentation is easily seen so you know immediately 
whether you are at the upper conditional or the inner conditional.  A -i2
just doesn't make the difference big enough.  -i5 is a happy balance 
with enough visual clue as to the indentation level, but leaves you 50
to 60 characters for the code line itself.

However, if you are indenting very deeply, then the poor reader can't hold
on to the context: there are more than 6 or 7 things to keep in mind.
In those cases, roll up the inner levels as a separate function and 
call it that way. The inner complexity of the nested statements gets 
nicely abstracted and then dumb people like me can understand what 
you are doing.
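
To illustrate the rolling-up with a toy C example (not from any real code - 
run_active, data, n and threshold are stand-ins):

   /* before: a conditional in a loop in a conditional -
      the real work is squashed against the right edge  */
   if (run_active) {
      for (i = 0; i < n; i++) {
         if (data[i] > threshold) {
            /* ...many lines of real work... */
         }
      }
   }

   /* after: the inner work gets a name and starts back at the left margin */
   static void handle_hit(int value)
   {
      /* ...the same many lines of real work... */
   }

   if (run_active)
      for (i = 0; i < n; i++)
         if (data[i] > threshold)
            handle_hit(data[i]);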

So, in brief: indent is a good idea, and -in with n>=4 will be best.
I don't think -i2 will lend itself to making the code so much easier 
to read.

thanks for listening.
.p.
Entry  07 Aug 2019, Paolo Baesso, Bug Report, ROOTANA bug? 
Hi,

I posted on the ROOTANA elog but there seems to be little activity there...

Could someone confirm if this is a bug?
https://midas.triumf.ca/elog/Rootana/14

Another user replied that they are encountering the same issue, so I think it is unlikely it is just our installation.

While ROOTANA is unusable for us, I tried to use the example Frontend and Analyzer (under the Experiment source folder). The analyzer does not seem to do much though. A root file is produced but nothing is placed into it. Is that normal?

Any help would be welcome.
Entry  21 May 2024, Nikolay, Bug Report, experiment from midas/examples analyzer.jpg
There are 2 bugs in midas/examples/experiment:

1) In the frontend, a bank named "PRDC" is created for the scaler event. But the analyzer 
module scaler.cxx searches for a bank named "SCLR" in the same event.

2) mana.cxx, linked from analyzer.cxx, reports: Invalid name "/Analyzer/Tests/Always 
true/Rate [Hz]" passed to db_create_key: should not contain "[". 
It looks like the ODB doesn't like the '[' and ']' characters.
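
A sketch of the one-line fix on the analyzer side, keeping the frontend's "PRDC" name 
(the alternative is to rename the bank in the frontend):

   /* in scaler.cxx: look up the bank under the name the frontend actually creates */
   DWORD *pdata;

   if (bk_locate(pevent, "PRDC", &pdata) == 0)   /* was "SCLR" */
      return 1;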
    Reply  02 May 2023, Niklaus Berger, Forum, Problem with running midas odbxx frontends on a remote machine using the -h option 
Thanks for all the helpful hints. When we finally managed to evade all the timeouts and attach the debugger at just the right moment, we found that we get a segfault in mserver at L827:
   case RPC_DB_COPY_XML:
      status = db_copy_xml(CHNDLE(0), CHNDLE(1), CSTRING(2), CPINT(3), CBOOL(4));
Some printf debugging then pointed us to the fact that the culprit is the pointer de-referencing in CBOOL(4). This in turn can be traced back to mrpc.cxx L282 ff, where the line with the arrow was missing:
  {RPC_DB_COPY_XML, "db_copy_xml",
    {{TID_INT32, RPC_IN},
     {TID_INT32, RPC_IN},
     {TID_ARRAY, RPC_OUT | RPC_VARARRAY},
     {TID_INT32, RPC_IN | RPC_OUT},
 ->  {TID_BOOL, RPC_IN},
     {0}}},

If we put that in, the mserver process completes peacefully and we get a segfault in the client ("Wrong key type in XML file") which we will attempt to debug next. Shall I create a pull request for the additional RPC argument or will you just fix this on the fly?
    Reply  02 May 2023, Niklaus Berger, Forum, Problem with running midas odbxx frontends on a remote machine using the -h option 
And now we also fixed the client segfault, odb.cxx L8992 also needs to know about the header:
if (rpc_is_remote())
   return rpc_call(RPC_DB_COPY_XML, hDB, hKey, buffer, buffer_size, header);

(last argument was missing before).
Entry  26 Jul 2019, Nik Berger, Bug Report, History/Endianness 
Hi,
I have a bank of floats with slow control values that I store to the history and
ODB. When reading the history, both in the web browser and with mhist, the floats
get read with the wrong endianness; under /equipment/variables in the ODB, however, they
display correctly. The system is an Intel openSUSE Linux box. Any ideas?

Thanks

Nik
Entry  06 Oct 2019, Nik Berger, Bug Report, History data size mismatch 
Logging a list of variables to the history via links in the history ODB subtree,
we get messages as follows at every run start:

19:43:24.009 2019/10/06 [Logger,ERROR] [history_schema.cxx:2676:hs_write_event,ERROR] Event 'System' data size mismatch: expected 412 bytes, got 416 bytes

19:43:24.008 2019/10/06 [Logger,ERROR] [history_schema.cxx:2676:hs_write_event,ERROR] Event 'System' data size mismatch: expected 412 bytes, got 416 bytes

19:43:23.850 2019/10/06 [Logger,ERROR] [history_schema.cxx:455:hs_write_event,ERROR] Event 'System' data size mismatch count: 25, expected 412 bytes, hs_write_event() called with as much as 416 bytes

19:43:23.850 2019/10/06 [Logger,ERROR] [history_schema.cxx:455:hs_write_event,ERROR] Event 'System' data size mismatch count: 25, expected 412 bytes, hs_write_event() called with as much as 416 bytes

The history calculates the size of a record from the sizes of the individual variables (history_schema.cxx, L2666 ff), whereas the ODB delivers the data aligned/padded to the size of the largest value in the record.
In our history, a long list of doubles (64 bit) was followed by three floats (32 bit), leading to a padded response from the ODB, 4 bytes longer than the history expects.
Quick fix: add another 32-bit dummy variable to the history. That gets rid of the error messages...
It should probably be fixed at a deeper level...
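
A toy illustration of the padding effect, using a C struct as a stand-in for the record 
layout (my sketch, not MIDAS code):

   #include <stdio.h>

   struct record {
      double d[3];   /* doubles first ...                */
      float  f[3];   /* ... then an odd number of floats */
   };

   int main(void)
   {
      /* what the history expects: the plain sum of the variable sizes */
      unsigned raw = 3 * sizeof(double) + 3 * sizeof(float);

      /* what arrives: padded up to the 8-byte alignment of the doubles */
      printf("raw=%u padded=%u\n", raw, (unsigned) sizeof(struct record));  /* 36 vs 40 */
      return 0;
   }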
    Reply  10 Oct 2019, Nik Berger, Bug Report, History data size mismatch 
>I wonder why do you this via ODB links. The "standard" way of writing to the history should be to create events for an equipment and flag this equipment as being written to the
>history. All variables under /Equipment/<name>/Variables then automatically go into the history and you don't have to worry about ODB links. Only variables not fitting the
>equipment/variables scheme should be dealt with via ODB links, like variables under equipment/statistics or parameters in another ODB tree. In a typical midas experiment, only
>very few variables typically go into the 'System' event. This is however probably not a solution to your problem. If you have a similar structure (doubles plus an odd number of floats)
>under 'variables', you might get the same error.

>
> In our history, a long list of doubles (64 bit) was followed by three floats (32 bit)
>

We do this in the MuX DAQ and mix things that come directly from MIDAS (the MIDAS trigger rate), 
things from the analyzer (rates in the self-triggering detectors), and some temperatures from yet 
another source. Yes, we could have kept these apart, and yes, in this case a double would also work 
(and not break things), but a bug is a bug... I can think of sensible use cases where doubles and 
ints are mixed, and I also know quite a few areas where it makes sense to use floats...

Nik
Entry  10 Jun 2025, Nik Berger, Bug Report, History variables with leading spaces 
By accident we had history variables with leading spaces. The history schema check then decides that this is a new variable (the leading space is not read from the history file) and starts a new file. We found this because run starts became slow due to the many, many history files created. It would be nice to get an error if one has a malformed variable name like this.

How to reproduce: Try to put a variable with a leading space in the name into the history, repeatedly start runs.
Suggested fix: Produce an error if a history variable has a leading space.
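
Something along these lines in the schema code would do it (a sketch only - the placement 
near hs_define_event(), the std::string variable name and the status code are my guesses, 
not the actual MIDAS code):

   /* reject malformed history variable names instead of silently
      starting a new history file on every run                    */
   if (!var_name.empty() && (var_name.front() == ' ' || var_name.back() == ' ')) {
      cm_msg(MERROR, "hs_define_event",
             "history variable name \"%s\" has leading or trailing spaces",
             var_name.c_str());
      return HS_FILE_ERROR;
   }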