Midas DAQ System
ID | Date | Author | Topic | Subject
  2547 | 23 Jun 2023 | Stefan Ritt | Bug Report | deferred stop transition
> I'm in the same situation ?

Yepp.
  2548 | 26 Jun 2023 | Gennaro Tortone | Bug Report | deferred stop transition
> Deferred transitions were only implemented with a single instance of a program deferring the 
> transition. To have several instances, MIDAS probably needs to be extended. Certainly this 
> was never tested, so it's not a surprise that we get a segmentation fault.
> 
> Stefan

Hi Stefan,

I copied deferredfe.cxx to mydeferredfe.cxx and changed mydeferredfe.cxx to be a different frontend:

const char *frontend_name = "mydeferredfe";

If I start two "different" frontends:

./deferredfe
./mydeferredfe 

and try to start/stop a run... the result is the same: the frontend status gets messed up on the next 'start':

---

deferredfe:

Started run 332
Event ID:4 - Event#: 0
Event ID:4 - Event#: 1
Event ID:4 - Event#: 2
Event ID:4 - Event#: 3
Event ID:4 - Event#: 4
Event ID:4 - Event#: 5
Event ID:4 - Event#: 6

mydeferredfe:

Started run 332
Transition ignored, Event ID:2 - Event#: 0
Transition ignored, Event ID:2 - Event#: 1
Transition ignored, Event ID:2 - Event#: 2
Transition ignored, Event ID:2 - Event#: 3
Transition ignored, Event ID:2 - Event#: 4
End of cycle... perform transition
Event ID:2 - Event#: 5
End of cycle... perform transition
Event ID:2 - Event#: 6

---

so, it seems that the issue is not related to different 'instances' of the same frontend, but
rather that *at most* one frontend on the whole MIDAS server can handle deferred transitions...

is this the case?

Cheers,
Gennaro 
  2549 | 26 Jun 2023 | Stefan Ritt | Bug Report | deferred stop transition
> so, it seems that the issue is not related to different 'instances' of the same frontend, but
> rather that *at most* one frontend on the whole MIDAS server can handle deferred transitions...
> 
> is this the case?


Correct.

- Stefan
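
For reference, a frontend declares itself as deferring a transition by registering a
callback with cm_register_deferred_transition(). Below is a minimal sketch of that
pattern; the callback name, the end_of_cycle flag and the message text are illustrative
only and are not the actual deferredfe.cxx code:

   #include "midas.h"

   static BOOL end_of_cycle = FALSE;   /* set by the readout loop when a cycle finishes */

   /* Polled by the framework after a stop is requested; return TRUE
      once the transition may actually proceed. */
   BOOL wait_end_cycle(INT transition, BOOL first)
   {
      if (first)
         cm_msg(MINFO, "wait_end_cycle", "Stop requested, deferring until end of cycle");
      return end_of_cycle;
   }

   INT frontend_init()
   {
      /* Register this client as deferring TR_STOP. As this thread concludes,
         only one client in an experiment can defer a given transition. */
      cm_register_deferred_transition(TR_STOP, wait_end_cycle);
      return SUCCESS;
   }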
  2617 | 06 Oct 2023 | Konstantin Olchanski | Info | default midas history switched to "FILE" and "PerVariable" history
We are very happy with the "FILE" implementation of MIDAS history and it is time
to make it the default for new experiments. This history driver works best if
"per-variable" history is also enabled. (SQL history already works only in
"per-variable" mode.) Commit 676051b3024965bd8a04da112965a141d5f61a39.
K.O.
  1115 | 23 Sep 2015 | Peter Kravtsov | Forum | db_paste_node error in offline analyzer
I have a problem with using analyzer offline.
I'm trying to do it this way:

[lkst@pklinux online]$ kill_daq.sh
[lkst@pklinux online]$ odbedit -c "load data/run00020.odb"
[ODBEdit,INFO] Reloading RPC hosts access control list via hotlink callback
[lkst@pklinux online]$ analyzer -i data/run00020.mid -o test20.root
Root server listening on port 9090...
Running analyzer offline. Stop with "!"
[Analyzer,INFO] Set run number 20 in ODB
analyzer: src/odb.c:6631: db_paste_node: Assertion `status == 1' failed.
Load ODB from run 20...Aborted
[lkst@pklinux online]$

I always get this "Assertion `status == 1' failed." error, even
if I try it with the examples/experiment included in the MIDAS distribution.
After this attempt I cannot run any MIDAS program, even odbedit reports an error,
until I delete the ODB file in /dev/shm/ and load the last ODB dump with odbedit.
What am I doing wrong, and how can this be fixed?
  1117 | 25 Sep 2015 | Konstantin Olchanski | Forum | db_paste_node error in offline analyzer
> I have a problem with using analyzer offline.
...
> [Analyzer,INFO] Set run number 20 in ODB
> analyzer: src/odb.c:6631: db_paste_node: Assertion `status == 1' failed.

The problem has been traced to an unexpected interaction between code in mana.c and
aggressive error checking in the ODB XML decoder (used to load the odb.xml dump from the
MIDAS data file into the ODB).

We are working on a solution.

K.O.
  156 | 13 Oct 2004 | Konstantin Olchanski | Bug Report | db_paste: found string exceeding MAX_STRING_LENGTH
I am updating TWIST to the latest MIDAS, and when I load a saved .odb file I get
the messages below. Their text ought to say where the problem is and which strings it does not like.
K.O.



[twistonl@midtwist ~/online]$ odbedit
Please define environment variable 'MIDASSYS'
pointing to the midas installation directory.
[local:twist:S]/>load /twist/data_onl/current/run17548.odb
[odb.c:5600:db_paste] found string exceeding MAX_STRING_LENGTH
[odb.c:5600:db_paste] found string exceeding MAX_STRING_LENGTH
[odb.c:5600:db_paste] found string exceeding MAX_STRING_LENGTH
  157 | 13 Oct 2004 | Stefan Ritt | Bug Report | db_paste: found string exceeding MAX_STRING_LENGTH
Can you attach 

/twist/data_onl/current/run17548.odb

so I can reproduce the problem?
  80 | 09 Dec 2003 | Paul Knowles | db_close_record non-local/non-return
Hi All,

I have found a weird one:

The following code executes on the frontend machine in the
frontend_exit() routine, and connects to the odb running on
another separate machine:
...
     cm_msg(MINFO,__func__, "line %d", __LINE__);

     cm_get_experiment_database(&hdb, NULL);

     cm_msg(MINFO,__func__, "line %d", __LINE__);
     status = db_find_key(hdb, 0, "/Experiment/Run Parameters", &hkey);
     cm_msg(MINFO,__func__, "line %d, hkey=%d, status=%d",
            __LINE__, hkey, status);
     checkstat("db_find_key returned status %d", status);
     cm_msg(MINFO,__func__, "line %d", __LINE__);
     status = db_close_record(hdb, hkey);

     /* NOTREACHED!! the above call to db_close_record
        doesn't return!
      */
     cm_msg(MINFO,__func__, "line %d, status=%d", __LINE__, status);
     checkstat("db_close_record returned status %d", status);

checkstat is a macro that does the following:

#define checkstat(format, arg...)                    \
   do {                                              \
      if (status != DB_SUCCESS) {                    \
         cm_msg(MERROR, __func__, format, ## arg);   \
         return FE_ERR_ODB;                          \
      }                                              \
   } while (0)

The key exists, and the status of the search is 1
(i.e., DB_SUCCESS), and the rest of the code tries to run.  What gets
really weird is that db_close_record _doesn't_ _return_.
The code following the NOTREACHED comment just doesn't get
called.  I get the message from the __LINE__ just in front
of the call, but not the message afterwards (both cm_msg and printf
were tried).  Somehow db_close_record is causing a non-local
exit or signal or something.  No error message is printed and the
frontend continues to exit with exit code 0.  But, since the rest
of my frontend_exit/ODB closing doesn't happen, the ODB is left in
a lost state requiring a cleanup.  If I comment out the calls to
db_close_record, the rest of my frontend_exit runs normally
and the cm_disconnect_experiment() in mfe.c eventually closes my
open records correctly (I expect, anyway); this is the present
workaround I am using.  The terror I have is that several of my
hotlinked callback routines will call the close_record routine
when resetting illegal values.  No end of hilarity will result there...

I was using the same code in the frontend under 1.9.2 and
have only recently upgraded to the 1.9.3-? tarball from PAA;
there were no problems using the 1.9.2 code, so this is a 1.9.3
issue.

I have localized the weirdness to what I think is the RPC interface.
Running the nullfrontend (no CAMAC access) on the same machine that
hosts the ODB, I can make the problem appear and disappear in the
following way (the ODB is local on machine ``monet''):

nullfe -h monet -e acqmonad     : db_close_record will get lost

nullfe -e acqmonad              : db_close_record works as expected.

I've also tried the patch for the 256-byte ODB string bug, since
many of the open records have strings of that length, but that isn't
it.  The only substantial-looking change to mserver from 1.9.2 to 1.9.3
is the SIGPIPE ignore, and that doesn't look like a good candidate either.
Could it be that some of the
   #ifdef LOCAL_ROUTINES
blocks that got moved about in odb.c and others
are causing the remote call to get confused?

Clearly the answer is to just use stable and happy 1.9.2, but the 
people for whom I am working now really want to use ROOT for
an analyzer...


cheers,
.p.

Paul Knowles.                   phone: 41 26 300 90 64
email: Paul.Knowles@unifr.ch      Fax: 41 26 300 97 47
finger me at pexppc33.unifr.ch for more contact information
  81 | 12 Dec 2003 | Stefan Ritt | db_close_record non-local/non-return
Hi Paul,

sorry for my late reply, I had to find some time for debugging your problem.
Thank you very much for the detailed description of the problem; I wish all
bug reports were this elaborate!

You were right that there was a bug in the RPC system. The function 
db_remove_open_record() got a new parameter recently, which was not changed 
in the RPC call, and caused the mserver side to crash on any 
db_close_record() call.

I fixed it and the update is under CVS (http://midas.psi.ch/cgi-bin/cvsweb/midas/src/).
Since you need to update many files, I wonder if I should enable anonymous CVS
read access. Does anybody know how to set this up using "ssh" as the protocol
(via CVS_RSH=ssh)?

Please note that db_close_record() is not necessary as 
cm_disconnect_experiment() takes care of this, but having it there does not 
hurt.
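
To illustrate that last point, here is a minimal sketch of a frontend_exit() along the
lines of Paul's code; the key path is the one from his report, and whether or not the
explicit db_close_record() is present, cm_disconnect_experiment() (called by the
framework afterwards) closes any remaining open records:

   #include "midas.h"

   INT frontend_exit()
   {
      HNDLE hDB, hKey;

      cm_get_experiment_database(&hDB, NULL);

      if (db_find_key(hDB, 0, "/Experiment/Run Parameters", &hKey) == DB_SUCCESS)
         db_close_record(hDB, hKey);   /* optional; harmless once the RPC fix is in */

      /* cm_disconnect_experiment() closes all open records anyway */
      return SUCCESS;
   }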
  968 | 23 Feb 2014 | William Page | Forum | db_check_record() for verifying structure of ODB subtree
Hi,

I have been trying to use db_check_record() in order to verify that a subtree in the ODB has the correct 
variables, variable order, and overall size. I'm going off the documentation 
(https://midas.psi.ch/htmldoc/group__odbfunctionc.html) and am using a string to compare against the ODB 
structure.  Since the string format is not specified for db_check_record(), I'm formatting my string 
according to the db_create_record() example.

Instead of db_check_record() checking the entire ODB subtree against all the variables represented in the 
string, I'm finding that only the first variable is checked.  The later variables in the string can be 
misspelled, out of order, or nonexistent, and db_check_record() will still return 1.

Am I using db_check_record incorrectly?  

Thank you for any help with this issue.


I also believe that some of the documentation for db_check_record is outdated.  For example, init_string 
is referenced in the documentation but isn't part of the function definition.
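
For reference, the intended usage looks roughly like the sketch below; the subtree path
and variable names are invented for illustration, and as reported above only the first
variable appears to actually get checked:

   #include "midas.h"

   /* ASCII record string in the same format as used for db_create_record() */
   static const char *settings_str =
      "Nchannels = INT : 8\n"
      "Threshold = FLOAT : 0.5\n"
      "Comment = STRING : [32] none\n";

   INT check_my_settings(HNDLE hDB)
   {
      /* With correct = FALSE a mismatch is only reported;
         with TRUE the subtree would be rewritten to match the string. */
      return db_check_record(hDB, 0, "/Equipment/MyFE/Settings", settings_str, FALSE);
   }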
  1966 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid
Dear all

We just started our beam time at ILL and found yesterday that for certain 
settings of our detector the data is not saved into the .mid files. Running "mdump 
-l 10" online we see the data coming in as it should. Nevertheless, if we run 
"mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are 
missing. Any ideas where the data could get lost?

Thanks in advance,
Ivo
  1967 | 10 Aug 2020 | Stefan Ritt | Bug Report | data missing in runXXXXXX.mid
> Dear all
> 
> We just started our beam time at ILL and found yesterday that for certain 
> settings of our detector the data is not saved into the .mid files. Running "mdump 
> -l 10" online we see the data coming in as it should. Nevertheless, if we run 
> "mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are 
> missing. Any ideas where the data could get lost?
> 
> Thanks in advance,
> Ivo

Have you checked 

/Logger/Channels/0/Settings/Event ID = -1
/Logger/Channels/0/Settings/Trigger mask = -1

If these settings are not -1, they filter the data stream for certain events and trigger 
masks.

Stefan
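
A small sketch of how those two settings could also be read back programmatically with
db_get_value(); the key paths are the ones listed above, while the function name
check_logger_filter is invented for illustration:

   #include <stdio.h>
   #include "midas.h"

   void check_logger_filter(HNDLE hDB)
   {
      INT event_id = 0, trigger_mask = 0;
      INT size = sizeof(INT);

      db_get_value(hDB, 0, "/Logger/Channels/0/Settings/Event ID",
                   &event_id, &size, TID_INT, FALSE);
      size = sizeof(INT);
      db_get_value(hDB, 0, "/Logger/Channels/0/Settings/Trigger mask",
                   &trigger_mask, &size, TID_INT, FALSE);

      /* -1 / -1 means "write every event"; anything else filters the stream */
      printf("Event ID = %d, Trigger mask = %d\n", event_id, trigger_mask);
   }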
  1968 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid
> > Dear all
> > 
> > We just started our beam time at ILL and found yesterday that for certain 
> > settings of our detector the data is not saved into the .mid files. Running "mdump 
> > -l 10" online we see the data coming in as it should. Nevertheless, if we run 
> > "mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are 
> > missing. Any ideas where the data could get lost?
> > 
> > Thanks in advance,
> > Ivo
> 
> Have you checked 
> 
> /Logger/Channels/0/Settings/Event ID = -1
> /Logger/Channels/0/Settings/Trigger mask = -1
> 
> If these settings are not -1, they filter the data stream for certain events and trigger 
> masks.
> 
> Stefan

Good morning Stefan

Both set to -1. We only have one logging channel. If we run a sequence with a few runs and the 
same settings, sometimes data is in the .mid file and sometimes it is not.

Best,
Ivo 
  1969 | 10 Aug 2020 | Stefan Ritt | Bug Report | data missing in runXXXXXX.mid
> Both set to -1. We only have one logging channel. If we run a sequence with a few runs and the 
> same settings, sometimes data is in the .mid file and sometimes it is not.

Then I'm running out of ideas. Things I would check:

- Are the file sizes about the same? 

- When you dump the .mid file, do you see your bank names? 

This would tell you if the events are really missing or if mdump just doesn't find them.

But I guess without being able to debug the system at ILL I cannot be of any more help. You are the 
first one reporting such a problem, so it must have to do with your local setup.

Stefan
  1970 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid
> Then I'm running out of ideas. Things I would check:
> 
> - Are the file sizes about the same? 
> 
> - When you dump the .mid file, do you see your bank names? 
> 
> This would tell you if the events are really missing or if mdump just doesn't find them.
> 
> But I guess without being able to debug the system at ILL I cannot be of any more help. You are the 
> first one reporting such a problem, so it must have to do with your local setup.
> 
> Stefan

So I did a quick check. The file size is about the same (322K and 329K). When I dump the .mid file I don't see 
the banks. It only prints two lines, "------ Event# 0 ------" and "------ Event# 1 ------", whereas for 
the file with data I get the two banks with all the data. Our online analyzer also fails to see the banks. 
Is there another way to check what is in the .mid file?

Best,
Ivo
  1971 | 10 Aug 2020 | Stefan Ritt | Bug Report | data missing in runXXXXXX.mid
> So I did a quick check. The file size is about the same (322K and 329K). When I dump the .mid I don't see 
> the banks. It only prints two lines with "------ Event# 0 ------" and "------ Event# 1 ------" whereas for 
> the file with data I get the two banks with all the data. Our online analyzer also fails to see the banks. 
> Is there another way to check what is in the .mid file?

with "dump" I meant a true object dump like "hexdump -C run000001.mid". I produced a file with ADC0 and TDC0 
banks (that's the example from the distribution under exampels/experiments/frontend.cxx), and I get

....
00024220  01 00 00 00 41 44 43 30  04 00 08 00 eb 06 35 04  |....ADC0......5.|
00024230  31 09 4f 06 54 44 43 30  04 00 08 00 93 04 fb 07  |1.O.TDC0........|
00024240  5c 09 88 0b 01 00 00 00  01 00 00 00 2a 0b 31 5f  |\...........*.1_|
00024250  28 00 00 00 20 00 00 00  01 00 00 00 41 44 43 30  |(... .......ADC0|
00024260  04 00 08 00 c3 09 24 05  85 05 f3 06 54 44 43 30  |......$.....TDC0|
00024270  04 00 08 00 88 08 2d 03  3b 0d d6 02 01 00 00 00  |......-.;.......|
00024280  02 00 00 00 2a 0b 31 5f  28 00 00 00 20 00 00 00  |....*.1_(... ...|
00024290  01 00 00 00 41 44 43 30  04 00 08 00 a5 0a 69 09  |....ADC0......i.|

where you clearly see the ADC0 and TDC0 banks.

Stefan
  1972 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid
> with "dump" I meant a true object dump like "hexdump -C run000001.mid". I produced a file with ADC0 and TDC0 
> banks (that's the example from the distribution under exampels/experiments/frontend.cxx), and I get
> 
> ....
> 00024220  01 00 00 00 41 44 43 30  04 00 08 00 eb 06 35 04  |....ADC0......5.|
> 00024230  31 09 4f 06 54 44 43 30  04 00 08 00 93 04 fb 07  |1.O.TDC0........|
> 00024240  5c 09 88 0b 01 00 00 00  01 00 00 00 2a 0b 31 5f  |\...........*.1_|
> 00024250  28 00 00 00 20 00 00 00  01 00 00 00 41 44 43 30  |(... .......ADC0|
> 00024260  04 00 08 00 c3 09 24 05  85 05 f3 06 54 44 43 30  |......$.....TDC0|
> 00024270  04 00 08 00 88 08 2d 03  3b 0d d6 02 01 00 00 00  |......-.;.......|
> 00024280  02 00 00 00 2a 0b 31 5f  28 00 00 00 20 00 00 00  |....*.1_(... ...|
> 00024290  01 00 00 00 41 44 43 30  04 00 08 00 a5 0a 69 09  |....ADC0......i.|
> 
> where you clearly see the ADC0 and TDC0 banks.
> 
> Stefan

So at least I learned something new. I tried the hexdump and the banks are simply not present in the .mid file. I 
only have the ODB inside the file. The 7K difference in size is actually just about what I expect the data 
to be (1792 x 4 bytes).

Best, Ivo
  1973 | 10 Aug 2020 | Stefan Ritt | Bug Report | data missing in runXXXXXX.mid
Have you tried longer files? Maybe a few 100 MB or so. Maybe a buffer is not flushed correctly at the end of a run.
  1975 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid
> Have you tried longer files? Maybe a few 100 MB or so. Maybe a buffer is not flushed correctly at the end of a run.

Yes, I did. This 7 KB of the data bank seems to be about the limit; if we go only 1 KB higher, it seems that we save all 
data. In our specific case this corresponds to the number of time bins (256 pixels with 7 time bins results in data loss, 
with 8 time bins it seems to be okay; the data type is DWORD). 

Of course, a workaround for us is to save at least 8 time bins and throw 7 of them away later on. Nevertheless, since we 
are only in the commissioning phase now this is okay; I would just like to avoid data loss in the data-taking phase of the 
experiment, so knowing where the problem originates could help. 

I did another test with another FE running that produces a lot of data. The behavior is the same though: if the bank size 
is less than about 8 KB, the bank is not saved anymore. But this is probably the expected behavior anyway, since it is a 
different FE that produces the data. 

So if it is coming from the buffer, is there something I could change to test or solve the problem?

Best, Ivo