23 Jun 2023, Stefan Ritt, Bug Report, deferred stop transition
|
> I'm in the same situation?
Yepp.
|
26 Jun 2023, Gennaro Tortone, Bug Report, deferred stop transition
|
> Deferred transitions were only implemented with a single instance of a program deferring the
> transition. To have several instances, MIDAS probably needs to be extended. Certainly this
> was never tested, so it's not a surprise that we get a segmentation fault.
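
For reference, this is how a single client registers a deferred transition. A minimal sketch, assuming the standard cm_register_deferred_transition() API; the cycle_done flag is a hypothetical stand-in for the frontend's own state.

#include "midas.h"

// hypothetical flag, set by the readout loop once the current cycle is done
static BOOL cycle_done = FALSE;

// Polled by MIDAS after a stop has been requested; "first" is TRUE on the
// first call. Returning TRUE lets the transition proceed.
static BOOL wait_end_of_cycle(INT run_number, BOOL first)
{
   if (first)
      cm_msg(MINFO, "wait_end_of_cycle", "deferring stop of run %d", run_number);
   return cycle_done;
}

INT frontend_init()
{
   // as discussed above, only one client can safely defer a given transition
   cm_register_deferred_transition(TR_STOP, wait_end_of_cycle);
   return SUCCESS;
}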
|
26 Jun 2023, Stefan Ritt, Bug Report, deferred stop transition
|
> so, it seems that the issue is not related to different 'instances' of the same frontend but
> that *at most* one frontend on the whole MIDAS server can handle deferred transitions...
>
|
06 Oct 2023, Konstantin Olchanski, Info, default midas history switched to "FILE" and "PerVariable" history
|
We are very happy with the "FILE" implementation of MIDAS history and it is time
to make it the default for new experiments. This history driver works best if
"per variable" history is alos enabled. (SQL history already only works in "per-
|
23 Sep 2015, Peter Kravtsov, Forum, db_paste_node error in offline analyzer
|
I have a problem with using analyzer offline.
I'm trying to do it this way:
|
25 Sep 2015, Konstantin Olchanski, Forum, db_paste_node error in offline analyzer
|
> I have a problem with using analyzer offline.
...
> [Analyzer,INFO] Set run number 20 in ODB
|
13 Oct 2004, Konstantin Olchanski, Bug Report, db_paste: found string exceeding MAX_STRING_LENGTH
|
I am updating TWIST to the latest MIDAS and when I load a saved .odb file, I get
these messages. Their text ought to say where and what strings it does not like.
K.O.
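
A hedged sketch of the kind of message this is asking for; key_name and string_length stand in for db_paste()'s actual parser state:

#include "midas.h"

// Report which key and which string tripped the limit, instead of a bare
// "found string exceeding MAX_STRING_LENGTH".
static void report_long_string(const char *key_name, int string_length)
{
   if (string_length >= MAX_STRING_LENGTH)
      cm_msg(MERROR, "db_paste",
             "string of length %d at key \"%s\" exceeds MAX_STRING_LENGTH (%d)",
             string_length, key_name, MAX_STRING_LENGTH);
}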
|
13 Oct 2004, Stefan Ritt, Bug Report, db_paste: found string exceeding MAX_STRING_LENGTH
|
Can you attach
/twist/data_onl/current/run17548.odb
|
09 Dec 2003, Paul Knowles, , db_close_record non-local/non-return
|
Hi All,
|
12 Dec 2003, Stefan Ritt, , db_close_record non-local/non-return
|
Hi Paul,
sorry for my late reply, I had to find some time for debugging your problem.
|
23 Feb 2014, William Page, Forum, db_check_record() for verifying structure of ODB subtree
|
Hi,
I have been trying to use db_check_record() in order to verify that a subtree in the ODB has the correct
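
For reference, the usual calling pattern is sketched below, assuming the standard db_check_record() signature; the /Equipment/Demo/Settings path and its keys are made up for illustration.

#include "midas.h"

const char *settings_str =
   "Enabled = BOOL : y\n"
   "Threshold = FLOAT : 1.5\n";

void check_settings(HNDLE hDB)
{
   // last argument FALSE: report a mismatch instead of rewriting the subtree
   INT status = db_check_record(hDB, 0, "/Equipment/Demo/Settings",
                                settings_str, FALSE);
   if (status != DB_SUCCESS)
      cm_msg(MERROR, "check_settings",
             "ODB structure mismatch, status %d", status);
}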
|
10 Aug 2020, Ivo Schulthess, Bug Report, data missing in runXXXXXX.mid
|
Dear all
We just started our beam time at ILL and just found yesterday that for certain
|
10 Aug 2020, Stefan Ritt, Bug Report, data missing in runXXXXXX.mid
|
> Dear all
>
> We just started our beam time at ILL and just found yesterday that for certain
|
10 Aug 2020, Ivo Schulthess, Bug Report, data missing in runXXXXXX.mid
|
> > Dear all
> >
> > We just started our beam time at ILL and just found yesterday that for certain
|
10 Aug 2020, Stefan Ritt, Bug Report, data missing in runXXXXXX.mid
|
> Both set to -1. We only have one logging channel. If we run a sequence with a few runs and the
> same settings, sometimes data is in the .mid file and sometimes it is not.
|
10 Aug 2020, Ivo Schulthess, Bug Report, data missing in runXXXXXX.mid
|
> Then I'm running out of ideas. Things I would check:
>
> - Are the file sizes about the same?
|
10 Aug 2020, Stefan Ritt, Bug Report, data missing in runXXXXXX.mid
|
> So I did a quick check. The file size is about the same (322K and 329K). When I dump the .mid I don't see
> the banks. It only prints two lines with "------ Event# 0 ------" and "------ Event# 1 ------" whereas for
> the file with data I get the two banks with all the data. Our online analyzer also fails to see the banks.
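
As an independent cross-check, the banks in a .mid file can be listed with a few lines of code. A hedged sketch, assuming an uncompressed file with 16-bit banks (32-bit banks would need bk_iterate32()); compile and link against the MIDAS library.

#include <stdio.h>
#include <vector>
#include "midas.h"

int main(int argc, char **argv)
{
   if (argc < 2) { printf("usage: listbanks file.mid\n"); return 1; }

   FILE *f = fopen(argv[1], "rb");
   if (!f) { printf("cannot open %s\n", argv[1]); return 1; }

   EVENT_HEADER eh;
   while (fread(&eh, sizeof(eh), 1, f) == 1) {
      std::vector<char> buf(eh.data_size);
      if (fread(buf.data(), 1, eh.data_size, f) != eh.data_size)
         break;

      // skip the begin/end-of-run ODB dumps and message events
      if (eh.event_id == EVENTID_BOR || eh.event_id == EVENTID_EOR ||
          eh.event_id == EVENTID_MESSAGE)
         continue;

      printf("------ Event# %u ------\n", eh.serial_number);

      BANK_HEADER *pbh = (BANK_HEADER *)buf.data();
      BANK *pbk = NULL;
      void *pdata;
      for (;;) {
         int size = bk_iterate(pbh, &pbk, &pdata);
         if (pbk == NULL)
            break;
         printf("  bank %.4s, %d bytes\n", pbk->name, size);
      }
   }
   fclose(f);
   return 0;
}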
|
10 Aug 2020, Ivo Schulthess, Bug Report, data missing in runXXXXXX.mid
|
> with "dump" I meant a true object dump like "hexdump -C run000001.mid". I produced a file with ADC0 and TDC0
> banks (that's the example from the distribution under examples/experiments/frontend.cxx), and I get
>
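
For reference, the bank-filling pattern from that distribution example looks roughly like this (a hedged sketch: the channel count and rand() data are placeholders, not the actual example code).

#include <stdlib.h>
#include "midas.h"

INT read_trigger_event(char *pevent, INT off)
{
   WORD *pdata;

   bk_init(pevent);                                   // start a fresh bank set
   bk_create(pevent, "ADC0", TID_WORD, (void **)&pdata);
   for (int ch = 0; ch < 8; ch++)
      *pdata++ = rand() % 1024;                       // placeholder ADC values
   bk_close(pevent, pdata);                           // finalize the bank size

   return bk_size(pevent);                            // total event size in bytes
}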
|
10 Aug 2020, Stefan Ritt, Bug Report, data missing in runXXXXXX.mid
|
Have you tried longer files? Maybe a few 100 MB or so. Maybe a buffer is not flushed correctly at the end of a run.
|
10 Aug 2020, Ivo Schulthess, Bug Report, data missing in runXXXXXX.mid
|
> Have you tried longer files? Maybe a few 100 MB or so. Maybe a buffer is not flushed correctly at the end of a run.
Yes, I did. This 7 KB of data bank is about the limit. If we go only 1 KB higher, it seems that we save all data. In
|