Entry  04 Mar 2005, Stefan Ritt, Info, Real-Time 2005 Conference in Stockholm 
Dear Midas users,

may I kindly invite you to present your work at the Real-Time 2005 Conference in
Stockholm, June 4-10. The conference deals with all kinds of real-time
applications such as DAQ and control systems. It is a small conference with no
parallel sessions, and with two interesting short courses. The deadline has been
extended until March 13, 2005. If you are interested, please register under

http://www.physto.se/RT2005/

Here is the official letter from the chairman:

=====================================================================
               14th IEEE-NPSS Real Time Conference 2005
                  Stockholm, Sweden, 4-10 June, 2005
              Conference web site: www.physto.se/RT2005/

**********************************************************************
*                                                                    *
*        ABSTRACT SUBMISSION PROLONGED! DEADLINE: March 13, 2005     *
*                                                                    *
**********************************************************************

Considering that the Real Time conference is a highly meritorious and
multidisciplinary conference with purely plenary sessions and that the
accepted papers may be submitted to a special issue of the IEEE
Transactions on Nuclear Science we would like to give more people the
opportunity to participate. Therefore we have organized the program so
that there is now more time for talks than at RT2003, and we are
extending the abstract submission deadline to March 13. We strongly encourage
you to participate!

Submit your abstract and a summary through the conference web site
"Abstract submission" link. Please, make sure that your colleagues know
about the conference and invite them.

I would also like to take this opportunity to announce the two short
courses we have organized for Sunday 5/6:

- "Gigabit Networking for Data Acquisition Systems - A practical
introduction"
  Artur Barczyk, CERN

- "System On Programmable Chip - A design tutorial"
  Marco Riccioli, Xilinx

Please find the abstracts and more information about the conference on
www.physto.se/RT2005/

Thank you if you have already submitted an abstract.

Richard Jacobsson
General Chairman, RT2005 Conference

Email: RT2005@cern.ch
Phone: +41-22-767 36 19
Fax:   +41-22-767 94 25
CERN Meyrin
1211 Geneva 23
Switzerland
Entry  07 May 2009, Konstantin Olchanski, Info, RPC.SHM gyration 
When using remote midas clients with mserver, you may have noticed the zero-size .RPC.SHM files 
these clients create in the directory where you run them. These files are associated with the semaphore 
created by the midas rpc layer (rpc_call) to synchronize rpc calls between multiple threads. This 
semaphore is always created, even for single-threaded midas applications. Also normally midas 
semaphore files are created in the midas experiment directory specified in exptab (same place as 
.ODB.SHM), but for remote clients, we do not know that location until we start making rpc calls, so the 
semaphore file is created in the current directory (and it is on a remote machine anyway, so this 
location may not be visible locally).

There are 2 problems with these semaphores:
1) in multiple experiments, we have observed the RPC.SHM semaphore stuck in a locked state, 
requiring manual cleanup (ipcrm -s xxx). So far, I have failed to duplicate this lockup using test 
programs and test experiments. The code appears to be written correctly, automatically unlocking the
semaphore when the program exits or is killed.
2) RPC.SHM is created as a global shared semaphore so it synchronizes rpc calls not just for all threads 
inside one application, but across all threads in all applications (excessive locking - separate 
applications are connected to separate mservers and do not need this locking); but only for applications 
that run from the same current directory - RPC.SHM files in different directories are "connected" to 
different semaphores.

To try to fix this, I implemented "private semaphores" in system.c and made rpc_call() use them.

This introduced a major bug - a semaphore leak - quickly using up all sysv semaphores (see sysctl 
kernel.sem).

The code has now been reverted back to using RPC.SHM as described above.

The "bad" svn revisions start with rev 4472; the problem is fixed in rev 4480.

If you use remote midas clients and have one of these bad revisions, either update midas.c to rev 4480 
or apply this patch to midas.c::rpc_call(), changing

  ss_mutex_create("", &_mutex_rpc);

to

  ss_mutex_create("RPC", &_mutex_rpc);

Apologies for any inconvenience caused by this problem
K.O.
    Reply  02 Jun 2009, Konstantin Olchanski, Info, RPC.SHM gyration 
> When using remote midas clients with mserver, you may have noticed the zero-size .RPC.SHM files 
> these clients create in the directory where you run them. These files are associated with the semaphore 
> created by the midas rpc layer (rpc_call) to synchronize rpc calls between multiple threads. This 
> semaphore is always created, even for single-threaded midas applications. Also normally midas 
> semaphore files are created in the midas experiment directory specified in exptab (same place as 
> .ODB.SHM), but for remote clients, we do not know that location until we start making rpc calls, so the 
> semaphore file is created in the current directory (and it is on a remote machine anyway, so this 
> location may not be visible locally).
> 
> There are 2 problems with these semaphores:

A 3rd problem surfaced - on SL5 Linux, the global limit is 128 or so semaphores and on at least one heavily used machine that hosts multiple 
experiments we simply run out of semaphores.

For "normal" semaphores, their number is fixed to about 5 per experiment (one for each shared memory buffer), but the number of RPC 
semaphores is not bounded by the number of experiments or even by the number of user accounts - they are created (and never deleted) for 
each experiment, for each user that connects to each experiment, for each subdirectory where each user happened to try to start a 
program that connects to each experiment (to reuse the old children's rhyme).

Right now, MIDAS does not have an abstraction for "local multi-thread mutex" (i.e. pthread_mutex & co) and mostly uses global semaphores 
for this task (with interesting coding results, i.e. for multithreaded locking of ODB). Perhaps such an abstraction should be introduced?
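
A minimal sketch of what such an abstraction could look like, assuming pthreads (the ss_pmutex_* names are invented here for illustration and are not existing MIDAS code):

/* Hypothetical sketch of a process-local mutex abstraction - the names are
   invented for illustration and do not exist in MIDAS. Unlike a SysV
   semaphore, this object lives entirely inside the process: nothing is
   registered with the kernel and no .SHM file is created. */

#include <pthread.h>

typedef struct {
   pthread_mutex_t mutex;
} SS_PMUTEX;

int ss_pmutex_create(SS_PMUTEX *m)
{
   pthread_mutexattr_t attr;
   pthread_mutexattr_init(&attr);
   /* recursive, so the same thread may lock it more than once */
   pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
   return pthread_mutex_init(&m->mutex, &attr);
}

int ss_pmutex_lock(SS_PMUTEX *m)    { return pthread_mutex_lock(&m->mutex); }
int ss_pmutex_unlock(SS_PMUTEX *m)  { return pthread_mutex_unlock(&m->mutex); }
int ss_pmutex_delete(SS_PMUTEX *m)  { return pthread_mutex_destroy(&m->mutex); }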

K.O.
    Reply  04 Jun 2009, Stefan Ritt, Info, RPC.SHM gyration 
> Right now, MIDAS does not have an abstraction for "local multi-thread mutex" (i.e. pthread_mutex & co) and mostly uses global semaphores 
> for this task (with interesting coding results, i.e. for multithreaded locking of ODB). Perhaps such an abstraction should be introduced?

Yes. In the old days when I designed the inter-process communication (~1993), there was no such thing as pthread_mutex (only under Windows). 
Now it would be time to implement this, since it will then work under Posix and Windows (I don't know about VxWorks). But that will at least 
allow multi-threaded client applications, which can safely call midas functions through the RPC layer. For local thread safety, all midas 
functions have to be checked and modified if necessary, which is a major work right now, but for remote clients it's rather simple.
    Reply  20 Nov 2009, Konstantin Olchanski, Info, RPC.SHM gyration 
> When using remote midas clients with mserver, you may have noticed the zero-size .RPC.SHM files 
> these clients create in the directory where you run them.

Well, RPC.SHM bites. Please reread the parent message for full details, but in a nutshell, it is a global
semaphore that permits only one midas rpc client to talk to midas at a time (it was intended for local
locking between threads inside one midas application).

I have about 10 remote midas frontends started by ssh all in the same directory, so they all share the same
.RPC.SHM semaphore and do not live through the night - they die from ODB timeouts because of RPC semaphore contention.

In a test version of MIDAS, I disabled the RPC.SHM semaphore, and now my clients live through the night, very
good.

Long term, we should fix this by using application-local mutexes (i.e. pthread_mutex, which also works on MacOS; does
Windows have pthreads yet?).

This will also clean up some of the ODB locking, which currently confuses pid's, tid's, etc. and is completely
broken on MacOS because some of these values are 64-bit and do not fit into the 32-bit data fields in MIDAS
shared memories.

Short term, I can add a flag for enabling and disabling the RPC semaphore from the user application: enabled
by default, but the user can disable it if they do not use threads.

Alternatively, I can disable it by default, then enable it automatically if multiple threads are detected or
if ss_thread_create() is called.

Could also make it an environment variable.
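
For concreteness, the first option (a per-application switch) might look roughly like the following hypothetical sketch - none of these names exist in MIDAS:

/* Hypothetical sketch only - rpc_set_use_mutex() does not exist in MIDAS. */
#include "midas.h"

static BOOL _rpc_use_mutex = TRUE;   /* default: RPC semaphore enabled */

void rpc_set_use_mutex(BOOL flag)
{
   /* a single-threaded remote client would call rpc_set_use_mutex(FALSE)
      before connecting, and rpc_call() would then skip creating and
      locking the RPC semaphore */
   _rpc_use_mutex = flag;
}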

Any preferences?

K.O.
Entry  25 Jun 2022, Joseph McKenna, Bug Report, RPC timeout for manalyzer over network 

In ALPHA, I get RPC timeouts running a (reasonably heavy) analyzer on a remote machine (connected directly via a ~30 meter 10GbE Ethernet cable) after ~5 minutes of running. If I run the analyzer locally, I do not see a timeout...

gdb trace:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff5d35859 in __GI_abort () at abort.c:79
#2  0x00005555555a2a22 in rpc_call (routine_id=11111) at /home/alpha/packages/midas/src/midas.cxx:13866
#3  0x000055555562699d in bm_receive_event_rpc (buffer_handle=buffer_handle@entry=2, buf=buf@entry=0x0, buf_size=buf_size@entry=0x0, ppevent=ppevent@entry=0x0, pvec=pvec@entry=0x7fffffffd700,
    timeout_msec=timeout_msec@entry=100) at /home/alpha/packages/midas/src/midas.cxx:10510
#4  0x0000555555631082 in bm_receive_event_vec (buffer_handle=2, pvec=pvec@entry=0x7fffffffd700, timeout_msec=timeout_msec@entry=100) at /home/alpha/packages/midas/src/midas.cxx:10794
#5  0x0000555555673dbb in TMEventBuffer::ReceiveEvent (this=this@entry=0x555557388b30, e=e@entry=0x7fffffffd700, timeout_msec=timeout_msec@entry=100) at /home/alpha/packages/midas/src/tmfe.cxx:312
#6  0x0000555555607b56 in ReceiveEvent (b=0x555557388b30, e=0x7fffffffd6c0, timeout_msec=100) at /home/alpha/packages/midas/manalyzer/manalyzer.cxx:1411
#7  0x000055555560d8dc in ProcessMidasOnlineTmfe (args=..., progname=<optimized out>, hostname=<optimized out>, exptname=<optimized out>, bufname=<optimized out>, event_id=<optimized out>,
    trigger_mask=<optimized out>, sampling_type_string=<optimized out>, num_analyze=0, writer=<optimized out>, multithread=<optimized out>, profiler=<optimized out>,
    queue_interval_check=<optimized out>) at /home/alpha/packages/midas/manalyzer/manalyzer.cxx:1534
#8  0x000055555560f93b in manalyzer_main (argc=<optimized out>, argv=<optimized out>) at /usr/include/c++/9/bits/basic_string.h:2304
#9  0x00007ffff5d37083 in __libc_start_main (main=0x5555555b1130 <main(int, char**)>, argc=8, argv=0x7fffffffdda8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
    stack_end=0x7fffffffdd98) at ../csu/libc-start.c:308
#10 0x00005555555b184e in _start () at /usr/include/c++/9/bits/stl_vector.h:94

Any suggestions? Many thanks
    Reply  18 Jul 2022, Konstantin Olchanski, Bug Report, RPC timeout for manalyzer over network 
> In ALPHA, I get RPC timeouts running a (reasonably heavy) analyzer on a remote machine (connected directly via a ~30 meter 10Gbe Ethernet cable) after ~5 minutes of running. If I run the analyser locally, I dont not see a timeout...

there is a subtle bug in the mserver. under rare conditions, ss_suspend() will recurse in an unexpected way
and mserver will go to sleep waiting for data from a udp socket (data that will never arrive, so it sleeps forever).
the remote client sees this as an rpc timeout. in my tests (and in ALPHA-g at CERN, as reported by Joseph),
I see this rare condition happen about every 5 minutes. in normal use, this is the first time we have become
aware of this problem; as best I can tell, this bug has been in the mserver since day one.

The fix is commit https://bitbucket.org/tmidas/midas/commits/fbd06ad9d665b1341bd58b0e28d6625877f3cbd0,
applied to develop and
to release/midas-2022-05.

Here is the stack trace that shows the mserver hang/crash (sleep() is the stand-in for the sleep-forever socket read):

(gdb) bt
#0  0x00007f922c53f9e0 in __nanosleep_nocancel () from /lib64/libc.so.6
#1  0x00007f922c53f894 in sleep () from /lib64/libc.so.6
#2  0x0000000000451922 in ss_suspend (millisec=millisec@entry=100, msg=msg@entry=1) at /home/agmini/packages/midas/src/system.cxx:4433
#3  0x0000000000411d53 in bm_wait_for_more_events_locked (pbuf_guard=..., pc=pc@entry=0x7f920639b93c, timeout_msec=timeout_msec@entry=100, 
    unlock_read_cache=unlock_read_cache@entry=1) at /home/agmini/packages/midas/src/midas.cxx:9429
#4  0x00000000004238c3 in bm_fill_read_cache_locked (timeout_msec=100, pbuf_guard=...) at /home/agmini/packages/midas/src/midas.cxx:9003
#5  bm_read_buffer (pbuf=pbuf@entry=0xdf8b50, buffer_handle=buffer_handle@entry=2, bufptr=bufptr@entry=0x0, buf=buf@entry=0x7f9203d75020, 
    buf_size=buf_size@entry=0x7f920639aa20, vecptr=vecptr@entry=0x0, timeout_msec=timeout_msec@entry=100, convert_flags=0, 
    dispatch=dispatch@entry=0) at /home/agmini/packages/midas/src/midas.cxx:10279
#6  0x0000000000424161 in bm_receive_event (buffer_handle=2, destination=0x7f9203d75020, buf_size=0x7f920639aa20, timeout_msec=100)
    at /home/agmini/packages/midas/src/midas.cxx:10649
#7  0x0000000000406ae4 in rpc_server_dispatch (index=11111, prpc_param=0x7ffcad70b7a0) at /home/agmini/packages/midas/progs/mserver.cxx:575
#8  0x000000000041ce9c in rpc_execute (sock=10, buffer=buffer@entry=0xe11570 "g+", convert_flags=0)
    at /home/agmini/packages/midas/src/midas.cxx:15003
#9  0x000000000041d7a5 in rpc_server_receive_rpc (idx=idx@entry=0, sa=0xde6ba0) at /home/agmini/packages/midas/src/midas.cxx:15958
#10 0x0000000000451455 in ss_suspend (millisec=millisec@entry=1000, msg=msg@entry=0) at /home/agmini/packages/midas/src/system.cxx:4575
#11 0x000000000041deb2 in rpc_server_loop () at /home/agmini/packages/midas/src/midas.cxx:15907
#12 0x0000000000405266 in main (argc=9, argv=<optimized out>) at /home/agmini/packages/midas/progs/mserver.cxx:390
(gdb) 

K.O.
Entry  06 Mar 2020, Lars Martin, Forum, RPC error 
I ported a bunch of frontends to C++ and now I'm occasionally getting this RPC 
error message:

http error: readyState: 4, HTTP status: 502 (Proxy Error), batch request: method: 
"db_get_values", params: [object Object], id: 1583456958869 method: "get_alarms", 
params: null, id: 1583456958869 method: "cm_msg_retrieve", params: [object 
Object], id: 1583456958869 method: "cm_msg_retrieve", params: [object Object], 
id: 1583456958869

I'm assuming I'm doing something wrong somewhere, but does this message contain 
information about where to look? Does the ID mean something?
    Reply  08 Mar 2020, Konstantin Olchanski, Forum, RPC error 
I do not see this error, but there was one more report (they did not clearly say what http errors 
they see): https://bitbucket.org/tmidas/midas/issues/209/get-rid-of-mjsonrpc-dialogs-put-it-to-the

To debug this, I need to know: what version of MIDAS, what version of what web browser, what 
computer is mhttpd running on? (so I can go look at the log files).

Also, can you say more about when you see these errors? Every time from every midas web page, or only 
some pages or only when you do something specific (push some button, etc?).

> I ported a bunch of frontends to C++ and now I'm occasionally getting this RPC 
> error message:
> 
> http error: readyState: 4, HTTP status: 502 (Proxy Error), batch request: method: 
> "db_get_values", params: [object Object], id: 1583456958869 method: "get_alarms", 
> params: null, id: 1583456958869 method: "cm_msg_retrieve", params: [object 
> Object], id: 1583456958869 method: "cm_msg_retrieve", params: [object Object], 
> id: 1583456958869
> 
> I'm assuming I'm doing wrong something somewhere, but does this message contain 
> information where to look? Does the ID mean something?

It is unlikely that this error has anything to do with the frontends: usually web page interaction 
goes through: web browser - network - apache httpd - localhost - mhttpd - midas odb.

http error 502 is very generic and does not tell us much about what happened; there may be more 
information in the httpd log files.

the json-rpc request "id" is generated by midas code in the web browser and it currently is a 
timestamp. it is not used for anything, but it is required by the json-rpc standard.

K.O.
Entry  18 Feb 2020, Lukas Gerritzen, Bug Report, RPC Error: ACK or other control chars from "db_get_values" Screenshot_from_2020-02-18_10-46-22.png
Hi,
for some reason we occasionally get JSON errors in the browser when accessing MIDAS. It is then not possible to open a new window or tab, see the attachment. The unexpected token is 0x06, i.e. the ACK (acknowledge) control character.
If this happens, all "alive" sessions remain usable despite the error messages, but they show similar errors:
>RPC Error
>json parser exception: SyntaxError: JSON.parse: bad control character in string literal at line 80 column 30 of the JSON data, method: "db_get_valus", params: [object Object], id: 1582020074098.

Do you have any idea why db_get_values yields ACK or other control characters?

Thanks
    Reply  18 Feb 2020, Stefan Ritt, Bug Report, RPC Error: ACK or other control chars from "db_get_values" 
You are the first one reporting this error, so it must be due to your values in the ODB. Can you track it down to specific ODB contents? If so, can you post it so that I can reproduce your error?

Stefan
    Reply  20 Feb 2020, Konstantin Olchanski, Bug Report, RPC Error: ACK or other control chars from "db_get_values" 
> The unexpected token is \0x6
> RPC Error json parser exception: SyntaxError: JSON.parse: bad control character in string literal at line 80 column 30 of the JSON data, method: "db_get_valus", params: [object Object], id: 1582020074098.

Yes, there is a problem.

Traditionally, midas strings in ODB have no restriction on the content (I think even the '\0' character is permitted).

But web browser javascript strings are supposed to be valid unicode (UTF-16, if I read this right: https://tc39.es/ecma262/#sec-ecmascript-language-types-string-type).

The collision between the two happens when ODB values are json-encoded by midas, then json-decoded by the web browser.

The midas json encoder (mjson.h, mjson.cxx) encodes ODB strings according to JSON rules, but does not ensure that the result is valid UTF-8. (valid UTF-8 is not required, if I read the specs correctly: http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf and https://www.json.org/json-en.html)

The web browser json decoder requires valid UTF-8 and throws exceptions if it does not like something. Different browsers handle this slightly differently, so we have an error handler for this in the mjsonrpc results processor.

What does this mean in practice?

Now that MIDAS is very web oriented, MIDAS strings must be web browser friendly, too:

a) all ODB key names (subdirectory names, link names, etc) must be UTF-8 unicode, and this has been enforced by ODB for some time now.
b) all ODB string values must be valid UTF-8 unicode. This is not enforced right now.

Historically, it was okay to use ODB TID_STRING to store arbitrary binary data, but now, I think, we must deprecate this,
at least for any ODB entries that could be returned to a web browser (which means all of them, after we implement a fully
html+javascript odb editor). For storing binary data, arrays of TID_CHAR, TID_DWORD & co are probably a better match.

The MIDAS and ROOTANA json decoders (the same mjson.h, mjson.cxx) do not care about UTF-8, so ODB dumps
in JSON format are not affected by any of this. (But I am not sure about the JSON decoder in ROOT).

Bottom line:

I think db_validate() should check for invalid UTF-8 in ODB key names and in TID_STRING values
and at least warn the user. (I am not sure if invalid UTF-8 can be fixed automatically). db_create()
should reject key names that are not valid UTF-8 (it already does this, I think). db_set_value(TID_STRING) should
probably reject invalid UTF-8 strings, this needs to be discussed some more.
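
A minimal sketch of the kind of UTF-8 check that db_validate() and db_set_value() could apply to key names and TID_STRING values (illustrative only, not existing MIDAS code; it does not reject overlong encodings or surrogate code points, which a complete check would also have to do):

/* Sketch of a UTF-8 validity check (illustrative, not existing MIDAS code). */
static int is_valid_utf8(const unsigned char* s, int len)
{
   int i = 0;
   while (i < len) {
      unsigned char c = s[i];
      int extra;
      if (c < 0x80)                extra = 0;   /* plain ASCII */
      else if ((c & 0xE0) == 0xC0) extra = 1;   /* 2-byte sequence */
      else if ((c & 0xF0) == 0xE0) extra = 2;   /* 3-byte sequence */
      else if ((c & 0xF8) == 0xF0) extra = 3;   /* 4-byte sequence */
      else return 0;                            /* invalid lead byte */
      if (i + extra >= len)
         return 0;                              /* sequence truncated at end of string */
      for (int j = 1; j <= extra; j++)
         if ((s[i + j] & 0xC0) != 0x80)
            return 0;                           /* bad continuation byte */
      i += extra + 1;
   }
   return 1;
}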

https://bitbucket.org/tmidas/midas/issues/215/everything-in-odb-must-be-valid-utf-8

K.O.
Entry  08 May 2022, Stefan Ritt, Info, RO_STOPPED with triggered events 
We had issues in one of our experiments where people used RO_STOPPED in the 
equipment list together with triggered events (EQ_USER). If events are sent when 
a run is stopped, this leads to many unexpected results, so I added a check in 
the mfe.cxx code which prevents RO_STOPPED (or RO_ALWAYS, which includes 
RO_STOPPED) together with the EQ_TRIGGERED, EQ_INTERRUPT, EQ_MULTITHREAD and EQ_USER 
types of events.
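
For illustration, this is the kind of combination the check complains about, in the layout of the standard example frontend (the equipment name and the read_trigger_event() readout routine are placeholders):

/* Illustrative only - field layout follows the standard mfe.cxx example frontend.
   EQ_USER combined with RO_ALWAYS is what the new check rejects;
   a triggered equipment would normally use RO_RUNNING instead. */
EQUIPMENT equipment[] = {
   {"Trigger",                /* equipment name */
    {1, 0,                    /* event ID, trigger mask */
     "SYSTEM",                /* event buffer */
     EQ_USER,                 /* equipment type: events sent from user code */
     0,                       /* event source */
     "MIDAS",                 /* format */
     TRUE,                    /* enabled */
     RO_ALWAYS,               /* <-- read when running, stopped and paused: now flagged */
     100,                     /* readout interval in ms */
     0,                       /* event limit */
     0,                       /* number of sub-events */
     0,                       /* history logging */
     "", "", ""},
    read_trigger_event,       /* readout routine (placeholder) */
   },
   {""}
};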

I have now received complaints that some old front-ends are not running any more since they 
use RO_ALWAYS together with triggered events. Can the authors of these frontends 
please tell me the rationale for why this is needed? Then I can maybe add a better 
fix for it.

Stefan
    Reply  08 May 2022, Konstantin Olchanski, Info, RO_STOPPED with triggered events 
> If events are sent when a run is stopped, this leads to many unexpected results

I think we need to understand what these unexpected results are.

Naively thinking, one would expect midas to not care 
    Reply  08 May 2022, Konstantin Olchanski, Info, RO_STOPPED with triggered events 
> some old front-end are not running any more since they do use RO_ALWAYS together with 
triggered events.

I confirm, if you have mfe.c frontends that have RO_ALWAYS, after you update MIDAS, 
some of these frontends will fail to start.
https://bitbucket.org/tmidas/midas/commits/1961af0d657e4f76ab9db17f9b70c0c492172b6d

tmfe c++ frontends do not have this restriction but by default only read data when run 
is active (per-equipment fEqConfReadOnlyWhenRunning default is true).

K.O.
    Reply  16 May 2022, Konstantin Olchanski, Info, RO_STOPPED with triggered events 
> > some old front-end are not running any more since they do use RO_ALWAYS together with 
> triggered events.
> 
> I confirm, if you have mfe.c frontends that have RO_ALWAYS, after you update MIDAS, 
> some of these frontends will fail to start.
> https://bitbucket.org/tmidas/midas/commits/1961af0d657e4f76ab9db17f9b70c0c492172b6d
> 
> tmfe c++ frontends do not have this restriction but by default only read data when run 
> is active (per-equipment fEqConfReadOnlyWhenRunning default is true).

As of commit 
https://bitbucket.org/tmidas/midas/commits/28d9c96bd6d4f65346ebcd6a04492ea764c90823 mfe.c 
frontends will no longer fail to start. An error will still be issued: "Equipment \"%s\" 
contains RO_STOPPED or RO_ALWAYS. This can lead to undesired side-effect and should be 
removed."

BTW 1:

Some of our old frontends use EQ_MULTITHREAD to implement multithreaded periodic equipments. 
They do not generate any events when there is no run (some of them do not generate any 
events at all). Now they will start printing this error message, for no reason. (no, we will 
not be rewriting them just to get rid of this message; life is too short).

BTW 2:

the c++ tmfe frontend does not have any protections against these "undesired side-effects".

What are these undesired side effects and should we add protection against them?

K.O.
    Reply  17 May 2022, Stefan Ritt, Info, RO_STOPPED with triggered events 
> > > some old front-end are not running any more since they do use RO_ALWAYS together with 
> > triggered events.
> > 
> > I confirm, if you have mfe.c frontends that have RO_ALWAYS, after you update MIDAS, 
> > some of these frontends will fail to start.
> > https://bitbucket.org/tmidas/midas/commits/1961af0d657e4f76ab9db17f9b70c0c492172b6d
> > 
> > tmfe c++ frontends do not have this restriction but by default only read data when run 
> > is active (per-equipment fEqConfReadOnlyWhenRunning default is true).
> 
> As of commit 
> https://bitbucket.org/tmidas/midas/commits/28d9c96bd6d4f65346ebcd6a04492ea764c90823 mfe.c 
> frontends will no longer fail to start. an error will still be issued "Equipment \"%s\" 
> contains RO_STOPPED or RO_ALWAYS. This can lead to undesired side-effect and should be 
> removed."
> 
> BTW 1:
> 
> Some of our old frontends use EQ_MULTITHREAD to implement multithreaded periodic equipments. 
> They do not generate any events when there is no run (some of them do not generate any 
> events at all). Now they will start printing this error message, for no reason. (no we will 
> not be rewriting them justy to get rid of this message. life is too short).
> 
> BTW 2:
> 
> the c++ tmfe frontend does not have any protections against these "undersired side-effects".
> 
> What are these undesired side effects and should we add protection against them?
> 
> K.O.

The undesired side-effects are the following: The logger tries to collect all events at the end of 
the run by emptying the SYSTEM buffer. If events keep coming after the run is stopped, this loop in 
the logger can become an endless loop, eventually crashing the whole experiment. 

Another issue (and actually the reason for this change) is the function receive_trigger_event() in 
mfe.cxx, which will get confused if events are still coming in after a run has been stopped and 
actually enters an infinite loop.

Combining EQ_MULTITHREAD with EQ_PERIODIC or EQ_SLOW is a wrong parameter combination as written in 
the documentation. If one wants to have multi-threaded slow control events, one has to use the 
DF_MULTITHREAD flag in the DEVICE_DRIVER structure.
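
For reference, a multi-threaded slow control device is declared roughly like this, with the flag on the device driver entry rather than on the equipment (sketch following the usual slow control example layout; the driver names are placeholders):

/* Sketch only - layout follows the usual slow control frontend examples.
   DF_MULTITHREAD on the DEVICE_DRIVER entry gives the device its own
   readout thread; the equipment itself stays a plain EQ_SLOW equipment
   without EQ_MULTITHREAD. */
DEVICE_DRIVER hv_driver[] = {
   {"HV Dummy", nulldev, 16, null, DF_MULTITHREAD},   /* dummy device driver */
   {""}
};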

I would consider it simply wrong to have triggered events being sent to the system after a run has 
been stopped. Why should we ever use a run start/stop if events are always flowing? Adding 
protections in all places for this case is certainly much more work than just changing one flag for 
the frontends which now produce this error message for a wrong parameter combination.
Entry  07 Aug 2019, Paolo Baesso, Bug Report, ROOTANA bug? 
Hi,

I posted on the ROOTANA elog but there seems to be little activity there...

Could someone confirm if this is a bug?
https://midas.triumf.ca/elog/Rootana/14

Another user replied that they are encountering the same issue, so I think it is unlikely it is just our installation.

While ROOTANA is unusable for us, I tried to use the example Frontend and Analyzer (under the Experiment source folder). The analyzer does not seem to do much though. A root file is produced but nothing is placed into it. Is that normal?

Any help would be welcome.
    Reply  07 Aug 2019, Thomas Lindner, Bug Report, ROOTANA bug? 
Hi Paolo,

Sorry for the slow response.  We were discussing this with Konstantin yesterday.  He is aware of the problem now and will be working on a solution soon.

In the short term I found that it works if you just comment out the offending line:

indnerlt:rootana lindner$ git diff libMidasInterface/TMidasOnline.cxx
diff --git a/libMidasInterface/TMidasOnline.cxx b/libMidasInterface/TMidasOnline.cxx
index 92eb3e9..67da613 100644
--- a/libMidasInterface/TMidasOnline.cxx
+++ b/libMidasInterface/TMidasOnline.cxx
@@ -191,7 +191,7 @@ bool TMidasOnline::sleep(int mdelay)
   #ifdef CH_IPC
   ss_suspend_set_dispatch(CH_IPC, 0, NULL);
   #else
-  ss_suspend_set_dispatch_ipc(NULL);
+  //  ss_suspend_set_dispatch_ipc(NULL);
   #endif
  int status = ss_suspend(mdelay, 0);
   if (status == SS_SUCCESS)

This compiles and at least runs for me; so maybe that is helpful for you.  But Konstantin will provide a longer term solution.



> Hi,
> 
> I posted on the ROOTANA elog but there seems to be little activity there...
> 
> Could someone confirm if this is a bug?
> https://midas.triumf.ca/elog/Rootana/14
> 
> Another user replied that they are encountering the same issue, so I think it is unlikely it is just our installation.
> 
> While ROOTANA is unusable for us, I tried to use the example Frontend and Analyzer (under the Experiment source folder). The analyzer does not seem to do much though. A root file is produced but nothing is placed into it. Is that normal?
> 
> Any help would be welcome.
    Reply  08 Aug 2019, Lauren Manton, Bug Report, ROOTANA bug? 
Hi,

Thank you, commenting out the line worked and we can now compile the code. However, when we try to run ana.exe or anaDisplay.exe, we get the following errors:

Error in <TCling::RegisterModule>: cannot find dictionary module TMainDisplayWindowDict_rdict.pcm
Error in <TCling::RegisterModule>: cannot find dictionary module TRootanaDisplayDict_rdict.pcm
Error in <TCling::RegisterModule>: cannot find dictionary module TFancyHistogramCanvasDict_rdict.pcm
 

We see that the files are in /rootana/obj but we cannot find a way to point the compiler to them.

Could you please advise how to proceed,

Many thanks

> Hi Paolo,
> 
> Sorry for the slow response.  We were discussing this with Konstantin yesterday.  He is aware of the problem now and will be working on a solution soon.
> 
> In the short term I found that it works if you just comment out the offending line:
> 
> indnerlt:rootana lindner$ git diff libMidasInterface/TMidasOnline.cxx
> diff --git a/libMidasInterface/TMidasOnline.cxx b/libMidasInterface/TMidasOnline.cxx
> index 92eb3e9..67da613 100644
> --- a/libMidasInterface/TMidasOnline.cxx
> +++ b/libMidasInterface/TMidasOnline.cxx
> @@ -191,7 +191,7 @@ bool TMidasOnline::sleep(int mdelay)
>    #ifdef CH_IPC
>    ss_suspend_set_dispatch(CH_IPC, 0, NULL);
>    #else
> -  ss_suspend_set_dispatch_ipc(NULL);
> +  //  ss_suspend_set_dispatch_ipc(NULL);
>    #endif
>   int status = ss_suspend(mdelay, 0);
>    if (status == SS_SUCCESS)
> 
> This compiles and at least runs for me; so maybe that is helpful for you.  But Konstantin will provide a longer term solution.
> 
> 
> 
> > Hi,
> > 
> > I posted on the ROOTANA elog but there seems to be little activity there...
> > 
> > Could someone confirm if this is a bug?
> > https://midas.triumf.ca/elog/Rootana/14
> > 
> > Another user replied that they are encountering the same issue, so I think it is unlikely it is just our installation.
> > 
> > While ROOTANA is unusable for us, I tried to use the example Frontend and Analyzer (under the Experiment source folder). The analyzer does not seem to do much though. A root file is produced but nothing is placed into it. Is that normal?
> > 
> > Any help would be welcome.