Message ID: 673     Entry time: 20 Nov 2009     In reply to: 570
Author: Konstantin Olchanski 
Topic: Info 
Subject: RPC.SHM gyration 
> When using remote midas clients with mserver, you may have noticed the zero-size .RPC.SHM files 
> these clients create in the directory where you run them.

Well, RPC.SHM bites. Please reread the parent message for full details, but in a nutshell, it is a global
semaphore that permits only one midas rpc client to talk to midas at a time (it was intended for local
locking between threads inside one midas application).

I have about 10 remote midas frontends started by ssh, all in the same directory, so they all share the same
.RPC.SHM semaphore and do not live through the night: they die from ODB timeouts caused by RPC semaphore contention.

In a test version of MIDAS, I disabled the RPC.SHM semaphore, and now my clients live through the night, very
good.

Long term, we should fix this by using application-local mutexes (i.e. pthread_mutex, which also works on MacOS;
does Windows have pthreads yet?).
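
Roughly what I have in mind (just a sketch, not MIDAS code; the rpc_lock()/rpc_unlock() names are for
illustration only): a process-local recursive pthread mutex that serializes RPC calls between the threads
of one application, so unrelated clients that happen to run in the same directory never contend with each other.

#include <pthread.h>

static pthread_mutex_t rpc_mutex;
static pthread_once_t  rpc_mutex_once = PTHREAD_ONCE_INIT;

static void rpc_mutex_init(void)
{
   pthread_mutexattr_t attr;
   pthread_mutexattr_init(&attr);
   /* recursive, so nested calls from the same thread do not deadlock */
   pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
   pthread_mutex_init(&rpc_mutex, &attr);
   pthread_mutexattr_destroy(&attr);
}

/* locks only between threads of this process; other midas clients
   in the same directory are not affected */
void rpc_lock(void)
{
   pthread_once(&rpc_mutex_once, rpc_mutex_init);
   pthread_mutex_lock(&rpc_mutex);
}

void rpc_unlock(void)
{
   pthread_mutex_unlock(&rpc_mutex);
}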

This will also clean up some of the ODB locking, which currently confuses pids, tids, etc., and is completely
broken on MacOS because some of these values are 64-bit and do not fit into the 32-bit data fields in MIDAS
shared memories.

Short term, I can add a flag for enabling and disabling the RPC semaphore from the user application: enabled
by default, but the user can disable it if they do not use threads (see the sketch below).

Alternatively, I can disable it by default, then enable it automatically if multiple threads are detected or
if ss_thread_create() is called.

Could also make it an environment variable.
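
For the short-term option, something like this (purely hypothetical, none of these names exist in midas right
now): a per-application switch, enabled by default, with an environment variable override.

#include <stdlib.h>

static int rpc_use_semaphore = 1;   /* enabled by default */

/* hypothetical call a frontend could make before cm_connect_experiment() */
void rpc_enable_semaphore(int flag)
{
   /* hypothetical environment variable name */
   const char *env = getenv("MIDAS_DISABLE_RPC_SEMAPHORE");
   if (env && atoi(env) != 0)
      flag = 0;
   rpc_use_semaphore = flag;
}

A single-threaded frontend would then call rpc_enable_semaphore(0) before connecting to the experiment, or
simply set the environment variable in its start-up script.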

Any preferences?

K.O.