Message ID: 2491     Entry time: 28 Apr 2023     In reply to: 2490     Reply to this: 2492
Author: Martin Mueller 
Topic: Forum 
Subject: Problem with running midas odbxx frontends on a remote machine using the -h option 
> > As i said we can easily reproduce this with midas/examples/odbxx/odbxx_test.cpp  (with cm_connect_experiment changed to "localhost")
> > [test,ERROR] [system.cxx:5104:recv_tcp2,ERROR] unexpected connection closure
> > [test,ERROR] [system.cxx:5158:ss_recv_net_command,ERROR] error receiving network command header, see messages
> > [test,ERROR] [midas.cxx:13900:rpc_call,ERROR] routine "db_copy_xml": error, ss_recv_net_command() status 411, program abort
> 
> ok, cool. looks like we crashed the mserver. either run mserver attached to gdb or enable mserver core dumps; we need its stack trace.
> the correct stack trace should be rooted in the handler for db_copy_xml.
> 
> but most likely odbxx is asking for more data than can be returned through the MIDAS RPC.
> 
> what is the ODB key passed to db_copy_xml() and how much data is in ODB at that key? (odbedit "du", right?).
> 
> K.O.

Ok, maybe I have to make this clearer: ANY odbxx access of a remote ODB reproduces this error for us, on multiple machines.
It does not matter how much data odbxx is asking for.

Something as simple as this reproduces the error, asking for a single integer:

#include "midas.h"
#include "odbxx.h"

int main() {
   // connect to the experiment remotely, through mserver on localhost
   cm_connect_experiment("localhost", "Mu3e", "test", NULL);
   midas::odb o = {
      {"Int32 Key", 42}
   };
   o.connect("/Test/Settings");   // this access triggers the db_copy_xml RPC error
   cm_disconnect_experiment();
   return 0;
}

At the same time, the same code connecting directly to the local ODB (no hostname, so no mserver RPC involved) runs fine:

#include "midas.h"
#include "odbxx.h"

int main() {
   // connect locally, without going through mserver
   cm_connect_experiment(NULL, NULL, "test", NULL);
   midas::odb o = {
      {"Int32 Key", 42}
   };
   o.connect("/Test/Settings");
   cm_disconnect_experiment();
   return 0;
}

In both cases mserver does not crash, so I do not have a stack trace; there is also no error produced by mserver.
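
One more thing we can try to separate odbxx from the RPC layer is to read the same key remotely through the plain C API: db_get_value() goes through a different RPC than odbxx's db_copy_xml(). A minimal sketch (untested; the key path and value are just our test setup from above):

#include <stdio.h>
#include "midas.h"

int main() {
   // same remote connection as in the failing odbxx example
   cm_connect_experiment("localhost", "Mu3e", "test", NULL);

   HNDLE hDB;
   cm_get_experiment_database(&hDB, NULL);

   // read the same key through the plain C API
   INT value = 0;
   INT size = sizeof(value);
   INT status = db_get_value(hDB, 0, "/Test/Settings/Int32 Key",
                             &value, &size, TID_INT32, FALSE);
   printf("db_get_value status %d, value %d\n", status, value);

   cm_disconnect_experiment();
   return 0;
}

If this succeeds while the odbxx version fails, that would point at odbxx's db_copy_xml path rather than the remote connection as a whole.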

Last year we did not have these problems with the same midas frontends (for example, at midas commit 9d2ef471 the code above runs
fine). I am now trying to pinpoint the exact commit where this stopped working.