> The problem is that eventually some of frontend closed with message
> :19:22:31.834 2021/12/02 [rootana,INFO] Client 'Sample Frontend38' on buffer
> 'SYSMSG' removed by cm_periodic_tasks because process pid 9789 does not exist
This message means what it says. A client was registered with the SYSMSG buffer, and this
client had pid 9789. At some point another client (rootana, in this case) checked it and
found that process pid 9789 was no longer running, so it proceeded to remove the registration.
There are two possibilities:
- simplest: your frontend has crashed. This is best debugged by running the frontend inside
gdb and waiting for the crash.
- unlikely: the reported pid is bogus, the real pid of your frontend is different, and the
client registration in SYSMSG is corrupted. This would indicate massive corruption of the midas
shared memory buffers, which is not impossible if your frontend misbehaves and writes to random
memory addresses. ODB has protection against this (normally turned off, but easy to enable: set
ODB "/experiment/protect odb" to yes); the shared memory event buffers have no such protection
(should it be added?).
Do this: when you start your frontend, write down its pid; when you see the crash message,
confirm that the pid printed in the message is the same. As an additional test, run your
frontend inside gdb; after it crashes you can print the stack trace, etc.
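As a rough sketch (assuming a plain mfe.c-style frontend), you can also have the frontend
print its own pid at startup, for example from your existing frontend_init():

#include <stdio.h>   /* printf */
#include <unistd.h>  /* getpid */
#include "midas.h"

INT frontend_init()
{
   /* print our own pid, to compare against the pid reported in the
      "removed by cm_periodic_tasks because process pid NNNN does not exist" message */
   printf("frontend pid: %d\n", (int) getpid());
   return SUCCESS;
}

(For a frontend connected remotely through the mserver, the pid registered in the buffer is
the mserver's pid, see below.)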
>
> in the meantime mserver loggging :
> mserver started interactively
> mserver will listen on TCP port 1175
> double free or corruption (!prev)
> double free or corruption (!prev)
> free(): invalid next size (normal)
> double free or corruption (!prev)
>
Are these "double free" messages coming from the mserver or from your frontend? (i.e. you run
them in different terminals, not all in the same terminal?).
If messages are coming from the mserver, this confirms possibility (1),
except that for frontends connected remotely, the pid is the pid of the mserver,
and what we see are crashes of mserver, not crashes of your frontend. These are much harder to
debug.
You will need to:
- enable core dumps (set ODB "/Experiment/Enable core dumps" to "y"),
- confirm that core dumps work (i.e. run "killall -SEGV mserver" and observe that core files
are created in the directory where you started the mserver),
- reproduce the crash,
- run "gdb mserver core.NNNN", then "bt" to print the stack trace,
- post the stack trace here (or email it to me directly).
>
> I can find some correlation between number of events/event size produced by
> frontend, cause its failed when its become big enough.
>
There is no practical limit on event size or event rate in midas; you should not see any crash
regardless of what you do. (Strictly speaking there is a limit on event size, because an event
has to fit inside an event buffer, and the event buffer size is limited to 2 GB.)
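For reference, the relevant limits in an mfe.c frontend are set by the standard globals at the
top of the frontend file. A sketch with placeholder numbers (the names are the standard mfe.c
globals):

/* example values only; every event must fit in max_event_size and also
   inside the shared memory event buffer (e.g. SYSTEM), which is limited to 2 GB */
INT max_event_size      = 4 * 1024 * 1024;       /* largest single event, in bytes */
INT max_event_size_frag = 5 * 1024 * 1024;       /* max total size of fragmented (jumbo) events */
INT event_buffer_size   = 10 * 4 * 1024 * 1024;  /* size of the frontend's local event buffer */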
Obviously you hit a bug in mserver that makes it crash. Let's debug it.
One thing to try is to set the write cache size to zero and see if your crash goes away. I see
some indication of something rotten in the event buffer code when the write cache is enabled.
This is set in ODB "/Equipment/XXX/Common/Write Cache Size"; set it to zero. (Beware of the
recent confusion where ODB settings have no effect depending on the value of
"equipment_common_overwrite".)
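Because of that confusion, it is worth verifying that the setting actually took effect. A
rough sketch for an mfe.c-style frontend that reads the value back at begin-of-run ("XXX"
stands for your equipment name; the key name and INT32 type are assumed to match current
midas):

#include <stdio.h>
#include "midas.h"

INT begin_of_run(INT run_number, char *error)
{
   HNDLE hDB;
   INT cache = -1;
   INT size = sizeof(cache);

   /* read back the effective write cache size, to confirm the ODB
      setting was not silently overwritten at equipment registration */
   cm_get_experiment_database(&hDB, NULL);
   INT status = db_get_value(hDB, 0, "/Equipment/XXX/Common/Write cache size",
                             &cache, &size, TID_INT32, FALSE);
   if (status == DB_SUCCESS)
      printf("run %d: write cache size is %d\n", run_number, cache);
   else
      printf("run %d: cannot read write cache size, status %d\n", run_number, status);
   return SUCCESS;
}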
>
> frontend scheme is like this:
>
It would be best if you used the tmfe C++ frontend; event data handling is much simpler there,
and we would not have to debug the convoluted old code in mfe.c.
K.O.
>
> poll event time set to 0;
>
> poll_event{
> //if buffer not transferred return (continue cutting the main buffer)
> //read main buffer from hardware
> //buffer not transfered
> }
>
> read event{
> // cut the main buffer to subevents (cut one event from main buffer) return;
> //if (last subevent) {buffer transfered ;return}
> }
>
> What is strange to me that 2 frontends (1 per remote pc) causing this.
>
> Also, I'm executing one FEcode with -i # flag , put setting eventid in
> frontend_init , and using SYSTEM buffer for all.
>
> Is there something I'm missing?
> Thanks.
> A.