ID | Date | Author | Topic | Subject

2864 | 08 Oct 2024 | Mark Grimes | Bug Report | Difficulty running MIDAS on Rocky 9.4

We run Midas with no problems on Rocky 9.4, although not in a virtual machine. We're very close to the head of `develop`.
I'm fairly sure I've seen an error like this before. I didn't pay it much attention because it was transitory and I was doing something weird at the time - probably
stepping through with a debugger and hit a timeout. It was definitely about an ODB semaphore but I can't recall if it was about a recursive call.
Basically I think Rocky 9.4 is a red herring. Do you have another crashed copy of mserver/mhttpd running somewhere and stuck in limbo? If it's a virtual machine
are you sharing the shared memory location with the host, and running another midas on there?
> > We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual
> > Machine and are getting a persistent error when we run mserver.
> >
> > [mserver,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore,
> > timeout 10000 ms, exiting...
> > db_lock_database: Detected recursive call to db_{lock,unlock}_database() while
> > already inside db_{lock,unlock}_database(). Maybe this is a call from a signal
> > handler. Cannot continue, aborting...
> > Aborted (core dumped)
>
> This is super very bad. Since you have a core dump, please post the stack trace here (or email it to me).
>
> I probably cannot debug your private version of midas and I will recommend that you install and run vanilla midas
> mserver (just while we debug this problem).
>
> Let's look at the core dump stack trace first, but likely we see a problem with System-V semaphores and hopefully it
> is not some breakage due to Red Hat bogosity or due to something specific to running on a virtual machine.
>
> If indeed this is Linux-kernel-level breakage of System-V semaphores, the solution would be to start using POSIX
> semaphores, something I have wanted to do for a long time. We already switched MIDAS shared memory from System-V to POSIX
> shared memory.
>
> If we are lucky it is just one more crasher bug in ODB. Let's see that core dump stack trace.
>
> K.O.

2865 | 08 Oct 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4

> > We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual
> > Machine and are getting a persistent error when we run mserver. As far as I
> > know there are minimal changes between this and the MIDAS branch, but Ben Smith
> > may have more to say on this.
>
> For reference, "the SuperCDMS version of MIDAS" is just a fork that no longer has any meaningful differences vs the main MIDAS repo, but we only pull updates infrequently after testing a bunch. We last pulled from the develop branch in November 2023. But that should be irrelevant here as semaphore code hasn't been touched for a very long time.
>
> We're running Alma 9.4 on a machine at TRIUMF and the same version of midas works fine there (Amy, you may already have access to scdms-zeus). I believe Alma and Rocky should be basically identical for this.
>
> So the questions are:
> * Have you tried other midas programs, or only mserver? E.g. did odbedit and mhttpd work?
> * If other programs work, have you been running them all as the same user? In particular, if you ran one program as root and another as an unprivileged user, then you will likely get odd permissions issues.
> * What do you see if you run `ls -l /dev/shm` and `ls -l ~/packages/SuperCDMS_DAQ/MidasDAQ/online/.*SHM`? (Or wherever your online dir is for the 2nd one).
> * Did you follow the full instructions for recovering from a corrupt ODB? https://daq00.triumf.ca/MidasWiki/index.php/FAQ#How_to_recover_from_a_corrupted_ODB In particular the bit about running odbinit with the --cleanup flag?
Here's what happens when I try to run odbedit:
[lekhraj@sdfcdmsdaq setup]$ odbedit
[ODBEdit,ERROR] [odb.cxx:2052:db_open_database,ERROR] Removed ODB client 'ODBEdit', index 0 because process pid 481823 does not exists
[ODBEdit,INFO] Removed open record flag from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Removed exclusive access mode from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Corrected 1 ODB entries
[ODBEdit,INFO] Deleted entry '/System/Clients/481823' for client 'ODBEdit' because it is not connected to ODB
[ODBEdit,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer because process pid 481823 does not exist
[ODBEdit,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore, timeout 10000 ms, exiting...
[ODBEdit,ERROR] [midas.cxx:2205:cm_check_connect,ERROR] cm_disconnect_experiment not called at end of program
db_lock_database: Detected recursive call to db_{lock,unlock}_database() while already inside db_{lock,unlock}_database(). Maybe this is a call from a signal handler. Cannot continue, aborting...
Aborted (core dumped)
[lekhraj@sdfcdmsdaq setup]$
And mhttpd:
[lekhraj@sdfcdmsdaq setup]$ mhttpd
[mhttpd,ERROR] [odb.cxx:2052:db_open_database,ERROR] Removed ODB client 'ODBEdit', index 0 because process pid 601054 does not exists
[mhttpd,INFO] Removed open record flag from "/Experiment/Security/RPC hosts/Allowed hosts"
[mhttpd,INFO] Removed exclusive access mode from "/Experiment/Security/RPC hosts/Allowed hosts"
[mhttpd,INFO] Corrected 1 ODB entries
[mhttpd,INFO] Deleted entry '/System/Clients/601054' for client 'ODBEdit' because it is not connected to ODB
[mhttpd,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer because process pid 601054 does not exist
[mhttpd,INFO] ODB subtree /Runinfo corrected successfully
Password protection is off
Hostlist off, connections from anywhere will be accepted
Listening on "http://localhost:8080", passwords OFF, hostlist OFF
Listening on "http://[::1]:8080", passwords OFF, hostlist OFF
bm_lock_buffer: Lock buffer "SYSMSG" is taking longer than 1 second!
[mhttpd,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore, timeout 10000 ms, exiting...
[mhttpd,ERROR] [midas.cxx:2205:cm_check_connect,ERROR] cm_disconnect_experiment not called at end of program
db_lock_database: Detected recursive call to db_{lock,unlock}_database() while already inside db_{lock,unlock}_database(). Maybe this is a call from a signal handler. Cannot continue, aborting...
Aborted (core dumped)
[lekhraj@sdfcdmsdaq setup]$
We have been running everything as a single user, the user who cloned the repositories and owns the directories.
We did follow the corrupted-ODB cleanup instructions.
[lekhraj@sdfcdmsdaq setup]$ ls -lh /dev/shm
total 1.3M
-rw------- 1 lekhraj dm 1.2M Oct 8 14:13 17468_test_ODB__sdf_home_l_lekhraj_packages_SuperCDMS_DAQ_MidasDAQ_online_
-rw------- 1 lekhraj dm 114K Oct 7 14:06 17468_test_SYSMSG__sdf_home_l_lekhraj_packages_SuperCDMS_DAQ_MidasDAQ_online_
[lekhraj@sdfcdmsdaq setup]$ ls -lh ~/packages/SuperCDMS_DAQ/MidasDAQ/online/.*SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.ALARM.SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.ELOG.SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.HISTORY.SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.LAZY.SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.MSG.SHM
-rw-r--r-- 1 lekhraj dm 1.2M Oct 8 14:12 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.ODB.SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.SYSMSG.SHM
-rw-r--r-- 1 lekhraj dm 0 Oct 3 08:46 /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/MidasDAQ/online/.SYSTEM.SHM

2866 | 08 Oct 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4

> > We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual
> > Machine and are getting a persistent error when we run mserver.
> [...]
I've uploaded the current core dump at: https://gitlab.com/det-lab/coredumps#.
This was done using the "CDMS" version of MIDAS; I'll compile the current MIDAS repository just to be sure we're seeing
the same error and report back here!

2867 | 08 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

> I've uploaded the current core dump at: https://gitlab.com/det-lab/coredumps#.
I cannot read the core dump without the corresponding executable (and likely all its shared libraries).
It is best if you run gdb and extract the stack traces on your end.
In case you are not familiar with gdb:
gdb mserver core # start gdb
bt # stack trace of crashed thread
info thr # get list of threads
thr 1
bt
thr 2
bt
# etc, get stack trace of each thread, there should not be too many of them
K.O.

2868 | 08 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

I read these error messages. There is no ODB corruption. The ODB semaphore is locked and all midas programs will fail: they
will time out trying to get the lock and report the timeout. Then, it looks like a bug was introduced where, instead of a
hard exit or abort(), they attempt a clean shutdown, which crashes on a recursive call in db_lock_database(). Amy's
core dump should confirm this.
K.O.
> > > We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual
> > > Machine and are getting a persistent error when we run mserver.
> [...]
2869 | 08 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

> Basically I think Rocky 9.4 is a red herring.
This is what likely happened:
- some program crashed while holding the ODB lock semaphore (by ctrl-C at the wrong time or by kill -KILL at the wrong time)
- the semaphore is locked with a flag "unlock if this program stops" (see the sketch after this list)
- this is supposed to ensure ODB lock semaphore never gets stuck in the locked state
- (there is no code in MIDAS to unlock the ODB lock semaphore without locking it first)
- we have observed a malfunction in the Linux kernel, where this automatic unlock does not happen.
- it is rare and so far cannot be reproduced. you can find more about it by searching this forum.
- I think it is a bug in the System-V semaphore code or in the Linux "program stop" code (a path where they fail to call the semaphore unlock handler, and who knows what
other handlers).
- System-V semaphores are very obsolete, replaced by POSIX semaphores.
- POSIX semaphores do not have the "unlock if this program stops" magic; the user (MIDAS) is responsible for detecting
that the program that locked the semaphore is gone and for taking corrective action, i.e. releasing the lock automatically.
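For illustration, here is a minimal standalone sketch (not MIDAS code; key handling and error paths are simplified) of
taking a System-V semaphore with the SEM_UNDO flag, which is exactly the "unlock if this program stops" magic referred
to above:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <cstdio>

// glibc requires the caller to define this union for semctl(SETVAL)
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main()
{
   // create a private single-semaphore set, initially unlocked (value 1)
   int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
   if (semid < 0) { perror("semget"); return 1; }
   union semun arg; arg.val = 1;
   semctl(semid, 0, SETVAL, arg);

   // lock: decrement to 0. SEM_UNDO asks the kernel to undo the operation
   // (i.e. release the lock) automatically if this process exits or crashes
   // while holding it -- the mechanism that apparently malfunctioned here.
   struct sembuf lock = {0, -1, SEM_UNDO};
   if (semop(semid, &lock, 1) < 0) { perror("semop lock"); return 1; }

   // ... critical section (in MIDAS: ODB access) ...

   // unlock: increment back to 1
   struct sembuf unlock = {0, +1, SEM_UNDO};
   if (semop(semid, &unlock, 1) < 0) { perror("semop unlock"); return 1; }

   semctl(semid, 0, IPC_RMID, arg);   // remove the semaphore (what ipcrm does)
   return 0;
}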
I do not know if this problem with System-V semaphores is in the generic Linux kernel, or if it is specific
to the Red Hat kernels (they are known to have many patches, deviating quite far from vanilla kernels).
I do not know if this problem is somehow sensitive to virtual machines.
So yes/no: Red Hat-derived Linux on a virtual machine could be where this problem happens more often than elsewhere.
K.O.

2870 | 08 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

> I read these error messages. There is no ODB corruption. The ODB semaphore is locked and all midas programs will fail...
Recovery from this is:
- stop all midas programs (actually they should have all crashed by now)
- identify the ODB semaphore with: ipcs -s -t
- remove the ODB semaphore with: ipcrm sem <semid>
- where <semid> is from the first column of ipcs
- keep deleting semaphores until odbedit works.
- if you delete extra midas semaphores, odbedit will recreate them
- if you delete non-midas semaphores, oh, well...
Slightly better steps for this recovery may have been written up by Suzannah in this forum
or in the midas wiki... good luck finding them.
K.O.

2873 | 10 Oct 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4

> > I've uploaded the current core dump at: https://gitlab.com/det-lab/coredumps#.
> [...]
Hi Konstantin, thanks for the instructions. I do appear to be missing some debug symbols, but the output
looks potentially useful:
[lekhraj@sdfcdmsdaq ~]$ gdb mserver core.mserver.17468.b174bb74f2bb44f9a0905e78ec6b2677.601715.1728422354000000
GNU gdb (GDB) Rocky Linux 10.2-11.1.el9_3
...
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from mserver...
[New LWP 601715]
warning: Section `.reg-xstate/601715' in core file too small.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `mserver'.
Program terminated with signal SIGABRT, Aborted.
warning: Section `.reg-xstate/601715' in core file too small.
#0 0x00007fbdeaca154c in __pthread_kill_implementation () from /lib64/libc.so.6
Missing separate debuginfos, use: dnf debuginfo-install glibc-2.34-83.el9.12.x86_64 libgcc-11.4.1-
3.el9.x86_64 libstdc++-11.4.1-2.1.el9.x86_64 libzstd-1.5.1-2.el9.x86_64 mysql-libs-8.0.36-1.el9_3.x86_64
openssl-libs-3.0.7-25.el9_3.x86_64 zlib-1.2.11-40.el9.x86_64
(gdb)
(gdb) bt
#0 0x00007fbdeaca154c in __pthread_kill_implementation () from /lib64/libc.so.6
#1 0x00007fbdeac54d06 in raise () from /lib64/libc.so.6
#2 0x00007fbdeac287f3 in abort () from /lib64/libc.so.6
#3 0x0000000000430ee4 in db_lock_database (hDB=hDB@entry=1)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:2473
#4 0x0000000000437e9c in db_find_key (subhKey=0x7ffcc536d348, key_name=0x4687a8 "/Logger/Message file date
format",
hKey=0, hDB=1) at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4099
#5 db_find_key (hDB=1, hKey=0, key_name=0x4687a8 "/Logger/Message file date format",
subhKey=0x7ffcc536d348)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4075
#6 0x0000000000448297 in db_get_value_string (hdb=1, hKeyRoot=hKeyRoot@entry=0,
key_name=key_name@entry=0x4687a8 "/Logger/Message file date format", index=index@entry=0,
s=s@entry=0x7ffcc536d470, create=create@entry=1, create_string_length=0)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:13950
#7 0x000000000040a690 in cm_msg_get_logfile (fac=<optimized out>, t=<optimized out>,
filename=0x7ffcc536d690,
linkname=0x7ffcc536d6b0, linktarget=0x7ffcc536d6d0)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:573
#8 0x000000000041a307 in cm_msg_log (message_type=1, facility=0x46db0e "midas",
message=0x7e4290 "[mserver,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore,
timeout 10000 ms, exiting...") at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:685
#9 0x0000000000421fcd in cm_msg_flush_buffer () at /usr/include/c++/11/bits/basic_string.h:194
#10 0x00007fbdeac574dd in __run_exit_handlers () from /lib64/libc.so.6
#11 0x00007fbdeac57620 in exit () from /lib64/libc.so.6
#12 0x0000000000430f7a in db_lock_database (hDB=hDB@entry=1)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:2499
#13 0x0000000000437e9c in db_find_key (subhKey=0x7ffcc536da04, key_name=0x476a21 "/Alarms/Alarms", hKey=0,
hDB=1)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4099
#14 db_find_key (hDB=1, hKey=hKey@entry=0, key_name=key_name@entry=0x476a21 "/Alarms/Alarms",
subhKey=subhKey@entry=0x7ffcc536da04) at
/sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4075
#15 0x0000000000455fd2 in al_check () at
/sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/alarm.cxx:614
--Type <RET> for more, q to quit, c to continue without paging--
#16 0x000000000041ff85 in cm_periodic_tasks ()
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:5596
#17 0x00000000004235c5 in cm_yield (millisec=millisec@entry=1000)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:5676
#18 0x00000000004065c2 in main (argc=<optimized out>, argv=0x7ffcc536e628)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/progs/mserver.cxx:295
(gdb) info thr
Id Target Id Frame
* 1 Thread 0x7fbdec0b1740 (LWP 601715) 0x00007fbdeaca154c in __pthread_kill_implementation () from
/lib64/libc.so.6
(gdb) thr 1
[Switching to thread 1 (Thread 0x7fbdec0b1740 (LWP 601715))]
#0 0x00007fbdeaca154c in __pthread_kill_implementation () from /lib64/libc.so.6
(gdb) bt
#0 0x00007fbdeaca154c in __pthread_kill_implementation () from /lib64/libc.so.6
#1 0x00007fbdeac54d06 in raise () from /lib64/libc.so.6
#2 0x00007fbdeac287f3 in abort () from /lib64/libc.so.6
#3 0x0000000000430ee4 in db_lock_database (hDB=hDB@entry=1)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:2473
#4 0x0000000000437e9c in db_find_key (subhKey=0x7ffcc536d348, key_name=0x4687a8 "/Logger/Message file date
format",
hKey=0, hDB=1) at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4099
#5 db_find_key (hDB=1, hKey=0, key_name=0x4687a8 "/Logger/Message file date format",
subhKey=0x7ffcc536d348)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4075
#6 0x0000000000448297 in db_get_value_string (hdb=1, hKeyRoot=hKeyRoot@entry=0,
key_name=key_name@entry=0x4687a8 "/Logger/Message file date format", index=index@entry=0,
s=s@entry=0x7ffcc536d470, create=create@entry=1, create_string_length=0)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:13950
#7 0x000000000040a690 in cm_msg_get_logfile (fac=<optimized out>, t=<optimized out>,
filename=0x7ffcc536d690,
linkname=0x7ffcc536d6b0, linktarget=0x7ffcc536d6d0)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:573
#8 0x000000000041a307 in cm_msg_log (message_type=1, facility=0x46db0e "midas",
message=0x7e4290 "[mserver,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore,
timeout 10000 ms, exiting...") at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:685
#9 0x0000000000421fcd in cm_msg_flush_buffer () at /usr/include/c++/11/bits/basic_string.h:194
#10 0x00007fbdeac574dd in __run_exit_handlers () from /lib64/libc.so.6
#11 0x00007fbdeac57620 in exit () from /lib64/libc.so.6
#12 0x0000000000430f7a in db_lock_database (hDB=hDB@entry=1)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:2499
#13 0x0000000000437e9c in db_find_key (subhKey=0x7ffcc536da04, key_name=0x476a21 "/Alarms/Alarms", hKey=0,
hDB=1)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4099
#14 db_find_key (hDB=1, hKey=hKey@entry=0, key_name=key_name@entry=0x476a21 "/Alarms/Alarms",
subhKey=subhKey@entry=0x7ffcc536da04) at
/sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/odb.cxx:4075
#15 0x0000000000455fd2 in al_check () at
/sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/alarm.cxx:614
#16 0x000000000041ff85 in cm_periodic_tasks ()
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:5596
#17 0x00000000004235c5 in cm_yield (millisec=millisec@entry=1000)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/src/midas.cxx:5676
#18 0x00000000004065c2 in main (argc=<optimized out>, argv=0x7ffcc536e628)
at /sdf/home/l/lekhraj/packages/SuperCDMS_DAQ/midas_fork/progs/mserver.cxx:295
(gdb)

2876 | 13 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

Thank you for the stack trace. I fixed the buglet that caused midas programs to crash twice: once on failure to lock ODB,
then again via exit() -> atexit() handlers -> cm_check_connect() -> crash on the ODB lock failure in the cm_msg() code.
Replaced exit(1) with abort(). Could have used kill(getpid(),SIGKILL) to avoid making a core dump, but what the
heck...
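For illustration, here is a schematic sketch of the crash-twice chain (not the actual MIDAS code; the function names
are simplified stand-ins for db_lock_database() and the cm_msg() atexit handler):

#include <cstdio>
#include <cstdlib>

static bool inside_lock = false;   // recursion guard, as in db_lock_database()

void lock_database()
{
   if (inside_lock) {
      // second entry, from the atexit() path: this is the recursive call
      fprintf(stderr, "detected recursive call, aborting...\n");
      abort();                     // straight to the core dump
   }
   inside_lock = true;
   bool locked = false;            // pretend the semaphore timed out
   if (!locked) {
      fprintf(stderr, "cannot lock semaphore, exiting...\n");
      exit(1);                     // runs atexit() handlers -> flush_messages()
   }
   inside_lock = false;
}

void flush_messages()              // stand-in for the cm_msg() atexit handler
{
   lock_database();                // needs the same lock to write the log file
}

int main()
{
   atexit(flush_messages);
   lock_database();                // the first lock failure starts the chain
}

With the fix, the first failure calls abort() directly, so the atexit() chain never runs.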
Of course this does nothing to the original bug where ODB was locked and nobody will ever unlock it (reboot will
unlock it!).
commit bdd1d7fdc093b5a8d54a1b8467002bb3cac3ac11
K.O.
> > > I've uploaded the current core dump at: https://gitlab.com/det-lab/coredumps#.
> [...]
> Hi Konstantin, thanks for the instructions. I do appear to be missing some debug symbols, but the output
> looks potentially useful:
> [... full gdb session quoted above ...]
2877 | 16 Oct 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4

> Thank you for the stack trace. I fixed the buglet that caused midas programs to crash twice: once on failure to lock ODB,
> then again via exit() -> atexit() handlers -> cm_check_connect() -> crash on the ODB lock failure in the cm_msg() code.
>
> Replaced exit(1) with abort(). Could have used kill(getpid(),SIGKILL) to avoid making a core dump, but what the
> heck...
>
> Of course this does nothing to the original bug where ODB was locked and nobody will ever unlock it (reboot will
> unlock it!).
>
> commit bdd1d7fdc093b5a8d54a1b8467002bb3cac3ac11
>
> K.O.
> [...]
I checked out the modified version of Midas and recompiled, and am still getting a similar error when I try to run
odbedit:
[aroberts@sdfcdmsdaq midas]$ odbedit
[ODBEdit,ERROR] [odb.cxx:2043:db_open_database,ERROR] Removed ODB client 'ODBEdit', index 0 because process pid
1615051 does not exists
[ODBEdit,INFO] Removed open record flag from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Removed exclusive access mode from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Corrected 1 ODB entries
[ODBEdit,INFO] Deleted entry '/System/Clients/1615051' for client 'ODBEdit' because it is not connected to ODB
[ODBEdit,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer because process pid 1615051 does not
exist
[ODBEdit,ERROR] [odb.cxx:2489:db_lock_database,ERROR] cannot lock ODB semaphore, timeout 10000 ms, aborting...
Aborted (core dumped)
I'm not sure what's causing the call to lock the database; all I'm doing is typing "odbedit" at the command prompt.
I should add that I followed the instructions for unlocking the ODB, but when I call "odbedit" this error still
appears:
[aroberts@sdfcdmsdaq midas]$ ipcs -s -t
------ Semaphore Operation/Change Times --------
semid owner last-op last-changed
4 aroberts Wed Oct 16 11:44:10 2024 Wed Oct 16 11:44:00 2024
[aroberts@sdfcdmsdaq midas]$ ipcrm sem 4
resource(s) deleted
[aroberts@sdfcdmsdaq midas]$ odbedit
[ODBEdit,ERROR] [odb.cxx:2043:db_open_database,ERROR] Removed ODB client 'ODBEdit', index 0 because process pid
1617050 does not exists
[ODBEdit,INFO] Removed open record flag from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Removed exclusive access mode from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Corrected 1 ODB entries
[ODBEdit,INFO] Deleted entry '/System/Clients/1617050' for client 'ODBEdit' because it is not connected to ODB
[ODBEdit,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer because process pid 1617050 does not
exist
[ODBEdit,ERROR] [odb.cxx:2489:db_lock_database,ERROR] cannot lock ODB semaphore, timeout 10000 ms, aborting...
Aborted (core dumped)

2879 | 18 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

> [aroberts@sdfcdmsdaq midas]$ odbedit
> [ODBEdit,ERROR] [odb.cxx:2043:db_open_database,ERROR] Removed ODB client 'ODBEdit', index 0 because process pid
> 1615051 does not exists
> [ODBEdit,INFO] Removed open record flag from "/Experiment/Security/RPC hosts/Allowed hosts"
> [ODBEdit,INFO] Removed exclusive access mode from "/Experiment/Security/RPC hosts/Allowed hosts"
> [ODBEdit,INFO] Corrected 1 ODB entries
> [ODBEdit,INFO] Deleted entry '/System/Clients/1615051' for client 'ODBEdit' because it is not connected to ODB
> [ODBEdit,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer because process pid 1615051 does not
> exist
So far, so good: we connected to ODB (the lock was not stuck) and cleared out client 'ODBEdit' with pid 1615051, which crashed
without properly disconnecting. The ODB semaphore is working correctly.
> [ODBEdit,ERROR] [odb.cxx:2489:db_lock_database,ERROR] cannot lock ODB semaphore, timeout 10000 ms, aborting...
> Aborted (core dumped)
suddenly, an ODB semaphore timeout...
can you post the stack trace from this core dump? I am pretty sure it will be boring, but just in case...
K.O.

2880 | 18 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4

> suddenly, an ODB semaphore timeout...
I am wondering if something bizarre is going on, like the system clock going backwards. I heard of things like that
happening in virtual environments.
https://stackoverflow.com/questions/4801122/how-to-stop-time-from-running-backwards-on-linux
I added some debugging information to the semaphore locking code. Please update to commit
eb625af119067f6d702211542d88a28ccb57ad2c of src/system.cxx (plus a small change in include/msystem.h) and try again.
Now for each timeout it will print detailed syscall and timing information; if time goes backwards, it should catch it.
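For reference, a sketch of the kind of check such debugging code can make (illustrative only, not the actual
system.cxx change; the names and output format are assumptions modeled on the log line in the next message):

#include <sys/sem.h>    // semtimedop() (on glibc this may require _GNU_SOURCE)
#include <cerrno>
#include <cstdio>
#include <ctime>

// millisecond timestamp, truncated to 32 bits as in the printed hex values
static unsigned int millitime()
{
   struct timespec ts;
   clock_gettime(CLOCK_REALTIME, &ts);
   return (unsigned int)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
}

bool lock_with_timeout(int semid, unsigned int timeout_ms)
{
   struct sembuf op = {0, -1, SEM_UNDO};
   struct timespec to = {timeout_ms / 1000, (long)(timeout_ms % 1000) * 1000000};

   unsigned int start = millitime();
   int rc = semtimedop(semid, &op, 1, &to);
   unsigned int now = millitime();
   unsigned int dt = now - start;    // unsigned arithmetic survives wraparound

   if (rc < 0) {
      // errno 11 (EAGAIN) with dt >= timeout_ms is a genuine semaphore timeout;
      // dt much smaller than timeout_ms, or now < start, would implicate the clock
      fprintf(stderr, "semtimedop() returned %d, errno %d, start time 0x%08x, "
              "now 0x%08x, dt 0x%08x, timeout 0x%08x ms%s\n",
              rc, errno, start, now, dt, timeout_ms,
              dt >= timeout_ms ? ", SEMAPHORE TIMEOUT!" : "");
      return false;
   }
   return true;
}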
K.O.

2884 | 28 Oct 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4

> Now for each timeout it will print detailed syscall and timing information; if time goes backwards, it should catch it.
It appears that time is moving forward (dt 0x27f9 = 10233 ms, just over the 0x2710 = 10000 ms timeout):
[aroberts@sdfcdmsdaq build]$ odbedit
[ODBEdit,ERROR] [odb.cxx:2043:db_open_database,ERROR] Removed ODB client 'ODBEdit', index 0 because process pid 1617119 does
not exists
[ODBEdit,INFO] Removed open record flag from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Removed exclusive access mode from "/Experiment/Security/RPC hosts/Allowed hosts"
[ODBEdit,INFO] Corrected 1 ODB entries
[ODBEdit,INFO] Deleted entry '/System/Clients/1617119' for client 'ODBEdit' because it is not connected to ODB
[ODBEdit,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer because process pid 1617119 does not exist
[local:amy_test:S]/>ss_semaphore_wait_for: semop/semtimedop(5) returned -1, errno 11 (Resource temporarily unavailable),
start time 0xd4fd98f6, now 0xd4fdc0ef, dt 0x000027f9, timeout 0x00002710 ms, SEMAPHORE TIMEOUT!
[ODBEdit,ERROR] [odb.cxx:2489:db_lock_database,ERROR] cannot lock ODB semaphore, timeout 10000 ms, aborting...
Aborted (core dumped)

2889 | 06 Nov 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4

After following Konstantin's debugging suggestions, I thought I would try to replicate
the issue on my own computer. My hope was that I could provide instructions for
replicating the bug so that the MIDAS team could try debugging things more easily.
However, when I ran the current version of MIDAS in a Rocky 9.4 VM on my laptop (both
VMWare and VirtualBox), mserver and odbedit ran just fine (!).
I'm currently trying to find out if there's a way to compare the VMs on my machine and
the machine that's being problematic; I'll report back if I learn anything.
1822 | 12 Feb 2020 | Marius Koeppel | Forum | Difference between "Event Data Size" and "All Bank Size"

Dear all,
we are trying to build Midas events on FPGAs and send them directly to the midas
ring buffer via copy_n. According to the wiki
(https://midas.triumf.ca/MidasWiki/index.php/Event_Structure), Event Data Size is:
"The event data size contains the size of the event in bytes excluding the
header." and All Bank Size is: "Size in bytes of the following data plus the
size of the bank header". So are they actually the same, or is the "header" in
the first sentence also just the bank header?
Cheers,
Marius
1823 | 12 Feb 2020 | Stefan Ritt | Forum | Difference between "Event Data Size" and "All Bank Size"

Thanks for pointing out this error. The "All Bank Size" contains the size of all banks including their
bank headers, but NOT the global bank header itself. I modified the documentation accordingly.
If you want to study the C code which tells you how to fill these headers, look at midas.cxx line
14788.
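For building events on an FPGA, the relationship can be written down with the structures involved. This is a
simplified restatement of the definitions in midas.h (check midas.h for the authoritative versions):

typedef struct {              // global event header: excluded from both sizes
   short int    event_id;
   short int    trigger_mask;
   unsigned int serial_number;
   unsigned int time_stamp;
   unsigned int data_size;    // "Event Data Size": all bytes after this header
} EVENT_HEADER;

typedef struct {              // global bank header: counted in data_size only
   unsigned int data_size;    // "All Bank Size": all banks incl. their headers
   unsigned int flags;
} BANK_HEADER;

typedef struct {              // per-bank header (32-bit bank format)
   char         name[4];
   unsigned int type;
   unsigned int data_size;    // bank data only, excluding this header
} BANK32;

// For an event with banks b_1..b_n (bank data padded as required by the format):
//   BANK_HEADER.data_size  = sum over i of (sizeof(BANK32) + padded_size(b_i))
//   EVENT_HEADER.data_size = sizeof(BANK_HEADER) + BANK_HEADER.data_size
// i.e. the two quantities differ by exactly sizeof(BANK_HEADER) = 8 bytes.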
Best,
Stefan
1839 | 20 Feb 2020 | Konstantin Olchanski | Forum | Difference between "Event Data Size" and "All Bank Size"

> Thanks for pointing out this error. The "All Bank Size" contains the size of all banks including their
> bank headers, but NOT the global bank header itself. I modified the documentation accordingly.
>
> If you want to study the C code which tells you how to fill these headers, look at midas.cxx line
> 14788.
Also take a look at the MIDAS event parser in ROOTANA's midasio.cxx; the code is pretty clean C++:
https://bitbucket.org/tmidas/rootana/src/master/libMidasInterface/midasio.cxx
But Stefan's code in midas.cxx and in the documentation is the authoritative information.
K.O.

2622 | 23 Oct 2023 | Francesco Renga | Forum | Device with inputs and outputs

Dear all,
I'm writing a very simple device driver starting from the nulldev.cxx
example.
I define an equipment as reported at the end of this message then, if all
variables are Input variables, I define them with:
mdevice device("myEquimpent", "Input", DF_INPUT | DF_MULTITHREAD, mydevice);
device.define_var("Var1", 0.1);
device.define_var("Var2", 0.1);
...
If all variables are output variables, I define them with:
mdevice device("myEquipment", "Output", DF_OUTPUT | DF_MULTITHREAD, mydevice);
device.define_var("Var1", 0.1);
device.define_var("Var2", 0.1);
But I don't know what to do if I have mixed input and output variables in the same
device. I think I can do:
mdevice device_in("myEquipment", "Input", DF_INPUT | DF_MULTITHREAD, mydevice);
device.define_var("Var1", 0.1);
mdevice device_out("myEquipment", "Output", DF_OUTPUT | DF_MULTITHREAD,
mydevice);
device.define_var("Var2", 0.1);
but in this case, inside mydevice.cxx, I don't know how to distinguish Var1 and
Var2, because they are both identified as channel 0.
Do you have any suggestion?
Thank you,
Francesco
-------------------------------------------------------------------
{"SourceMotor", /* equipment name */
{7, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_SLOW, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_ALWAYS, /* read when running and on transitions */
60000, /* read every 60 sec */
0, /* stop run after this event limit */
0, /* number of sub events */
1, /* log history every event */
"", "", ""} ,
cd_multi_read, /* readout routine */
cd_multi, /* class driver main routine */
},

2623 | 24 Oct 2023 | Stefan Ritt | Forum | Device with inputs and outputs

The "multi" class driver takes care of that. It properly calls the SET and GET functions
with the correct index. The code for that is in multi.cxx:105:
device_driver(m_info->driver_input[i], CMD_GET,
i - m_info->channel_offset_input[i],
&m_info->var_input[i]);
The "channel_offset_input" and "channel_offset_output" store the first index of the
channel in the overall ODB array (where inputs and outputs are staggered together), so
the device_driver is always called with an index 0...n each for input and output, but
with different commands CMD_GET and CMD_SET. You can take the mscbdev.cxx device driver
as a working example.
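So inside mydevice.cxx it is the command, not the channel number alone, that tells the two apart: CMD_GET index 0 is
the first input ("Var1") and CMD_SET index 0 is the first output ("Var2"). A minimal sketch of the dispatch, modeled
on nulldev.cxx (check that file for the authoritative argument list):

#include "midas.h"
#include <cstdarg>

INT mydevice(INT cmd, ...)
{
   va_list argptr;
   va_start(argptr, cmd);
   INT status = FE_SUCCESS;

   switch (cmd) {
   case CMD_GET: {                        // called for input channels only
      void  *info    = va_arg(argptr, void *);
      INT    channel = va_arg(argptr, INT);     // 0 here means "Var1"
      float *pvalue  = va_arg(argptr, float *);
      *pvalue = 0;                        // read the input from hardware here
      break;
   }
   case CMD_SET: {                        // called for output channels only
      void *info    = va_arg(argptr, void *);
      INT   channel = va_arg(argptr, INT);      // 0 here means "Var2"
      float value   = (float) va_arg(argptr, double);
      // write 'value' to the hardware here
      break;
   }
   default:
      break;
   }
   va_end(argptr);
   return status;
}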
Stefan

2309 | 16 Dec 2021 | Zaher Salman | Forum | Device driver for modbus

Dear all, does anyone have an example of a device driver using Modbus or Modbus TCP to communicate with a device, and would you be willing to share it? Thanks.