ID | Date | Author | Topic | Subject

1342 | 28 Feb 2018 | Thomas Lindner | Bug Report | Problems with start program button with new mhttpd webpages

Pierre Gorel identified a problem with the 'start program' button on the new version of MIDAS that uses the
mjsonrpc functions for building the webpages. In particular, he tracked the problem down to some
questionable std::string / char* handling.
Interestingly, the particular 'start program' problem was seen on Pierre's Ubuntu 16.04.3 LTS machine, but
could not be reproduced on RHEL-7 or macOS 10.13 machines. So the manifestation of the code error
seemed to depend on the compiler.
The problem should now be fixed in the HEAD version of MIDAS. If you are using the newer MIDAS (since last
summer), particularly on Ubuntu, then you may want to update your installation.
Details of the problem are on the bitbucket issue tracker:
https://bitbucket.org/tmidas/midas/issues/132/corruption-of-char-in-mjsonrpccxx

740 | 17 Jan 2011 | Andreas Suter | Bug Report | Problems with midas history SVN 4936

I have the following problems after updating to midas SVN 4936: the history
system (web-page via mhttpd) seems to stop working. I checked the history files
themselves and they are indeed written, except that the event IDs are not the
same anymore (I mean the ones defined under /Equipment/XXX/Common/Event ID);
rather, mlogger seems to choose an ID by itself.
Currently the only way to get things working again was to recompile midas,
adding -DOLD_HISTORY to the CFLAGS, which is troublesome since it is likely to be
forgotten with the next SVN update. When looking into the SVN I have the
impression there is something going on concerning the history system, however
I couldn't find any documentation.
What is the best practice for the future, in order not to run into any problems
but still being able to look at the old history (also from within the web-page
via mhttpd)?

742 | 13 Feb 2011 | Lee Pool | Bug Report | Problems with midas history SVN 4936

> I have the following problems after updating to midas SVN 4936: the history
> system (web-page via mhttpd) seems to stop working. I checked the history files
> themselves and they are indeed written, except that the event IDs are not the
> same anymore (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> rather, mlogger seems to choose an ID by itself.
> Currently the only way to get things working again was to recompile midas,
> adding -DOLD_HISTORY to the CFLAGS, which is troublesome since it is likely to be
> forgotten with the next SVN update. When looking into the SVN I have the
> impression there is something going on concerning the history system, however
> I couldn't find any documentation.
> What is the best practice for the future, in order not to run into any problems
> but still being able to look at the old history (also from within the web-page
> via mhttpd)?
Hi...
Do you mind giving a little more detail? We might have the same issue, where we got
complaints that midas history stops working after a certain time.

L

746 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | Problems with midas history SVN 4936

> I have the following problems after updating to midas SVN 4936: the history
> system (web-page via mhttpd) seems to stop working. I checked the history files
> themselves and they are indeed written, except that the event IDs are not the
> same anymore (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> rather, mlogger seems to choose an ID by itself.
Yes, I found the problem - it was introduced around svn rev 4827 in September 2010.
It is fixed now, please do this:
1) update history_midas.cxx to the latest svn rev 4979
1a) do NOT update any other files - update only history_midas.cxx
2) rebuild mlogger (rebuilding everything will do no harm, but no good either)
3) odbedit save odb.xml
4) in odb, remove /history/events and /history/tags (you can also set "/History/DisableTags" to "y")
5) restart mlogger
6) observe that odb /history/events now has event ids same as equipment ids
7) restart your frontend, observe that history file is growing
8) use mhdump to observe that history is now written with correct event id
9) go to the mhttpd history plot, you should see the new data coming in. Plot history on the "1 year" scale; you
should see the old data, and a gap where data was written with the wrong event id
10) I should still have an mhrewrite program sitting somewhere that can change the event ids inside midas
history files; if you have many data files with the wrong event id, let me know and I will find this program and tell you
how to use it to repair your data files.
> Currently the only way to get things working again was to recompile midas,
> adding -DOLD_HISTORY to the CFLAGS, which is troublesome since it is likely to be
> forgotten with the next SVN update.
Yes, I am glad you found OLD_HISTORY, I kept it just in case some breakage like this happens. I will still
keep it around until the dust settles.
> When looking into the SVN I have the impression there is something going on concerning the history
> system, however I couldn't find any documentation.
Yes, you found the right stuff, and it is partially documented. mlogger uses /History/Events to map history
event names (equipment names in your case) to history event ids. But in your case, the wrong event id has
been assigned by mlogger so nothing worked right. As a bonus, I now see an inconsistency between the event_id
code remaining in mlogger (which is not used) and the event_id code in history_midas (which *is* used). I will be
straightening this stuff out over the next few days.
I hope my correction to history_midas.cxx is good enough to get you going for now.
> What is the best practice for the future, in order not to run into any problems
> but still being able to look at the old history (also from within the web-page
> via mhttpd)?
Personally, I think that the midas history storage into binary files is not robust enough
when facing changes to equipment and event ids, renaming and deleting of stuff, etc. There
are other limitations as well, e.g. the 16-bit history event id.
The newly implemented SQL history storage (uses ODBC layer, MySQL supported, PgSQL partially
implemented) does not have any of these problems and seems to work well enough
for T2K/ND280. Sometimes MySQL history is even faster when making history plots in mhttpd.
I am now thinking about implementing SQL history storage in SQLite files, which will not have
any of these problems either. Performance and robustness against database corruption remain a question, though.
K.O.

747 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | Problems with midas history SVN 4936

It looks like the email notices did not go out the first time. Please read my replies in the previous message (746). K.O.

748 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | Problems with midas history SVN 4936

>
> Do you mind giving a little more detail? We might have the same issue, where we got
> complaints that midas history stops working after a certain time.
>
Yes, please do supply more information. What problems do *you* see?
K.O.

751 | 16 Feb 2011 | Lee Pool | Bug Report | Problems with midas history SVN 4936

> >
> > Do you mind giving a little more detail? We might have the same issue, where we got
> > complaints that midas history stops working after a certain time.
> >
>
>
> Yes, please do supply more information. What problems do *you* see?
>
>
> K.O.
Hi.
Uhm, mine might be completely unrelated to this, but it just so happened that rev.
4936 was the one used in a recent experiment, in which there were complaints about
the responsiveness of the history plots. The history plots would take up to 30 seconds
to respond if the run was about 30-40 minutes old. When the run was less than 10 minutes
old, the history plots were responsive to within 1-2 seconds.

I received rather limited information regarding this problem, hence my hesitation to
state it as a *problem* or bug. It could be something related to hardware/beam etc.

Lee

752 | 17 Feb 2011 | Stefan Ritt | Bug Report | Problems with midas history SVN 4936

> Uhm, mine might be completely unrelated to this, but it just so happened that rev.
> 4936 was the one used in a recent experiment, in which there were complaints about
> the responsiveness of the history plots. The history plots would take up to 30 seconds
> to respond if the run was about 30-40 minutes old. When the run was less than 10 minutes
> old, the history plots were responsive to within 1-2 seconds.
Ah, that rings a bell. How big are your history files on disk? How much RAM do you have?
What I see in our experiment is that Linux buffers everything I write to disk in a cache
located in RAM. But this cache is limited, so after a certain time it is overwritten. This is
handled by the OS; the application (mlogger in this case) has no influence on it.
Let's say you write 5 MB/minute of history, and your cache is 50 MB large. Then after 10
minutes you can still read the history data from the RAM cache which is ~10x faster than
your disk. But your older history data (30-40 min) is flushed out of the cache and has to be
re-read from disk. A typical symptom of this is that the first time you display the plot it
takes maybe 30 seconds, but if you do a "reload" of your page it goes much faster; in that
case the contents are cached again in RAM. If you observe this, you can almost be certain
that you see the "too small RAM cache" problem. In that case just add RAM and things should
run better (I use 16 GByte in my machine).
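
A quick way to check is to time reading one of your history files twice in a row. Here is a little
sketch of such a test (the file name is just an example, put one of your own files there):

   #include <chrono>
   #include <cstdio>
   #include <vector>

   // Read a file once and report how long it took.
   static double read_file_seconds(const char *name)
   {
      auto t0 = std::chrono::steady_clock::now();
      FILE *f = fopen(name, "rb");
      if (!f)
         return -1;
      std::vector<char> buf(1024 * 1024);
      while (fread(buf.data(), 1, buf.size(), f) > 0)
         ;   // just pull the data through the page cache
      fclose(f);
      auto t1 = std::chrono::steady_clock::now();
      return std::chrono::duration<double>(t1 - t0).count();
   }

   int main()
   {
      const char *name = "110216.hst";   // example history file name
      printf("cold read: %.2f s\n", read_file_seconds(name));
      printf("warm read: %.2f s\n", read_file_seconds(name));   // much faster if the file fits in the cache
   }

If the second read is dramatically faster than the first, the file fits in the RAM cache; if both
reads are equally slow, the file is too big for the cache and every plot has to go to disk.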
Best regards,
Stefan

784 | 29 Feb 2012 | Konstantin Olchanski | Bug Report | Problem with semaphores

Hi there! In the T2K/ND280 experiment in Japan, we keep having problems with MIDAS locking (probably
of ODB). The symptoms are: some program reports a timeout waiting for the ODB lock, then all programs
eventually die with this same error. Complete system meltdown. This does not look like the deadlock
between locks for ODB, cm_msg and the data buffers that I looked into last year. It looks more like
somebody locks ODB, dies and the Linux kernel fails to unlock the lock (via the SYSV "sem undo"
function). But it is hard to confirm, hence this message:
The implementation of semaphores in MIDAS (used for locking ODB and the shared memory data buffers)
uses the straight SYSV semaphore API - which lacks basic debugging features - there is no tracking of
who locked what and when, so if anything goes wrong at all, e.g. we are confronted with a timeout
waiting for the ODB lock, the only corrective action possible is to kill all MIDAS clients and tell the user to
start from scratch. There is no additional information available from the SYSV semaphore API to identify
which MIDAS program caused the fault.
The POSIX semaphore API is even worse - no debugging features are available, *and* if a program dies
while holding a lock, the lock stays locked forever (everybody else will wait forever or see a semaphore
timeout, and then what?).
So I am looking for an "advanced semaphore library" to use in MIDAS. In addition to the boring functions
of reliable locking and unlocking, it should support:
- wait with timeout
- remember who is holding the lock
- detect that the process holding the lock is dead and take corrective action (automatic unlock as done by
SYSV semaphores, call back to user code where we can clean up and unlock ourselves, etc)
- maybe permit recursive locking (not really required as ODB locks are already made recursive "by hand")
- maybe remember some of the locking history (so we can dump it into a log file when we detect a
deadlock or other lock malfunction).
A quick Google search only finds sundry wrappers for SYSV and POSIX semaphores. How they deal with the
problem of processes locking the semaphore and dying remains a mystery to me (other than telling users
to remove the Ctrl-C button from their keyboard). BTW, we have seen this problem with several
commercial applications that use SYSV semaphores but forget to enable the SEM_UNDO function.
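
(For illustration only: Linux "robust" process-shared pthread mutexes appear to cover at least the
timeout and dead-owner cases. A hypothetical sketch, with made-up names, of the kind of primitive
I have in mind:)

   #include <pthread.h>
   #include <cerrno>
   #include <cstdio>
   #include <ctime>
   #include <unistd.h>

   // Hypothetical sketch: a process-shared, robust mutex that would live in
   // shared memory (e.g. next to the ODB shared memory). If the owner dies
   // while holding the lock, the next locker gets EOWNERDEAD instead of
   // hanging forever.
   struct adv_lock {
      pthread_mutex_t mutex;
      int owner_pid;   // "remember who is holding the lock"
   };

   int adv_lock_init(adv_lock *lk)
   {
      pthread_mutexattr_t attr;
      pthread_mutexattr_init(&attr);
      pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
      pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
      lk->owner_pid = 0;
      return pthread_mutex_init(&lk->mutex, &attr);
   }

   // Wait with timeout; recover the lock if the previous owner died.
   int adv_lock_lock(adv_lock *lk, int timeout_sec)
   {
      struct timespec ts;
      clock_gettime(CLOCK_REALTIME, &ts);
      ts.tv_sec += timeout_sec;
      int status = pthread_mutex_timedlock(&lk->mutex, &ts);
      if (status == EOWNERDEAD) {
         // the owner died while holding the lock: we own it now, can repair
         // any half-updated data, then mark the mutex consistent again
         fprintf(stderr, "lock owner pid %d died, recovering\n", lk->owner_pid);
         pthread_mutex_consistent(&lk->mutex);
         status = 0;
      }
      if (status == 0)
         lk->owner_pid = getpid();
      return status;   // ETIMEDOUT if we could not get the lock in time
   }

This still gives no locking history or deadlock dumps, but it would beat silently waiting forever.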
Anyhow, if anybody can suggest such an advanced locking library it would be great. Will save me the
effort of writing one.
K.O.

785 | 01 Mar 2012 | Stefan Ritt | Bug Report | Problem with semaphores

> Anyhow, if anybody can suggest such an advanced locking library it would be great. Will save me the
> effort of writing one.
Hi Konstantin,
yes there is a good way, which I used during development of the buffer manager functions. Put into each sm_xxx function a cm_msg(M_DEBUG, ...) to
generate a debug system message. These go only into the SYSMSG ring buffer and thus are lightweight and don't influence the timing much. You can
keep odbedit open to see these messages, but there is also another way: you can write a little program which dumps the whole SYSMSG buffer, and which
you can call when the lockup happens. You then look "backwards" in time and get all messages stored there, depending on the size of the SYSMSG buffer of
course. Of course this only works if the lockup does not happen on the SYSMSG buffer itself. In that case you have to produce M_LOG messages, which are
written to the logging file. This will influence the timing slightly (the file might grow rapidly) but you are independent of semaphores.
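
The instrumentation could look roughly like this (a sketch only; the wrapper name is made up, and I
am assuming the usual MDEBUG convenience macro from midas.h):

   #include "midas.h"   // cm_msg(), MDEBUG, ss_semaphore_wait_for()
   #include <unistd.h>

   // Sketch of an instrumented semaphore wait: each attempt and each
   // acquisition leaves a trace message in the SYSMSG ring buffer.
   INT my_semaphore_wait(HNDLE semaphore_handle, INT timeout_ms)
   {
      cm_msg(MDEBUG, "my_semaphore_wait", "pid %d waiting for semaphore %d",
             getpid(), semaphore_handle);
      INT status = ss_semaphore_wait_for(semaphore_handle, timeout_ms);
      cm_msg(MDEBUG, "my_semaphore_wait", "pid %d got semaphore %d, status %d",
             getpid(), semaphore_handle, status);
      return status;
   }

When the system locks up, the last "waiting for" message without a matching "got" message points
at the guilty process.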
The interesting thing is that in the MEG experiment (9 Front-ends, Event Builder, Logger, Lazylogger, ...) we run for months without any lockup. So I
suspect that in your case it is caused by a program only you are using.
Best regards,
Stefan

2475 | 26 Apr 2023 | Martin Mueller | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

Hi
We have a problem with running midas frontends when they should connect to an experiment on a different machine using the -h option. Starting them locally works fine. The firewall is off on both systems, "Enable non-localhost RPC" and "Disable RPC hosts check" are set to 'yes', and mserver is running on the machine that we want to connect to.
The error message looks like this:
...
Connect to experiment Mu3e on host 10.32.113.210...
OK
Init hardware...
terminate called after throwing an instance of 'mexception'
what():
/home/mu3e/midas/include/odbxx.h:1102: Wrong key type in XML file
Stack trace:
1 0x00000000000042D828 (null) + 4380712
2 0x00000000000048ED4D midas::odb::odb_from_xml(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 605
3 0x0000000000004999BD midas::odb::odb(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 317
4 0x000000000000495383 midas::odb::read_key(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 1459
5 0x0000000000004971E3 midas::odb::connect(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, bool) + 259
6 0x000000000000497636 midas::odb::connect(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool) + 502
7 0x00000000000049883B midas::odb::connect_and_fix_structure(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 171
8 0x0000000000004385EF setup_odb() + 8351
9 0x00000000000043B2E6 frontend_init() + 22
10 0x000000000000433304 main + 1540
11 0x0000007F8C6FE3724D __libc_start_main + 239
12 0x000000000000433F7A _start + 42
Aborted (core dumped)
We have the same problem for all our frontends: started locally they work, but starting them locally with ./frontend -h localhost also reproduces the error above.
The error can also be reproduced with the odbxx_test.cxx example in the midas repo by replacing line 22 in midas/examples/odbxx/odbxx_test.cxx (cm_connect_experiment(NULL, NULL, "test", NULL);) with cm_connect_experiment("localhost", "Mu3e", "test", NULL); (put the name of your experiment instead of "Mu3e").
Running odbxx_test locally then gives us the same error as our other frontends.
Thanks in advance,
Martin

2479 | 27 Apr 2023 | Konstantin Olchanski | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

Looks like your MIDAS is built without debug information (-O2 -g), the stack trace does not have file names and line numbers. Please rebuild with debug information and report the stack trace. Thanks. K.O.
> Connect to experiment Mu3e on host 10.32.113.210...
> OK
> Init hardware...
> terminate called after throwing an instance of 'mexception'
> what():
> /home/mu3e/midas/include/odbxx.h:1102: Wrong key type in XML file
> Stack trace:
> 1 0x00000000000042D828 (null) + 4380712
> 2 0x00000000000048ED4D midas::odb::odb_from_xml(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 605
> 3 0x0000000000004999BD midas::odb::odb(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 317
> 4 0x000000000000495383 midas::odb::read_key(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 1459
> 5 0x0000000000004971E3 midas::odb::connect(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, bool) + 259
> 6 0x000000000000497636 midas::odb::connect(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool) + 502
> 7 0x00000000000049883B midas::odb::connect_and_fix_structure(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 171
> 8 0x0000000000004385EF setup_odb() + 8351
> 9 0x00000000000043B2E6 frontend_init() + 22
> 10 0x000000000000433304 main + 1540
> 11 0x0000007F8C6FE3724D __libc_start_main + 239
> 12 0x000000000000433F7A _start + 42
>
> Aborted (core dumped)
K.O.

2484 | 28 Apr 2023 | Martin Mueller | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

> Looks like your MIDAS is built without debug information (-O2 -g), the stack trace does not have file names and line numbers. Please rebuild with debug information and report the stack trace. Thanks. K.O.
>
> > Connect to experiment Mu3e on host 10.32.113.210...
> > OK
> > Init hardware...
> > terminate called after throwing an instance of 'mexception'
> > what():
> > /home/mu3e/midas/include/odbxx.h:1102: Wrong key type in XML file
> > Stack trace:
> > 1 0x00000000000042D828 (null) + 4380712
> > 2 0x00000000000048ED4D midas::odb::odb_from_xml(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 605
> > 3 0x0000000000004999BD midas::odb::odb(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 317
> > 4 0x000000000000495383 midas::odb::read_key(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 1459
> > 5 0x0000000000004971E3 midas::odb::connect(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool, bool) + 259
> > 6 0x000000000000497636 midas::odb::connect(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool) + 502
> > 7 0x00000000000049883B midas::odb::connect_and_fix_structure(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 171
> > 8 0x0000000000004385EF setup_odb() + 8351
> > 9 0x00000000000043B2E6 frontend_init() + 22
> > 10 0x000000000000433304 main + 1540
> > 11 0x0000007F8C6FE3724D __libc_start_main + 239
> > 12 0x000000000000433F7A _start + 42
> >
> > Aborted (core dumped)
>
> K.O.
As I said, we can easily reproduce this with midas/examples/odbxx/odbxx_test.cxx (with cm_connect_experiment changed to "localhost").
Stack trace of odbxx_test with line numbers:
Set ODB key "/Test/Settings/String Array 10[0...9]" = ["","","","","","","","","",""]
Created ODB key "/Test/Settings/Large String Array 10"
Set ODB key "/Test/Settings/Large String Array 10[0...9]" = ["","","","","","","","","",""]
[test,ERROR] [system.cxx:5104:recv_tcp2,ERROR] unexpected connection closure
[test,ERROR] [system.cxx:5158:ss_recv_net_command,ERROR] error receiving network command header, see messages
[test,ERROR] [midas.cxx:13900:rpc_call,ERROR] routine "db_copy_xml": error, ss_recv_net_command() status 411, program abort
Program received signal SIGABRT, Aborted.
0x00007ffff6665cdb in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: zypper install libgcc_s1-debuginfo-11.3.0+git1637-150000.1.9.1.x86_64 libstdc++6-debuginfo-11.2.1+git610-1.3.9.x86_64 libz1-debuginfo-1.2.11-3.24.1.x86_64
(gdb) bt
#0 0x00007ffff6665cdb in raise () from /lib64/libc.so.6
#1 0x00007ffff6667375 in abort () from /lib64/libc.so.6
#2 0x0000000000431bba in rpc_call (routine_id=11249) at /home/labor/midas/src/midas.cxx:13904
#3 0x0000000000460c4e in db_copy_xml (hDB=1, hKey=1009608, buffer=0x7ffff7e9c010 "", buffer_size=0x7fffffffadbc, header=false) at /home/labor/midas/src/odb.cxx:8994
#4 0x000000000046fc4c in midas::odb::odb_from_xml (this=0x7fffffffb3f0, str=...) at /home/labor/midas/src/odbxx.cxx:133
#5 0x000000000040b3d9 in midas::odb::odb (this=0x7fffffffb3f0, str=...) at /home/labor/midas/include/odbxx.h:605
#6 0x000000000040b655 in midas::odb::odb (this=0x7fffffffb3f0, s=0x4a465a "/Test/Settings") at /home/labor/midas/include/odbxx.h:629
#7 0x0000000000407bba in main () at /home/labor/midas/examples/odbxx/odbxx_test.cxx:56
(gdb)

2490 | 28 Apr 2023 | Konstantin Olchanski | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

> As I said, we can easily reproduce this with midas/examples/odbxx/odbxx_test.cxx (with cm_connect_experiment changed to "localhost").
> [test,ERROR] [system.cxx:5104:recv_tcp2,ERROR] unexpected connection closure
> [test,ERROR] [system.cxx:5158:ss_recv_net_command,ERROR] error receiving network command header, see messages
> [test,ERROR] [midas.cxx:13900:rpc_call,ERROR] routine "db_copy_xml": error, ss_recv_net_command() status 411, program abort
ok, cool. looks like we crashed the mserver. either run mserver attached to gdb or enable mserver core dumps, we need its stack trace;
the correct stack trace should be rooted in the handler for db_copy_xml.
but most likely odbxx is asking for more data than can be returned through the MIDAS RPC.
what is the ODB key passed to db_copy_xml() and how much data is in ODB at that key? (odbedit "du", right?).
K.O.

2491 | 28 Apr 2023 | Martin Mueller | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

> > As I said, we can easily reproduce this with midas/examples/odbxx/odbxx_test.cxx (with cm_connect_experiment changed to "localhost").
> > [test,ERROR] [system.cxx:5104:recv_tcp2,ERROR] unexpected connection closure
> > [test,ERROR] [system.cxx:5158:ss_recv_net_command,ERROR] error receiving network command header, see messages
> > [test,ERROR] [midas.cxx:13900:rpc_call,ERROR] routine "db_copy_xml": error, ss_recv_net_command() status 411, program abort
>
> ok, cool. looks like we crashed the mserver. either run mserver attached to gdb or enable mserver core dumps, we need its stack trace;
> the correct stack trace should be rooted in the handler for db_copy_xml.
>
> but most likely odbxx is asking for more data than can be returned through the MIDAS RPC.
>
> what is the ODB key passed to db_copy_xml() and how much data is in ODB at that key? (odbedit "du", right?).
>
> K.O.
Ok. Maybe I have to make this more clear: ANY odbxx access of a remote odb reproduces this error for us, on multiple machines.
It does not matter how much data odbxx is asking for.
Something as simple as this reproduces the error, asking for a single integer:
#include "midas.h"
#include "odbxx.h"

int main() {
   cm_connect_experiment("localhost", "Mu3e", "test", NULL);
   midas::odb o = {
      {"Int32 Key", 42}
   };
   o.connect("/Test/Settings");
   cm_disconnect_experiment();
   return 1;
}
At the same time, this runs fine:
#include "midas.h"
#include "odbxx.h"

int main() {
   cm_connect_experiment(NULL, NULL, "test", NULL);
   midas::odb o = {
      {"Int32 Key", 42}
   };
   o.connect("/Test/Settings");
   cm_disconnect_experiment();
   return 1;
}
In both cases mserver does not crash. I do not have a stack trace, and there is also no error produced by mserver.

Last year we did not have these problems with the same midas frontends (for example, in midas commit 9d2ef471 the code from above runs
fine). I am trying to pinpoint the exact commit where this stopped working.

2492 | 28 Apr 2023 | Konstantin Olchanski | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

> > > As I said, we can easily reproduce this with midas/examples/odbxx/odbxx_test.cxx
> > ok, cool. looks like we crashed the mserver.
> Ok. Maybe I have to make this more clear: ANY odbxx access of a remote odb reproduces this error for us, on multiple machines.
> It does not matter how much data odbxx is asking for.
> midas commit 9d2ef471 the code from above runs fine
so, a regression. ouch.
if core dumps are turned off, you will not "see" the mserver crash, because the main mserver is still running. it's the mserver forked to
serve your RPC connection that crashes.
> int main() {
> cm_connect_experiment("localhost", "Mu3e", "test", NULL);
> midas::odb o = {
> {"Int32 Key", 42}
> };
> o.connect("/Test/Settings");
> cm_disconnect_experiment();
> return 1;
> }
to debug this, after cm_connect_experiment() one has to put ::sleep(1000000000); (not that big, obviously),
then while it is sleeping do "ps -efw | grep mserver" - this will show the mserver for the test program.
connect to it with gdb, wait for ::sleep() to finish and o.connect() to crash; with luck gdb will show
the crash stack trace in the mserver.
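
in other words, the test program modified for this kind of debugging might look like this (a sketch;
the sleep length is arbitrary):

   #include <unistd.h>
   #include "midas.h"
   #include "odbxx.h"

   int main() {
      cm_connect_experiment("localhost", "Mu3e", "test", NULL);
      ::sleep(600);   // time to "ps -efw | grep mserver" and attach gdb to our mserver
      midas::odb o("/Test/Settings");   // the call expected to crash the mserver
      cm_disconnect_experiment();
      return 0;
   }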
so easy to debug? this is why back in the 1970s clever people invented core dumps, only to have
even more clever people in the 2020s turn them off and generally make debugging more difficult (attaching
gdb to a running program is also disabled-by-default in some recent linuxes).
rant-off.
to check if core dumps work, do "killall -7 mserver". to enable core dumps on ubuntu, see here:
https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu
last known-working point is:
commit 9d2ef471c4e4a5a325413e972862424549fa1ed5
Author: Ben Smith <bsmith@triumf.ca>
Date: Wed Jul 13 14:45:28 2022 -0700
Allow odbxx to handle connecting to "/" (avoid trying to read subkeys as "//Equipment" etc.)
K.O.

2500 | 02 May 2023 | Niklaus Berger | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

Thanks for all the helpful hints. When finally managing to evade all timeouts and attach the debugger at just the right moment, we find that we get a segfault in mserver at L827:
case RPC_DB_COPY_XML:
   status = db_copy_xml(CHNDLE(0), CHNDLE(1), CSTRING(2), CPINT(3), CBOOL(4));
Some printf debugging then pointed us to the fact that the culprit is the pointer de-referencing in CBOOL(4). This in turn can be traced back to mrpc.cxx L282 ff, where the line with the arrow was missing:
{RPC_DB_COPY_XML, "db_copy_xml",
{{TID_INT32, RPC_IN},
{TID_INT32, RPC_IN},
{TID_ARRAY, RPC_OUT | RPC_VARARRAY},
{TID_INT32, RPC_IN | RPC_OUT},
-> {TID_BOOL, RPC_IN},
{0}}},
If we put that in, the mserver process completes peacefully and we get a segfault in the client ("Wrong key type in XML file"), which we will attempt to debug next. Shall I create a pull request for the additional RPC argument or will you just fix this on the fly?

2501 | 02 May 2023 | Niklaus Berger | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option

And now we also fixed the client segfault: odb.cxx L8992 also needs to know about the header:
if (rpc_is_remote())
   return rpc_call(RPC_DB_COPY_XML, hDB, hKey, buffer, buffer_size, header);
(the last argument was missing before).

2502 | 02 May 2023 | Stefan Ritt | Forum | Problem with running midas odbxx frontends on a remote machine using the -h option
> Shall I create a pull request for the additional RPC argument or will you just fix this on the fly?
Just fix it on the fly yourself. It's an obvious bug, so please commit to develop.
Stefan