  756 | 15 Apr 2011 | Jonathan Toebbe | Forum | Can't get example frontend to talk to khyt1331 kernel driver
I'm brand new to MIDAS, and to C system programming in general, so please be
gentle. I've compiled and installed MIDAS 2.3.0 on Ubuntu 10.04 LTS. I've built
and installed the kernel driver, khyt1331.ko. It appears to be working, since
the camactest and esonetest programs included with the driver work just fine.

So I attempted to build the example experiment distributed with MIDAS, with the
following changes to the Makefile:

DRV_DIR   = $(MIDASSYS)/drivers/kernel/khyt1331_26

and

DRIVER = camac

The programs build without error, but when I try to start the frontend, I get:

$ ./frontend
Frontend name          :     CSM-Nuclear Portable DAQ Frontend
Event buffer size      :     1000000
User max event size    :     10000
User max frag. size    :     5242880
# of events per buffer :     100

Connect to experiment...
*** buffer overflow detected ***: ./frontend terminated
======= Backtrace: =========
/lib/tls/i686/cmov/libc.so.6(__fortify_fail+0x50)[0x6de390]
/lib/tls/i686/cmov/libc.so.6(+0xe12ca)[0x6dd2ca]
/lib/tls/i686/cmov/libc.so.6(__strcpy_chk+0x44)[0x6dc644]
./frontend[0x805611f]
./frontend[0x806f656]
./frontend[0x8053d82]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0x612bd6]
./frontend[0x804bb81]
======= Memory map: ========
00110000-0012d000 r-xp 00000000 08:05 7471187    /lib/libgcc_s.so.1
0012d000-0012e000 r--p 0001c000 08:05 7471187    /lib/libgcc_s.so.1
0012e000-0012f000 rw-p 0001d000 08:05 7471187    /lib/libgcc_s.so.1
00264000-00277000 r-xp 00000000 08:05 7603242    /lib/tls/i686/cmov/libnsl-2.11.1.so
00277000-00278000 r--p 00012000 08:05 7603242    /lib/tls/i686/cmov/libnsl-2.11.1.so
00278000-00279000 rw-p 00013000 08:05 7603242    /lib/tls/i686/cmov/libnsl-2.11.1.so
00279000-0027b000 rw-p 00000000 00:00 0 
002db000-002dd000 r-xp 00000000 08:05 7603265    /lib/tls/i686/cmov/libutil-2.11.1.so
002dd000-002de000 r--p 00001000 08:05 7603265    /lib/tls/i686/cmov/libutil-2.11.1.so
002de000-002df000 rw-p 00002000 08:05 7603265    /lib/tls/i686/cmov/libutil-2.11.1.so
003b1000-003c6000 r-xp 00000000 08:05 7603257    /lib/tls/i686/cmov/libpthread-2.11.1.so
003c6000-003c7000 r--p 00014000 08:05 7603257    /lib/tls/i686/cmov/libpthread-2.11.1.so
003c7000-003c8000 rw-p 00015000 08:05 7603257    /lib/tls/i686/cmov/libpthread-2.11.1.so
003c8000-003ca000 rw-p 00000000 00:00 0 
004ea000-004f1000 r-xp 00000000 08:05 7603261    /lib/tls/i686/cmov/librt-2.11.1.so
004f1000-004f2000 r--p 00006000 08:05 7603261    /lib/tls/i686/cmov/librt-2.11.1.so
004f2000-004f3000 rw-p 00007000 08:05 7603261    /lib/tls/i686/cmov/librt-2.11.1.so
005fb000-005fc000 r-xp 00000000 00:00 0          [vdso]
005fc000-0074f000 r-xp 00000000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
0074f000-00750000 ---p 00153000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
00750000-00752000 r--p 00153000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
00752000-00753000 rw-p 00155000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
00753000-00756000 rw-p 00000000 00:00 0 
00783000-00796000 r-xp 00000000 08:05 7471302    /lib/libz.so.1.2.3.3
00796000-00797000 r--p 00012000 08:05 7471302    /lib/libz.so.1.2.3.3
00797000-00798000 rw-p 00013000 08:05 7471302    /lib/libz.so.1.2.3.3
008ab000-008c6000 r-xp 00000000 08:05 7471129    /lib/ld-2.11.1.so
008c6000-008c7000 r--p 0001a000 08:05 7471129    /lib/ld-2.11.1.so
008c7000-008c8000 rw-p 0001b000 08:05 7471129    /lib/ld-2.11.1.so
008e4000-00908000 r-xp 00000000 08:05 7603239    /lib/tls/i686/cmov/libm-2.11.1.so
00908000-00909000 r--p 00023000 08:05 7603239    /lib/tls/i686/cmov/libm-2.11.1.so
00909000-0090a000 rw-p 00024000 08:05 7603239    /lib/tls/i686/cmov/libm-2.11.1.so
08048000-0809d000 r-xp 00000000 08:11 20318114   /home/midas/online/test/frontend
0809d000-0809e000 r--p 00055000 08:11 20318114   /home/midas/online/test/frontend
0809e000-080a3000 rw-p 00056000 08:11 20318114   /home/midas/online/test/frontend
080a3000-080c5000 rw-p 00000000 00:00 0 
0835f000-08380000 rw-p 00000000 00:00 0          [heap]
b7881000-b7884000 rw-p 00000000 00:00 0 
b7893000-b7895000 rw-p 00000000 00:00 0 
bf938000-bf94d000 rw-p 00000000 00:00 0          [stack]
Aborted

Please help me figure out what's going wrong!

Thank you,
Jon
  755 | 30 Mar 2011 | Stefan Ritt | Forum | How large does "bank32" support?
> Reading an FADC buffer often needs a large buffer size, especially when several
> FADCs work together. I want to know how large a bank32 can be.

The "32" in bank32 means 32 bits, so such a bank can hold up to 2^32 bytes = 4 GBytes; hope that is enough for your ADC.
A "normal" bank has only a 16-bit size field in its header, so it can hold at most 64 kBytes. But its header is smaller,
so for small banks the overhead is lower.
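
For reference, a frontend requests 32-bit banks by calling bk_init32() instead of bk_init() in its readout
routine. A minimal sketch (the bank name and payload are made up for this example):

#include "midas.h"

INT read_fadc_event(char *pevent, INT off)
{
   DWORD *pdata;

   bk_init32(pevent);   /* 32-bit bank headers instead of bk_init() */
   bk_create(pevent, "FADC", TID_DWORD, (void **)&pdata);
   /* ... copy the FADC samples into *pdata++ here ... */
   bk_close(pevent, pdata);
   return bk_size(pevent);
}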
  754 | 30 Mar 2011 | Exaos Lee | Forum | How large does "bank32" support?
Reading an FADC buffer often needs a large buffer size, especially when several
FADCs work together. I want to know how large a bank32 can be.
  753 | 28 Feb 2011 | Konstantin Olchanski | Info | javascript example experiment
I just committed a MIDAS example for using most of the mhttpd html and javascript functions: ODBGet(),
ODBSet(), ODBRpc() & co.

Please check out .../midas/examples/javascript1 and follow the instructions in the README file.

For your enjoyment, the example html file is attached to this message.

(To use the function ODBRpc_rev1 - the midas rpc call that returns data to the web page - you need
mhttpd svn rev 4994, committed today. Other functions should work with any revision of mhttpd.)

svn rev 4992.
K.O.
Attachment 1: example.html
  752 | 17 Feb 2011 | Stefan Ritt | Bug Report | Problems with midas history SVN 4936
> uhm, mine might be completely unrelated to this, but it just so happened that rev.
> 4936 was the one used in a recent experiment, in which there were complaints about
> the responsiveness of the history plots. The history plots would take up to 30 seconds
> to respond once the run was about 30-40 minutes old. When the run was less than about
> 10 minutes old, the history plots were responsive within 1-2 seconds.

Ah, that rings a bell. How big are your history files on disk? How much RAM do you have?

What I see in our experiment is that Linux buffers everything I write to disk in a cache
located in RAM. But this cache is limited, so after a certain time it gets overwritten.
This is handled by the OS; the application (mlogger in this case) has no influence on it.
Let's say you write 5 MB/minute of history and your cache is 50 MB large. Then after 10
minutes you can still read the history data from the RAM cache, which is ~10x faster than
your disk. But your older history data (30-40 min) has been flushed out of the cache and has
to be re-read from disk. A typical symptom of this is that the first time you display a plot
it takes maybe 30 seconds, but if you do a "reload" of the page it goes much faster, because
the contents are now cached in RAM again. If you observe this, you can be almost certain
that you are seeing the "too small RAM cache" problem. In that case just add RAM and things
should run better (I use 16 GByte in my machine).

Best regards,

  Stefan
  751 | 16 Feb 2011 | Lee Pool | Bug Report | Problems with midas history SVN 4936
> > 
> > Do you mind giving a little more detail? We might have the same issue, where we got
> > complaints that the midas history stops working after a certain time.
> > 
> 
> 
> Yes, please do supply more information. What problems do *you* see?
> 
> 
> K.O.

Hi.

uhm, mine might be completely unrelated to this, but it just so happened that rev.
4936 was the one used in a recent experiment, in which there were complaints about
the responsiveness of the history plots. The history plots would take up to 30 seconds
to respond once the run was about 30-40 minutes old. When the run was less than about
10 minutes old, the history plots were responsive within 1-2 seconds.

I received rather limited information regarding this problem, hence my hesitation
to state it as a *problem* or bug. It could be something related to hardware/beam etc.

Lee
  750 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | fixed. odb corruption, odb race condition?
> My torture test now runs okay on my Mac; the one remaining problem is spurious client removal caused
> by semaphore starvation...

My torture test runs okay on Linux and I do not see any problems with spurious client removal - in fact,
I do not see any of the strange long waits for semaphores that I was seeing on MacOS. Must be another
proof that MacOS is years behind Linux in kernel technology (but parsecs ahead in user experience).

K.O.
  749 | 16 Feb 2011 | Konstantin Olchanski | Info | Notes on MIDAS history
> 
> 1) PerVariableHistory.
> 
> The default value of 0 is intended to operate the midas history in "traditional" mode. In this mode:
> - there is one history record for each equipment
> - history record id is equal to the equipment id
> - /History/Events and /History/Tags are not required and can be safely deleted
>

I have now committed an example experiment for testing and using non-per-variable history:
.../midas/examples/history1

I confirm that this example works as expected after src/history_midas.cxx is updated to the latest rev 4979
(today). I guess it also worked just fine before the breakage in svn rev 4827 last September.

svn rev 4980.



Here is the README file:

Example experiment "history1"

Purpose:
example and test of a simple periodic frontend writing slow controls data into midas history

To run:
use the bash shell
assuming MIDAS is installed in $HOME/packages/midas on Linux; otherwise edit setup.sh and the Makefile
run make to build feslow.exe
run source ./setup.sh
when starting this experiment for the very first time, load the experiment settings from odb.xml: odbedit -c "load odb.xml"
run ./start_daq.sh
mlogger and mhttpd should now be running
connect to the midas status page at http://localhost:8080 (the port number is set in start_daq.sh)
start the example frontend from the "programs" page
observe that the event number of equipment "slow" is incrementing
go to the "Slow" equipment page (click on "Slow" on the midas status page)
observe that the numbers change when you refresh the web page
from the midas status page, go to "history" -> "slow" and observe that the history plot for "SLOW[2]" shows a sine wave
from the shell, examine the contents of a history file: "mhdump *.hst"
study feslow.cxx
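
For a taste of what the readout function of such a periodic frontend looks like, here is a minimal
sketch (illustrative only - study the real feslow.cxx; the channel values here are made up):

#include <math.h>
#include "midas.h"

/* periodic readout: write three doubles into a "SLOW" bank;
   the history system records them as SLOW[0], SLOW[1], SLOW[2] */
INT read_slow_event(char *pevent, INT off)
{
   double *pdata;

   bk_init(pevent);
   bk_create(pevent, "SLOW", TID_DOUBLE, (void **)&pdata);
   *pdata++ = 42.0;                    /* SLOW[0]: a constant    */
   *pdata++ = fmod(ss_time(), 100.0);  /* SLOW[1]: a slow ramp   */
   *pdata++ = sin(ss_time() / 60.0);   /* SLOW[2]: the sine wave */
   bk_close(pevent, pdata);
   return bk_size(pevent);
}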

Enjoy,
K.O.
  748 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | Problems with midas history SVN 4936
> 
> Do you mind giving a little more detail? We might have the same issue, where we got
> complaints that the midas history stops working after a certain time.
> 


Yes, please do supply more information. What problems do *you* see?


K.O.
  747 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | Problems with midas history SVN 4936
It looks like the email notices did not go out the first time. Please read my replies below. K.O.

> > I have the following problems after updating to midas SVN 4936: the history
> > system (web page via mhttpd) seems to stop working. I checked the history files
> > themselves and they are indeed being written, except that the event IDs are no
> > longer the same (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> > rather, mlogger seems to choose an ID by itself.
> 
> Yes, I found the problem - it was introduced around svn rev 4827 in September 2010.
> 
> It is fixed now, please do this:
> 1) update history_midas.cxx to the latest svn rev 4979
> 1a) do NOT update any other files - update only history_midas.cxx
> 2) rebuild mlogger (it will do no harm, and no good, if you rebuild everything)
> 3) in odbedit, save odb.xml
> 4) in odb, remove /History/Events and /History/Tags (you can also set "/History/DisableTags" to "y")
> 5) restart mlogger
> 6) observe that odb /History/Events now has event ids equal to the equipment ids
> 7) restart your frontend, observe that the history file is growing
> 8) use mhdump to observe that the history is now written with the correct event id
> 9) go to the mhttpd history plot; you should see the new data coming in. Plot the history on the "1 year"
> scale: you should see the old data, and a gap where the data was written with the wrong event id
> 10) I should still have an mhrewrite program sitting somewhere that can change the event ids inside midas
> history files; if you have many data files with the wrong event id, let me know and I will find this program
> and tell you how to use it to repair your data files.
> 
> > Currently the only way to get things working again was to recompile midas with
> > -DOLD_HISTORY added to the CFLAGS, which is troublesome since it is likely to be
> > forgotten with the next SVN update.
> 
> Yes, I am glad you found OLD_HISTORY; I kept it just in case breakage like this happens. I will
> keep it around until the dust settles.
> 
> > When looking into the SVN I have the impression there is something going on concerning the history
> > system; however, I couldn't find any documentation.
> 
> Yes, you found the right stuff, and it is partially documented. mlogger uses /History/Events to map history
> event names (equipment names in your case) to history event ids. But in your case the wrong event id had
> been assigned by mlogger, so nothing worked right. As a bonus, I now see an inconsistency between the
> event_id code remaining in mlogger (which is not used) and the event_id code in history_midas (which *is*
> used). I will be straightening this out over the next few days.
> 
> I hope my correction to history_midas.cxx is good enough to get you going for now.
> 
> > What is the best practice for the future, in order not to run into any problems
> > while still being able to look at the old history (also from within the web page
> > via mhttpd)?
> 
> Personally, I think that the midas history storage in binary files is not robust enough
> in the face of changes to equipment and event ids, renaming and deleting of stuff, etc. There
> are other limitations as well, e.g. the 16-bit history event id.
> 
> The newly implemented SQL history storage (it uses the ODBC layer; MySQL is supported, PgSQL is partially
> implemented) does not have any of these problems and seems to work well enough
> for T2K/ND280. Sometimes the MySQL history is even faster when making history plots in mhttpd.
> 
> I am now thinking about implementing SQL history storage in SQLite files, which would not have
> any of these problems either. Performance and robustness against database corruption remain a question, though.
> 
> K.O.
  746 | 16 Feb 2011 | Konstantin Olchanski | Bug Report | Problems with midas history SVN 4936
> I have the following problems after updating to midas SVN 4936: the history
> system (web page via mhttpd) seems to stop working. I checked the history files
> themselves and they are indeed being written, except that the event IDs are no
> longer the same (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> rather, mlogger seems to choose an ID by itself.

Yes, I found the problem - it was introduced around svn rev 4827 in September 2010.

It is fixed now, please do this:
1) update history_midas.cxx to the latest svn rev 4979
1a) do NOT update any other files - update only history_midas.cxx
2) rebuild mlogger (it will do no harm, and no good, if you rebuild everything)
3) in odbedit, save odb.xml
4) in odb, remove /History/Events and /History/Tags (you can also set "/History/DisableTags" to "y") - see
the command sketch after this list
5) restart mlogger
6) observe that odb /History/Events now has event ids equal to the equipment ids
7) restart your frontend, observe that the history file is growing
8) use mhdump to observe that the history is now written with the correct event id
9) go to the mhttpd history plot; you should see the new data coming in. Plot the history on the "1 year"
scale: you should see the old data, and a gap where the data was written with the wrong event id
10) I should still have an mhrewrite program sitting somewhere that can change the event ids inside midas
history files; if you have many data files with the wrong event id, let me know and I will find this program
and tell you how to use it to repair your data files.
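
In shell form, steps 3-5 might look something like this (a sketch only; odbedit may ask you to confirm
the deletions interactively):

odbedit -c "save odb.xml"
odbedit -c "del /History/Events"
odbedit -c "del /History/Tags"
odbedit -c 'set "/History/DisableTags" y'     (alternative to deleting the tags)
then stop and restart mlogger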

> Currently the only way to get things working again was to recompile midas with
> -DOLD_HISTORY added to the CFLAGS, which is troublesome since it is likely to be
> forgotten with the next SVN update.

Yes, I am glad you found OLD_HISTORY; I kept it just in case breakage like this happens. I will keep it
around until the dust settles.

> When looking into the SVN I have the impression there is something going on concerning the history
> system; however, I couldn't find any documentation.

Yes, you found the right stuff, and it is partially documented. mlogger uses /History/Events to map history
event names (equipment names in your case) to history event ids. But in your case the wrong event id had
been assigned by mlogger, so nothing worked right. As a bonus, I now see an inconsistency between the
event_id code remaining in mlogger (which is not used) and the event_id code in history_midas (which *is*
used). I will be straightening this out over the next few days.

I hope my correction to history_midas.cxx is good enough to get you going for now.

> What is the best practice for the future, in order not to run into any problems
> while still being able to look at the old history (also from within the web page
> via mhttpd)?

Personally, I think that the midas history storage in binary files is not robust enough
in the face of changes to equipment and event ids, renaming and deleting of stuff, etc. There
are other limitations as well, e.g. the 16-bit history event id.

The newly implemented SQL history storage (it uses the ODBC layer; MySQL is supported, PgSQL is partially
implemented) does not have any of these problems and seems to work well enough
for T2K/ND280. Sometimes the MySQL history is even faster when making history plots in mhttpd.

I am now thinking about implementing SQL history storage in SQLite files, which would not have
any of these problems either. Performance and robustness against database corruption remain a question, though.

K.O.
  745 | 16 Feb 2011 | Konstantin Olchanski | Info | Notes on MIDAS history
Some notes on the MIDAS history.

MIDAS documentation at
http://midas.psi.ch/htmldoc/F_History_logging.html
describes:

- midas equipment concepts
- midas equipment event ids
- midas data banks
- midas history concepts
- history records (correspond to data banks)
- history record ids (correspond to equipment ids)
- history tags (describe the structure of each history record)
- the code path from the user read function through odb and mlogger to the history file
- the midas history file internal data format
- mhdump, the tool for looking inside history files

But some things remain unclear after reading the documentation: where are the history definitions
saved? What happens if an equipment is deleted or renamed? What is all the mumbling about
/History/Events and /History/Tags? What is this /History/PerVariableHistory?

As I go through my review of the MIDAS history code, I will attempt to clarify some of this information.

1) PerVariableHistory.

The default value of 0 is intended to operate the midas history in "traditional" mode. In this mode:
- there is one history record for each equipment
- history record id is equal to the equipment id
- /History/Events and /History/Tags are not required and can be safely deleted

The downside of this history mode is that there is only one history record per equipment. If an
equipment has many banks, not all of which are updated at the same time, then every time one bank is
updated, data for all the banks is written to the history file, even if the data in those other banks has
not changed. The result is undesired duplication of data in the midas history files. In turn, this leads
to slowdowns when making history plots (mhttpd has to read more data from bigger data files, which takes
time), and for long-running experiments it may pose disk space problems for storing the history files.

In addition, when logging history data to an SQL database, each history record is mapped to an SQL
table, so all variables from all banks of an equipment end up in the same SQL table - and on top of the
data duplication described above, this creates a data presentation problem: database users and
administrators dislike SQL tables with "too many" columns!

To solve both problems - reduce data duplication and avoid creating over-large SQL tables - per-
variable history has been implemented.
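
To experiment with it, the ODB switch named above can be created and set, and mlogger restarted.
A sketch, assuming the key does not exist yet and standard odbedit commands:

odbedit -c "create INT /History/PerVariableHistory"
odbedit -c "set /History/PerVariableHistory 1"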

to be continued...

K.O.
  744 | 15 Feb 2011 | Konstantin Olchanski | Bug Fix | mlogger stop run on disk full!
The mlogger has a function for detecting when the output disk becomes full; when this condition is
detected, the run should be stopped. But this did not work if the disk was already full and the user tried
to start a run: the "disk full?" check happened too early, and the attempt to stop the run did not succeed
because the original start-run transition was still in progress. Now, if the "disk full" condition is
detected, mlogger tries to stop the run every 10 seconds until the run is finally stopped (or mlogger dies
because the disk is full).
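
Schematically, the new behaviour is something like this (an illustrative sketch, not the actual
mlogger.c code; disk_is_full() and run_in_progress() are hypothetical helpers):

/* called periodically from the main loop */
static DWORD last_stop_attempt = 0;

void check_disk_full(void)
{
   if (disk_is_full() && run_in_progress()) {   /* hypothetical helpers */
      DWORD now = ss_time();
      if (now - last_stop_attempt >= 10) {      /* retry every 10 seconds */
         last_stop_attempt = now;
         /* request a run stop; exact flag names vary by MIDAS version */
         cm_transition(TR_STOP, 0, NULL, 0, ASYNC, FALSE);
      }
   }
}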

mlogger.c svn rev 4976
K.O.
  743 | 15 Feb 2011 | Konstantin Olchanski | Bug Report | fixed. odb corruption, odb race condition?
> Solution shall follow quickly, I have been hunting this deadlock for the last couple of weeks...

Over the last couple of days I made a series of commits to odb.c and midas.c to implement a buffer-based cm_msg(),
fix the latest deadlock problem, and help with the race conditions in client creation and cleanup.

My torture test now runs okay on my Mac; the one remaining problem is spurious client removal caused
by semaphore starvation - I see 2-3-7-10 second wait times for semaphores - probably caused by some
kind of unfairness in the MacOS SysV semaphore implementation (in a "fair" semaphore implementation,
the process that has waited the longest is woken up first, and one would never see semaphore wait
times measured in seconds). It is probably worth investigating the fairness of MacOS POSIX semaphores. On
Linux things are probably different, and under normal running conditions one should not see any semaphore
starvation.

I will be doing extensive tests of this update at TRIUMF, but I do not expect any problems. If you use this
version and see any anomalies, please report them as replies to this message or email me directly.

svn rev 4976
K.O.
  742 | 13 Feb 2011 | Lee Pool | Bug Report | Problems with midas history SVN 4936
> I have the following problems after updating to midas SVN 4936: the history
> system (web page via mhttpd) seems to stop working. I checked the history files
> themselves and they are indeed being written, except that the event IDs are no
> longer the same (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> rather, mlogger seems to choose an ID by itself.
> Currently the only way to get things working again was to recompile midas with
> -DOLD_HISTORY added to the CFLAGS, which is troublesome since it is likely to be
> forgotten with the next SVN update. When looking into the SVN I have the
> impression there is something going on concerning the history system; however,
> I couldn't find any documentation.
> What is the best practice for the future, in order not to run into any problems
> while still being able to look at the old history (also from within the web page
> via mhttpd)?

Hi... 

Do you mind giving a little more detail? We might have the same issue, where we got
complaints that the midas history stops working after a certain time.

L
  741 | 11 Feb 2011 | Konstantin Olchanski | Bug Report | fixed. odb corruption, odb race condition?
> > 
> > The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...
> > 
> 
> For now, I am happy that we no longer corrupt ODB (nor deadlock) ...
>

Found one more deadlock between the ODB and SYSMSG semaphores, this time through cm_watchdog():

If cm_watchdog() somehow runs while we are holding the ODB semaphore, it will eventually try to lock SYSMSG (through bm_cleanup() & co) in
violation of our semaphore locking order. If at the same time another application tries to lock things in the correct order (SYSMSG first, ODB last),
the two programs will deadlock (each waiting for the other forever). I presently have two copies of gdb attached to two copies of odbedit
waiting for each other in exactly this cm_watchdog() deadlock...

Solution shall follow quickly, I have been hunting this deadlock for the last couple of weeks...

K.O.
  740 | 17 Jan 2011 | Andreas Suter | Bug Report | Problems with midas history SVN 4936
I have the following problems after updating to midas SVN 4936: the history
system (web page via mhttpd) seems to stop working. I checked the history files
themselves and they are indeed being written, except that the event IDs are no
longer the same (I mean the ones defined under /Equipment/XXX/Common/Event ID);
rather, mlogger seems to choose an ID by itself.
Currently the only way to get things working again was to recompile midas with
-DOLD_HISTORY added to the CFLAGS, which is troublesome since it is likely to be
forgotten with the next SVN update. When looking into the SVN I have the
impression there is something going on concerning the history system; however,
I couldn't find any documentation.
What is the best practice for the future, in order not to run into any problems
while still being able to look at the old history (also from within the web page
via mhttpd)?
  739 | 29 Dec 2010 | Konstantin Olchanski | Bug Report | fixed. odb corruption, odb race condition?
> 
> The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...
> 


I committed changes to odb.c and midas.c fixing a number of places that could corrupt ODB and SYSMSG data, and fixing a number of deadlocks. Without these 
changes, on my Mac, MIDAS will reliably corrupt ODB or deadlock while running my odbedit fork-bomb torture test script. These changes still need to be tested on 
Linux (but I do not expect any problems).

Because my changes do not fix the original race condition in client creation/removal/cleanup, you may still occasionally see messages like this:
13:35:14 [ODBEdit24,ERROR] [odb.c:2112:db_find_key,ERROR] hkey 169592 invalid key type 376
13:35:15 [ODBEdit28,ERROR] [odb.c:3268:db_get_value,ERROR] hkey 162072 entry "Name" is of type NULL, not STRING

For now, I am happy that we no longer corrupt ODB (nor deadlock) and I will work with Stefan on a permanent solution for this.

Special thanks go to the T2K/ND280 experiment, specifically to Tim Nicholls and to the unnamed person who emailed me their script that executes many odbedit
commands to set up midas history plots.


svn rev 4930
K.O.


P.S. Below is my torture test script; I usually run many of them in sequence: "./test1.perl >& xxx1; ./test1.perl >& xxx2; ... etc".

#!/usr/bin/perl -w
for (my $i=0; $i<50; $i++)
{
#my $cmd = "odbedit -c \'scl -w\' &";
#my $cmd = "odbedit -c \'ls -l /system/clients\' >& xxx$i &";
my $cmd = "odbedit -c \'ls -l /system/clients\' &";
system $cmd;
}
#end

svn rev 4930
K.O.
  738 | 29 Dec 2010 | Konstantin Olchanski | Bug Report | use of nested locks in MIDAS
A "nested" or "recursive" lock is a special type of lock that permits the lock holder to lock the same resource again and again without deadlocking on
itself. Nested locks are very useful, but tricky to implement, because most system lock primitives (SYSV semaphores, POSIX mutexes, etc.) do not permit
nested locks, so all the logic for "yes, I am the holder of the lock, yes, I can go ahead without taking it again" (plus the reverse on unlocking) has to
be done by hand. As ever, if implemented wrong or used wrong, Bad Things happen. Many people dislike nested locks because of the added complexity, but
realistically it is impossible to build a system that does not require nested locking at least somewhere.
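
As an illustration, here is one common way to hand-roll nested locking on top of a non-recursive
primitive (a sketch using POSIX threads - this is NOT the MIDAS implementation, and production code
must also make the owner/count accesses race-free):

#include <pthread.h>

typedef struct {
   pthread_mutex_t mutex;   /* the non-recursive primitive  */
   pthread_t       owner;   /* current holder, if count > 0 */
   int             count;   /* nesting depth of the holder  */
} nested_lock_t;

void nested_lock(nested_lock_t *l)
{
   /* "yes, I am the holder, I can go ahead without taking it again" */
   if (l->count > 0 && pthread_equal(l->owner, pthread_self())) {
      l->count++;
      return;
   }
   pthread_mutex_lock(&l->mutex);
   l->owner = pthread_self();
   l->count = 1;
}

void nested_unlock(nested_lock_t *l)
{
   /* "the reverse on unlocking": only the outermost unlock really unlocks */
   if (--l->count == 0)
      pthread_mutex_unlock(&l->mutex);
}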

MIDAS lock primitives - ss_semaphore_wait_for(), db_lock_database() and bm_lock_buffer() - implement a type of nested lock.

ODB locks implemented in db_lock_database() fully support nested (recursive) locking and this feature is heavily used by the ODB library. Many ODB db_xxx() 
functions take the ODB lock, do something, then call another ODB function that also takes the ODB lock recursively. This works well.

Unfortunately, the ODB nested lock implementation is NOT thread-safe (unless one is connected through the mserver, in which case the db_xxx() functions ARE
thread-safe, because all ODB access is serialized by the mserver RPC mutex).

Event buffer locks implemented in bm_lock_buffer() rely on ss_semaphore_xxx() to provide nested locking.

ss_semaphore_wait_for() uses SYSV semaphores, which do not provide nested locking, except when called from cm_watchdog(). (keep reading).

Because bm_lock_buffer() does not implement nested locking, use of cm_msg() in the buffer management code can lead to self-deadlock, as shown in the following
stack trace, where bm_cleanup() was working on the SYSMSG buffer, had it locked, and then called cm_msg(), which is now waiting on the SYSMSG lock that we are
holding ourselves.

(gdb) where
#0  0x00007fff87274e9e in semop ()
#1  0x0000000100024075 in ss_semaphore_wait_for (semaphore_handle=1179654, timeout=300000) at src/system.c:2280
#2  0x0000000100015292 in bm_lock_buffer (buffer_handle=<value temporarily unavailable, due to optimizations>) at src/midas.c:5386
#3  0x000000010000df97 in bm_send_event (buffer_handle=1, source=0x7fff5fbfd430, buf_size=<value temporarily unavailable, due to optimizations>, async_flag=0) at src/midas.c:6484
#4  0x000000010000e6f5 in cm_msg (message_type=2, filename=<value temporarily unavailable, due to optimizations>, line=4226, routine=0x10004559f "bm_cleanup", format=0x100045550 "Client '%s' on buffer '%s' removed by %s because process pid %d does not exist") at src/midas.c:722
#5  0x000000010001553c in bm_cleanup_buffer_locked (i=<value temporarily unavailable, due to optimizations>, who=0x100045f42 "bm_open_buffer", actual_time=869425784) at src/midas.c:4226
#6  0x00000001000167ee in bm_cleanup (who=0x100045f42 "bm_open_buffer", actual_time=869425784, wrong_interval=0) at src/midas.c:4286
#7  0x000000010001ae27 in bm_open_buffer (buffer_name=<value temporarily unavailable, due to optimizations>, buffer_size=100000, buffer_handle=0x10006e9ac) at src/midas.c:4550
#8  0x000000010001ae90 in cm_msg_register (func=0x100000c60 <process_message>) at src/midas.c:895
#9  0x0000000100009a13 in main (argc=3, argv=0x7fff5fbff3d8) at src/odbedit.c:2790

This example deadlock is not a normal code path - I accidentally exposed this deadlock sequence by adding some extra locking.

But in normal use cm_msg() is called quite often from cm_watchdog(), and as protection against this type of deadlock, the MIDAS
ss_semaphore_xxx() functions have a special case that permits one level of nesting for locks taken by code executed from cm_watchdog(). This is a very
clever implementation of partial nested locking.

So again we run into problems with cm_msg(): logically it should be at the very bottom of the system hierarchy - everybody calls it from their most
delicate places, while holding various locks, etc. - but instead cm_msg() calls the whole MIDAS system all over again: it calls ODB functions, event buffer
functions, etc., mostly to open and to write into the SYSMSG buffer.

If you are reading this, I hope you are getting a better idea of the difference between textbook systems and systems that are used in the field to get some work 
done.

K.O.
  737 | 26 Dec 2010 | Konstantin Olchanski | Bug Report | race condition and deadlock between ODB lock and SYSMSG lock in cm_msg()
> 
> The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...
> 


In theory, we understand how programs that use 2 semaphores to protect 2 shared resources can deadlock
if there are mistakes in how locks are used.

For example, consider 2 semaphores A and B and 2 concurrent
subroutines foo() and bar() running at exactly the same time:

foo() { lock(A); lock(B); do stuff; unlock(B); unlock(A); } and
bar() { lock(B); lock(A); do stuff; unlock(A); unlock(B); }

This system can deadlock immediately, with foo() taking semaphore A, bar() taking semaphore B,
then foo() waiting for B and bar() waiting for A forever.

This situation can also be described as a race condition where foo() and bar() are racing each
other to get the semaphores, with the result depending on who gets there first
and, in this case, sometimes the result is deadlock.

In this example, the size of the race-condition time window is the wall-clock time
between actually locking the two semaphores in the sequence "lock(X); lock(Y);". While
locking a semaphore is "instantaneous", the lock() function itself takes time to call
and execute, and this time is not fixed: it stretches if the CPU takes a hardware
interrupt (quick), a page fault (when we may have to wait until data is read from the swap file)
or a scheduler interrupt (when we are outright stopped for milliseconds while the CPU runs
some other process).

In reality, subroutines foo() and bar() do not run at exactly the same time, so the probability
of deadlock will depend on how often foo() and bar() are executed, the size of the race condition time window,
the number of processes executing foo() and bar(), and the amount of background activity
like swapping, hardware interrupts, etc.

(Also note that on a single-CPU system we will probably never see a deadlock between foo() and bar(),
because they will never be running at the same time. But the deadlock is still there, waiting
for the lucky moment when the scheduler switches from foo() to bar() at just the wrong place.)

There is more on deadlocks and stuff written at:
http://en.wikipedia.org/wiki/Deadlock
http://en.wikipedia.org/wiki/Race_condition

In the case of MIDAS, the 2 semaphores are the ODB lock and the SYSMSG lock (also remember the locks
for the shared-memory event buffers, SYSTEM, etc., but they seem unlikely to deadlock).

The function foo() is any ODB function (db_xxx) that locks ODB and then calls cm_msg() (which locks SYSMSG).

The function bar() is cm_msg() which locks SYSMSG and then calls some ODB db_xxx() function which tries to lock ODB.

(This is made more interesting by cm_watchdog() periodically called by alarm(), where we alternately
take SYSMSG (via bm_cleanup) and ODB locks.)

I think this establishes a theoretical possibility for MIDAS to deadlock on the ODB and SYSMSG semaphores.

In practice, I think we almost never see this deadlock, because cm_msg() is not called very often and, during
normal operation, is almost never called from inside ODB functions holding the ODB lock - almost all calls to
cm_msg() from ODB functions report some kind of problem with the ODB internal structure, something that
"never" happens.

By "luck" I stumbled into this deadlock when doing the "odbedit" fork-bomb torture tests, where high ODB lock
activity is combined with high cm_msg() activity (reporting clients starting and stopping), combined with a
large number of MIDAS clients running, starting and stopping.

So the deadlock that I see within 1 minute of running the torture test, other, luckier people will see after
running an experiment for 1 year, or 1 month, or 1 day, depending.

In theory, this deadlock can be removed by establishing a fixed order of taking locks. There will never be a deadlock
if we always take the SYSMSG lock first, then ask for the ODB lock.
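
In the notation of the example above, the fix is to make both functions take the locks in the same order:

foo() { lock(A); lock(B); do stuff; unlock(B); unlock(A); } and
bar() { lock(A); lock(B); do stuff; unlock(B); unlock(A); }

Now neither function can ever hold B while waiting for A, so the circular wait cannot occur.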

In practice, it means that using cm_msg() while holding an ODB lock is automatically dangerous
and should be avoided if not forbidden.

And it does work. By refactoring a few places in client startup, shutdown and cleanup code, I made the deadlock "go away",
and my test script (posted in my first message) no longer deadlocks, even if I run hundreds of odbedit's at the same time.

Unfortunately, it is impractical to audit and refactor all of MIDAS to completely remove this problem. The MIDAS
call graphs are complicated enough to make manual analysis of the lock sequences infeasible, and I expect any
automatic lock-analysis tool would be defeated by the cm_watchdog() periodic interrupt.

An improvement is possible if we make cm_msg() safe to call from inside the ODB db_xxx() functions. Instead
of immediately sending messages to SYSMSG (which requires the SYSMSG lock), cm_msg() could, while ODB is
locked, save the messages in a buffer, to be flushed when the ODB lock is released. (This does not fix
all the other places that take the ODB and SYSMSG locks in arbitrary order, but I think those places are not
as likely to deadlock as cm_msg().)
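
Very schematically, the idea is (a sketch only, not MIDAS code; db_is_locked_by_us() and send_to_sysmsg()
are hypothetical helpers, and strlcat() is the BSD-style one shipped with midas):

static char deferred[10000];   /* messages buffered while ODB is locked */

void buffered_msg(const char *msg)
{
   if (db_is_locked_by_us()) {   /* taking the SYSMSG lock now could deadlock */
      strlcat(deferred, msg, sizeof(deferred));
      strlcat(deferred, "\n", sizeof(deferred));
      return;                    /* flushed right after the ODB lock is released */
   }
   send_to_sysmsg(msg);          /* normal path: lock SYSMSG and write */
}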

However, now that I have greatly reduced the probability of deadlock in the client startup/shutdown/cleanup code,
maybe there is no urgency for changing cm_msg() - remember that if we do not call cm_msg() we will never deadlock -
and during normal operation, cm_msg() is almost never called.

Investigation completed, I will now cleanup, retest and commit my changes to midas.c and odb.c. Looking into this
and writing it up was a good intellectual exercise.

P.S. Also remember that there are locks for the shared-memory event buffers (SYSTEM, etc.), but those do not
participate in the lock inversion leading to deadlock. I think all lock sequences look like this: SYSTEM->ODB,
SYSTEM->SYSMSG->ODB; there are no inverted sequences SYSMSG->SYSTEM or ODB->SYSTEM, and the only deadlocking
sequence, SYSTEM->ODB->SYSMSG, does not really involve the SYSTEM lock.

K.O.