Entry  25 Aug 2011, Francesco Prelz, Forum, 64-bit integer support in MIDAS 
Hi,

I've been doing some preliminary work to use at least the MIDAS
SQL history component for a new CERN experiment (Aegis). I wonder
whether there is any plan to support 64-bit signed/unsigned integer data types
in MIDAS. time_t on 64-bit architectures is actually signed 64-bit
(the 'easy' way to work around the 2038 crisis), and this may be enough to
cause problems.
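
(For reference, a quick way to check this on a given machine - a minimal C sketch, nothing MIDAS-specific:)

#include <stdio.h>
#include <time.h>

int main(void)
{
   /* on most 64-bit Linux systems this prints 8 (signed 64-bit time_t) */
   printf("sizeof(time_t) = %d bytes\n", (int) sizeof(time_t));
   return 0;
}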

Thanks.
Francesco Prelz
INFN Milano
Entry  11 Jul 2011, Konstantin Olchanski, Info, Make "STOP" run transition always succeed 
Over the years, there were some back-and-forth changes in what happens to run transitions when some
of the participants misbehave (do not respond to RPC calls, time out, crash, etc.).

The very original behaviour was to ignore all errors. This resulted in user confusion when some clients 
would start, some would not, data from frontends that missed the transition did not arrive, etc.

So it was changed to fail the transition if any client misbehaves.

This left mlogger (which is usually the first one to see the TR_START transition) in a funny state - the output
file is open, etc., but there is no run active. This was fixed by adding a TR_STARTABORT transition to tell
mlogger, the event builder & co that the just-started run did not start after all.

Also, at some point code was added to forcefully kill clients that do not respond to run transitions (do
not respond to RPC, time out, etc.).

Recently, it was observed that during unattended overnight operation of a MIDAS DAQ system, with the
logger set to "auto restart", some unnecessary clients misbehave during the run stop transition and
prevent the run from stopping and restarting. The user comes in the morning and is unhappy that data
taking stopped sometime during the night.

midas.c svn rev 5136 changes the TR_STOP transition to always succeed, even if some clients had 
transition errors. If these clients are unnecessary for normal operation of the DAQ, the following run 
"auto restart" will continue taking data. If those were important clients, data taking will continue the 
best it can - it *is* unattended operation - nobody is looking - but users can always setup alarms for 
checking that important clients are always running during data taking. (For very important clients, one 
can setup alarms to send email, send SMS messages, etc).
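
For example, a "program is not running" check for a critical client can be enabled through the ODB /Programs tree. This is a sketch only - "myfe" is a placeholder client name, and it assumes the client has connected at least once so its /Programs entry exists:

odbedit -c 'set "/Alarms/Alarm system active" y'
odbedit -c 'set "/Programs/myfe/Alarm class" "Alarm"'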

K.O.
Entry  27 Jun 2011, Konstantin Olchanski, Info, midas shared memory changes 
A number of changes were made to the midas shared memory implementation for
Linux and MacOS:

1) The SysV-or-POSIX shared memory compile-time choice is removed. Both shared
memory types are compiled in and are selected at run time.
2) the shared memory type used by an experiment is recorded in the file
.SHM_TYPE.TXT. Currently implemented are "POSIXv2_SHM" (the new default for new
experiments), "POSIX_SHM", "MMAP_SHM" and "SYSV_SHM". (see system.c) (MMAP_SHM
is fully functional but is not recommended). The POSIXv2_SHM uses an improved
filename scheme (on Linux, see "ls -l /dev/shm") and permits multiple
experiments to coexist on a MacOS computer (where there is a severe limit on
shared memory filename length).
3) following a number of mishaps where "odbedit" has been run on the wrong
computer (causing havoc with ODB and .xxx.SHM files), for each experiment, the
hostname of the computer where the ODB shared memory is meant to reside is now
recorded in the file .SHM_HOST.TXT. Typically, this is the machine running
mserver, mhttpd and mlogger. If some client is accidentally started on the wrong
machine or if MIDAS_SERVER_HOST is accidentally left undefined, MIDAS will now
print a stern message reporting the hostname mismatch, tell the user to use the
mserver and refuse to run. The user has the choice of starting the client on the
correct computer (as reported in the error message), using the mserver (start
client with -H flag) or edit/delete the .SHM_HOST.TXT file (full pathname is
reported by the error message).

With this update, MIDAS on MacOS becomes fully functional (before, only one
experiment could be used at a time).

svn rev 5105
K.O.
    Reply  05 Jul 2011, Konstantin Olchanski, Info, midas shared memory changes 
> 2) the shared memory type used by an experiment is recorded in the file .SHM_TYPE.TXT.

An error with creating the file .SHM_TYPE.TXT was corrected in system.c svn rev 5125 - if the file did not
exist, it was created correctly, but MIDAS then reported "cannot connect to ODB". A second try worked
correctly because the file existed by then.

> 3) the hostname of the computer where the ODB shared memory is meant to reside is now
> recorded in the file .SHM_HOST.TXT.

This is causing problems on mobile computers where the "hostname" changes all the time (e.g. it is set according to
DHCP on whatever network happens to be connected).

If you run into this problem, keep deleting .SHM_HOST.TXT or use this workaround: disable the hostname check 
by making the file .SHM_HOST.TXT empty (zero length).
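
For example, in the experiment directory:

cp /dev/null .SHM_HOST.TXT

This leaves the file in place but with zero length, which disables the hostname check.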

K.O.
       Reply  10 Jul 2011, Konstantin Olchanski, Bug Fix, midas shared memory changes 
> > 2) the shared memory type used by an experiment is recorded in the file .SHM_TYPE.TXT.
> > 3) the hostname of the computer where the ODB shared memory is meant to reside is now
> > recorded in the file .SHM_HOST.TXT.

Due to a typo in src/system.c svn rev 5125, ss_shm_delete() did not work at all. This broke "odbedit -R", "odbedit -s 5000000" (to change the ODB size), etc.
Fixed in src/system.c svn rev 5134. (It is safe to update just this one file to fix this problem.)

Sorry for the inconvenience,
K.O.
          Reply  11 Jul 2011, Konstantin Olchanski, Bug Fix, midas shared memory changes 
> > > 2) the shared memory type used by an experiment is recorded in the file .SHM_TYPE.TXT.
> > > 3) the hostname of the computer where the ODB shared memory is meant to reside is now
> > > recorded in the file .SHM_HOST.TXT.


Because the mserver did not set up the correct experiment name and path, POSIX shared memory did not work at all when used with the mserver. Fixed in mserver.c rev 5135


Sorry for the inconvenience,
K.O.
Entry  05 Jul 2011, Konstantin Olchanski, Bug Report, MacOS network socket timeouts non-functional 
It turns out that because of differences in the select() syscall implementation between UNIX (MacOS,
maybe BSD) and Linux, network socket timeouts do not work.

This affects timeouts during run transitions (transition calls to dead clients do not timeout), maybe other 
places.

I am looking into fixing this. The main difficulty is that UNIX select() does not update the timeout parameter
when it is interrupted by the MIDAS watchdog alarm signal. Linux select() subtracts the elapsed time from
the timeout value, so this code from system.c works correctly: while (1) { status = select(..., &timeout); if
(status==0) break; } (the value of timeout becomes smaller each time), while on MacOS it loops forever (the value
of timeout does not change).
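
One portable approach (a sketch only, not the actual system.c code) is to stop relying on select() updating the timeout and instead recompute the remaining time on every iteration:

#include <errno.h>
#include <sys/time.h>
#include <sys/select.h>

/* wait until 'sock' is readable or 'timeout_ms' elapses;
   returns 1 = readable, 0 = timeout, -1 = error */
int wait_readable(int sock, int timeout_ms)
{
   struct timeval start, now, timeout;
   gettimeofday(&start, NULL);
   while (1) {
      fd_set readfds;
      int elapsed_ms, remain_ms, status;
      gettimeofday(&now, NULL);
      elapsed_ms = (now.tv_sec - start.tv_sec) * 1000 + (now.tv_usec - start.tv_usec) / 1000;
      remain_ms = timeout_ms - elapsed_ms;
      if (remain_ms <= 0)
         return 0;
      FD_ZERO(&readfds);
      FD_SET(sock, &readfds);
      timeout.tv_sec = remain_ms / 1000;
      timeout.tv_usec = (remain_ms % 1000) * 1000;
      status = select(sock + 1, &readfds, NULL, NULL, &timeout);
      if (status > 0)
         return 1;
      if (status < 0 && errno != EINTR)
         return -1;
      /* status==0 or EINTR (e.g. the watchdog alarm signal):
         loop around and recompute the remaining time ourselves */
   }
}
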
K.O.
Entry  27 Jun 2011, Konstantin Olchanski, Info, mlogger lock for runNNN.mid.gz files 
By popular request, Stefan R. implemented a locking scheme for mlogger output files.

To use this function, set the mlogger ODB /Logger/Channels/NNN/Settings/Filename
to ".run%05dsub%05d.mid.gz" (note the leading dot).

In this mode, active output files will have a filename with a leading dot
(.run00001sub00001.mid.gz) while the file is being written to. After the file is
closed, it is renamed and the leading dot is removed.

To use this function with the lazylogger, please set ODB
"/Lazy/Foo/Settings/Filename format" to "run*.mid.gz,run*.xml" (note the leading
text "run"). Set "stay behind" to 0.

svn rev 5080 (or so, checking by Stefan R.)
K.O.
Entry  27 Jun 2011, Konstantin Olchanski, Info, updated mhttpd history "export" function 
The mhttpd history "export" function has been converted to the new midas history
interface and should now work for SQL-based history systems. In the process,
improvements by Eoin Butler (CERN AD-5/ALPHA) were merged - adding a UNIX
timestamp and a better text timestamp. Also now "export" outputs the actual
values from the history file - the scaling values from the definition of the
history plot panel are no longer applied.

Here is an example of the new file format:

Time, Timestamp, Run, Run State, SLOW
2011.06.21 15:45:21, 1308696321, 13292, 3, -89.1007

svn rev 5104
K.O.
Entry  24 Jun 2011, Exaos Lee, Suggestion, Build MIDAS debian packages using autoconf/automake. daq-midas_deb.tar.gz mdaq.py
Here is my story. I deployed several Debian Linux boxes as the DAQ systems in our lab. But I find it boring to build and install midas and its related software (such as root) on each box. So I need a local debian software repository with midas and its related packages in it. First of all, I need a midas debian package. After a week's study and searching, I finally finished the job. Hope you find it useful.

All the work is attached as "daq-midas_deb.tar.gz". The details follow. I also created several debian packages, but they are too large to be uploaded, and I don't have my own site accessible from the internet. So, if you need the debian packages, please give me an accessible ftp or other similar service, and I can upload them to you.

First, I use autoconf/automake to rewrite the building system of MIDAS. You can check it this way:
1. Untar daq-midas_deb.tar.gz somewhere, assuming ~/Temp.
2. cd ~/Temp/daq-midas
3. svn co -r 5065 svn+ssh://svn@savannah.psi.ch/repos/meg/midas/trunk midas
4. svn co -r 68 svn+ssh://svn@savannah.psi.ch/repos/meg/mxml/trunk mxml
5. cp -rvp debian/autoconf/* ./
6. ./configure --help
7. ./configure <--options>
8. make && make install

Then, I created the debian packages based on the new build files. You need to install the root-system package from http://lcg-heppkg.web.cern.ch/lcg-heppkg/debian/. You can build the debs this way:
1. untar daq-midas_deb.tar.gz somewhere, assuming ~/Temp.
2. cd ~/Temp/daq-midas
3. svn co -r 5065 svn+ssh://svn@savannah.psi.ch/repos/meg/midas/trunk midas
4. svn co -r 68 svn+ssh://svn@savannah.psi.ch/repos/meg/mxml/trunk mxml
5. dpkg-buildpackage -b -us -uc

I split the package into several parts:
  • daq-midas-doc -- The documents and references
  • daq-midas-root -- the midas runtime library and utilities built with root
  • daq-midas-noroot -- the midas runtime library and utilities built without root
  • daq-midas-dev-root -- the midas devel files (headers, objects, drivers, examples) built with root
  • daq-midas-dev-noroot -- the midas devel files (headers, objects, drivers, examples) built without root

Here are the installation locations:
  • executables -- /usr/lib/daq-midas/bin
  • library and objs -- /usr/lib/daq-midas/lib
  • headers -- /usr/lib/daq-midas/include
  • sources and drivers -- /usr/share/daq-midas/
  • docs and examples -- /usr/share/doc/daq-midas
  • mdaq-config -- /usr/bin/mdaq-config

I added an auto-generated shell script -- mdaq-config. It behaves just like "root-config". You can get the midas build flags and link flags this way:
gcc `mdaq-config --cflags` -c -o myfe.o myfe.c
gcc `mdaq-config --libs` -o myfe myfe.o `mdaq-config --libdir`/mfe.o

Bugs and suggestions are welcome.

P.S. Based on the debian packages, I am planning to write another script, "mdaq.py":
  • each midas experiment will be configured in a file named "mdaq.yaml"
  • mdaq.py reads the config file and prepares the DAQ environment, just like "examples/experiment/start_daq.sh"
  • mdaq.py will handle "start/stop/restart/info" for the DAQ programs.
The attached "mdaq.py" is an old version.
    Reply  27 Jun 2011, Konstantin Olchanski, Suggestion, Build MIDAS debian packages using autoconf/automake. 
> I deployed several Debian Linux boxes as the DAQ systems in our lab. But I
find it boring to build and install midas and its related software (such as
root) on each box.


Our solution at TRIUMF is to install such packages on a shared NFS filesystem
visible to all client computers. This works well for ROOT, but for MIDAS we found
it nearly impossible to keep MIDAS versions in sync between different projects
and experiments, so each experiment uses its own copy of MIDAS, usually located
in the experiment home directory ($HOME/packages/midas). Because we often need
to make local modifications to MIDAS sources (Makefile, etc.), we do not
"install" MIDAS into the non-user-writable /usr/local, etc.


> I use autoconf/automake


The promise (premise) of autoconf/automake is to "hide" system dependencies. The
scripts are supposed to automatically probe the build environment and construct
an appropriate Makefile.

In practice, the autotool scripts always have bugs and incorrect assumptions
about the build environment and only work well for a few standardized systems
(RHEL and Debian derivatives) where the differences are so trivial that
autotools are overkill and a normal Makefile is adequate for the job.

In my experience, as soon as I try to build an autotool-ized package on anything
that does not look like RHEL or Debian, autotool scripts explode and have to be
debugged and kludged by hand. Anybody who has ever done that would agree with me
that one would rather hack the ugliest Makefile than any of the  autotool
generated gibberish.

And of course autotools have never handled cross-compilation in any reasonable
way. Since we do cross-compile MIDAS (for VxWorks and embedded Linux, see "make
crosscompile") a Makefile is required and it so happens that the same Makefile
also works for normal Linux and MacOS, thank you very much.



> Here are the installation locations:
> [*] executables -- /usr/lib/daq-midas/bin
> [*] library and objs -- /usr/lib/daq-midas/lib


Is this in violation of the LSB (or LFS)? I thought they mandate that files
controlled by the package manager should be /usr/bin/odbedit, /usr/lib64/libmidas.a,
etc. (/usr/bin/midas/odbedit not permitted).


> gcc `mdaq-config --cflags` -c -o myfe.o myfe.c


Please check if your config scripts correctly handle the "-m32" and "-m64" flags
- we frequently cross-compile 32-bit MIDAS executables on 64-bit machines.


K.O.
Entry  21 Jun 2011, Stefan Ritt, Info, New MIDAS sequencer seqtest.xml
A new sequencer for starting and stopping runs has been implemented. Although it is still kind of in a preliminary phase, it is usable, so I would like to share the syntax with you.

The sequencer runs inside mhttpd and creates a new ODB subdirectory "/Sequencer". There is a new button on the status page called "Sequencer". It can run scripts in XML format, which reside on the server (where mhttpd is running). The sequencer is stateless: even if mhttpd is stopped and restarted, it resumes operation from where it was stopped. The following statements are implemented:

  • <Comment>comment</Comment>
    a comment for this XML file, for information only

  • <ODBSet path="path">value</ODBSet>
    to set a value in the ODB

  • <ODBInc path="path">delta</ODBInc>
    to increment a value in the ODB

  • <RunDescription>Description</RunDescription>
    a run description which is stored under /Experiment/Run Parameters/Run Description.

  • <Transition>Start | Stop</Transition>
    to start or stop a run

  • <Loop n="n"> ... </Loop>
    to execute a loop n times. For infinite loops, "infinit" can be specified as n

  • <Wait for="events | ODBvalue | seconds" [path="ODB path"]>x</Wait>
    wait until a number of events is acquired (testing /Equipment/Trigger/Statistics/Events sent), or until a value in the ODB exceeds x, or wait for x seconds.

  • <Script [loop_counter="1"]>Script</Script>
    to call a script on the server side. Optionally, the loop counter(s) are passed to the script

Attached is a simple script which can be used as a starting point.
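
For illustration, a minimal script built from these statements might look like the following. This is only a sketch: I am assuming a <Sequence> root element and an example ODB path, so please consult the attached seqtest.xml for the authoritative syntax:

<Sequence>
   <Comment>Take 10 short runs</Comment>
   <RunDescription>Sequencer test runs</RunDescription>
   <Loop n="10">
      <ODBSet path="/Experiment/Run Parameters/Comment">sequencer run</ODBSet>
      <Transition>Start</Transition>
      <Wait for="events">1000</Wait>
      <Transition>Stop</Transition>
      <Wait for="seconds">5</Wait>
   </Loop>
</Sequence>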
Entry  20 Jun 2011, Stefan Ritt, Info, Javascript ODB interface revised 
The Javascript interface to the ODB has been revised. This extends the capabilities of custom web pages requesting data from the ODB. By grouping several requests together, the number of round trips is minimized and the response time is reduced. The following functions are new or extended:

  • ODBGet(path[, format]): This function now also works with subdirectories in the ODB. The command ODBGet('/Runinfo') returns for example:
    1
    1
    1024
    0
    0
    0
    Mon Jun 20 09:40:14 2011
    1308588014
    Mon Jun 20 09:40:46 2011
    1308588046
    

  • ODBGetRecord(path), ODBExtractRecord(key): While ODBGet can be used for subdirectories, an easier way is to use ODBGetRecord and ODBExtractRecord. The first function retrieves the subtree (record), while the second one can be used to extract individual items. Here is an example:
    result = ODBGetRecord('/Runinfo');
    run_number = ODBExtractRecord(result, 'Run number');
    start_time = ODBExtractRecord(result, 'Start time');
    

  • ODBMGet(paths[, callback, formats]): This function ("Multi-Get") can be used to obtain ODB values from different paths in one call. The ODB paths have to be supplied in an array, the result is again an array. An optional callback routine might be supplied for asynchronous operation. Optional formats might be supplied if the resulting number should be formatted in a specific way. Here is an example:
       var req = new Array();
       req[0] = "/Runinfo/Run number";
       req[1] = "/Equipment/Trigger/Statistics/Events sent";
       var result = ODBMGet(req);
       run_number = result[0];
       events_sent = result[1];
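
    For asynchronous operation, the optional callback can be supplied as the second argument. A sketch only - I am assuming the callback receives the result array as its argument, and the element ids are placeholders; please check the mhttpd implementation for the exact callback signature:
       var req = new Array();
       req[0] = "/Runinfo/Run number";
       req[1] = "/Equipment/Trigger/Statistics/Events sent";
       ODBMGet(req, function(result) {
          document.getElementById("run_number").innerHTML = result[0];
          document.getElementById("events_sent").innerHTML = result[1];
       });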
    

The new functions are implemented in mhttpd revision 5075.
Entry  17 Jun 2011, Jimmy Ngai, Forum, Cannot open input file (file too large?) 
Dear All,

I got a "Cannot open input file" error when I tried to analyze a .mid.gz file with 
size over 5 GB on a 32-bit Linux. The error traced back to gzopen() in mana.c 
where it returned NULL when opening the file. I understand that 32-bit Linux may 
not be able to handle files with size over 2 GB. I tried to add
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 to CFLAGS in the Makefile of MIDAS and
the analyzer, but I still got the same error. Are there any workarounds that enable
me to analyze large files on 32-bit systems?

p.s. The data file was also produced on a 32-bit Linux.

Thanks & Best Regards,

Jimmy
    Reply  20 Jun 2011, Jimmy Ngai, Forum, Cannot open input file (file too large?) 
Dear All,

Thanks to Konstantin Olchanski for providing a hint. The file can be opened now after I
changed the line:

file->gzfile = gzopen(file_name, "rb");

in function ma_open() in mana.c to the followings: 

INT fd = open(file_name, O_RDONLY | O_LARGEFILE);  /* requires <fcntl.h>; O_LARGEFILE may need _GNU_SOURCE */
if (fd < 0)   /* open() returns -1 on error */
   return NULL;

file->gzfile = gzdopen(fd, "rb");

No modifications to the Makefile are needed in this case.

Best Regards,
Jimmy


> Dear All,
> 
> I got a "Cannot open input file" error when I tried to analyze a .mid.gz file with 
> size over 5 GB on a 32-bit Linux. The error traced back to gzopen() in mana.c 
> where it returned NULL when opening the file. I understand that 32-bit Linux may 
> not be able to handle files with size over 2 GB. I tried to add
> -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 to CFLAGS in the Makefile of MIDAS and
> the analyzer, but I still got the same error. Are there any workarounds that enable
> me to analyze large files on 32-bit systems?
> 
> p.s. The data file was also produced on a 32-bit Linux.
> 
> Thanks & Best Regards,
> 
> Jimmy
Entry  17 Jun 2011, Konstantin Olchanski, Forum, ladd00.triumf.ca https ssl certificate update 
The HTTPS SSL certificate on ladd00.triumf.ca has been updated. Same as the old
certificate, the new one is self-signed and your web browser may complain about
that and ask you to "save a security exception".

When you save the new certificate, you can verify that you are connected to the
real ladd00.triumf.ca by comparing the "SHA1 fingerprint" reported by your web
browser to the one given below (as reported by "svn update"):

Certificate information:
 - Hostname: ladd00.triumf.ca
 - Valid: from Jun 17 23:36:35 2011 GMT until Jun 16 23:36:35 2012 GMT
 - Issuer: DAQ, TRIUMF, Vancouver, BC, CA
 - Fingerprint: 2a:be:9f:9f:70:d4:dc:72:9f:63:bf:4f:fe:c0:2c:8f:a8:29:f2:f1

K.O.
Entry  30 Mar 2011, Exaos Lee, Forum, How large does "bank32" support? 
Reading an FADC buffer often needs a large buffer size, especially when several
FADCs work together. I want to know how large a bank32 can be.
    Reply  30 Mar 2011, Stefan Ritt, Forum, How large does "bank32" support? 
> Reading an FADC buffer often needs a large buffer size, especially when several
> FADCs work together. I want to know how large a bank32 can be.

The "32" in bank32 means 32-bits, so the bank holds 2^32=4 GBytes, hope that is enough for your ADC. 
The "normal" bank has only a 16-bit header, so it can hold only 64 kBytes. But for small banks, the overhead 
is therefore smaller.
    Reply  15 Apr 2011, Konstantin Olchanski, Forum, How large does "bank32" support? 
> Reading an FADC buffer often needs a large buffer size, especially when several
> FADCs work together. I want to know how large a bank32 can be.

Limitations in order:

- bank32 size is limited by its 32-bit size field (about 4 Gbytes)
- bank size is limited by event size
- event size in a midas mfe.c based frontend is limited to the value of
max_event_size set by the user
- maximum event size that can go through the MIDAS event buffer system is limited
to ODB value /Experiment/MAX_EVENT_SIZE (MAX_EVENT_SIZE in midas.h does not do
anything now)
- maximum event size is limited to *half* the size of the SYSTEM shared memory
event buffer (or any other buffers that the event has to go through)
- default size of the SYSTEM buffer is 8 Mbytes set by ODB /Experiment/Buffer
sizes/SYSTEM. This limits maximum event size to about 4 Mbytes.
- size of SYSTEM buffer can be increased to arbitrary size, but in practice no
bigger than the amount of computer physical memory minus space needed for running
the frontend program and the mlogger, which also allocate buffer space to hold 1
event of maximum size.

So for a computer with 8 Gbytes of RAM, you can make the SYSTEM buffer size 4
GBytes, set ODB MAX_EVENT_SIZE to 1 Gbyte, and in theory, you should be able to
write 1 Gbyte events from your frontend to disk.

In practice, I think the biggest events I have seen go through a MIDAS system are
non-compressed waveforms in the T2K/ND280 FGD and TPC detectors, about 4 Mbytes of
data per event.

Other considerations (rules of thumb):

1) the SYSTEM event buffer should be big enough to hold 10-100 events.
2) the SYSTEM event buffer should be big enough to hold about 5-10 seconds of data
- i.e. if your event size is 1 Mbyte and data rate is 1 Hz, 10 seconds of data
will be 1Mbyte*1Hz*10sec = 10 Mbytes.

This is because the SYSTEM buffer decouples the real-time activity of the frontend
program from non-real-time activity of writing data to storage devices.
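
Expressed as odbedit commands, the rule-of-thumb example above (1 Mbyte events at 1 Hz) would be configured something like this (a sketch only; values are in bytes, and MAX_EVENT_SIZE is kept below half the buffer size):

odbedit -c 'set "/Experiment/Buffer sizes/SYSTEM" 10485760'
odbedit -c 'set "/Experiment/MAX_EVENT_SIZE" 2097152'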

K.O.
Entry  15 Apr 2011, Jonathan Toebbe, Forum, Can't get example frontend to talk to khyt1331 kernel driver 
I'm brand new to MIDAS, and C system programming in general, so please be
gentle. I've compiled and installed MIDAS 2.3.0 on Ubuntu 10.04 LTS. I've built
the kernel driver, khyt1331.ko and installed it. It appears to be working, since
the camactest and esonetest programs included with the driver work just fine.

So I attempted to build the example experiment distributed with MIDAS, with the
following changes to the Makefile:

DRV_DIR   = $(MIDASSYS)/drivers/kernel/khyt1331_26

and

DRIVER = camac

The programs build without error but when I try to start the frontend, I get:

$ ./frontend
Frontend name          :     CSM-Nuclear Portable DAQ Frontend
Event buffer size      :     1000000
User max event size    :     10000
User max frag. size    :     5242880
# of events per buffer :     100

Connect to experiment...
*** buffer overflow detected ***: ./frontend terminated
======= Backtrace: =========
/lib/tls/i686/cmov/libc.so.6(__fortify_fail+0x50)[0x6de390]
/lib/tls/i686/cmov/libc.so.6(+0xe12ca)[0x6dd2ca]
/lib/tls/i686/cmov/libc.so.6(__strcpy_chk+0x44)[0x6dc644]
./frontend[0x805611f]
./frontend[0x806f656]
./frontend[0x8053d82]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0x612bd6]
./frontend[0x804bb81]
======= Memory map: ========
00110000-0012d000 r-xp 00000000 08:05 7471187    /lib/libgcc_s.so.1
0012d000-0012e000 r--p 0001c000 08:05 7471187    /lib/libgcc_s.so.1
0012e000-0012f000 rw-p 0001d000 08:05 7471187    /lib/libgcc_s.so.1
00264000-00277000 r-xp 00000000 08:05 7603242    /lib/tls/i686/cmov/libnsl-2.11.1.so
00277000-00278000 r--p 00012000 08:05 7603242    /lib/tls/i686/cmov/libnsl-2.11.1.so
00278000-00279000 rw-p 00013000 08:05 7603242    /lib/tls/i686/cmov/libnsl-2.11.1.so
00279000-0027b000 rw-p 00000000 00:00 0 
002db000-002dd000 r-xp 00000000 08:05 7603265    /lib/tls/i686/cmov/libutil-2.11.1.so
002dd000-002de000 r--p 00001000 08:05 7603265    /lib/tls/i686/cmov/libutil-2.11.1.so
002de000-002df000 rw-p 00002000 08:05 7603265    /lib/tls/i686/cmov/libutil-2.11.1.so
003b1000-003c6000 r-xp 00000000 08:05 7603257    /lib/tls/i686/cmov/libpthread-2.11.1.so
003c6000-003c7000 r--p 00014000 08:05 7603257    /lib/tls/i686/cmov/libpthread-2.11.1.so
003c7000-003c8000 rw-p 00015000 08:05 7603257    /lib/tls/i686/cmov/libpthread-2.11.1.so
003c8000-003ca000 rw-p 00000000 00:00 0 
004ea000-004f1000 r-xp 00000000 08:05 7603261    /lib/tls/i686/cmov/librt-2.11.1.so
004f1000-004f2000 r--p 00006000 08:05 7603261    /lib/tls/i686/cmov/librt-2.11.1.so
004f2000-004f3000 rw-p 00007000 08:05 7603261    /lib/tls/i686/cmov/librt-2.11.1.so
005fb000-005fc000 r-xp 00000000 00:00 0          [vdso]
005fc000-0074f000 r-xp 00000000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
0074f000-00750000 ---p 00153000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
00750000-00752000 r--p 00153000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
00752000-00753000 rw-p 00155000 08:05 7603231    /lib/tls/i686/cmov/libc-2.11.1.so
00753000-00756000 rw-p 00000000 00:00 0 
00783000-00796000 r-xp 00000000 08:05 7471302    /lib/libz.so.1.2.3.3
00796000-00797000 r--p 00012000 08:05 7471302    /lib/libz.so.1.2.3.3
00797000-00798000 rw-p 00013000 08:05 7471302    /lib/libz.so.1.2.3.3
008ab000-008c6000 r-xp 00000000 08:05 7471129    /lib/ld-2.11.1.so
008c6000-008c7000 r--p 0001a000 08:05 7471129    /lib/ld-2.11.1.so
008c7000-008c8000 rw-p 0001b000 08:05 7471129    /lib/ld-2.11.1.so
008e4000-00908000 r-xp 00000000 08:05 7603239    /lib/tls/i686/cmov/libm-2.11.1.so
00908000-00909000 r--p 00023000 08:05 7603239    /lib/tls/i686/cmov/libm-2.11.1.so
00909000-0090a000 rw-p 00024000 08:05 7603239    /lib/tls/i686/cmov/libm-2.11.1.so
08048000-0809d000 r-xp 00000000 08:11 20318114   /home/midas/online/test/frontend
0809d000-0809e000 r--p 00055000 08:11 20318114   /home/midas/online/test/frontend
0809e000-080a3000 rw-p 00056000 08:11 20318114   /home/midas/online/test/frontend
080a3000-080c5000 rw-p 00000000 00:00 0 
0835f000-08380000 rw-p 00000000 00:00 0          [heap]
b7881000-b7884000 rw-p 00000000 00:00 0 
b7893000-b7895000 rw-p 00000000 00:00 0 
bf938000-bf94d000 rw-p 00000000 00:00 0          [stack]
Aborted

Please help me figure out what's going wrong!

Thank you,
Jon
Entry  28 Feb 2011, Konstantin Olchanski, Info, javascript example experiment example.html
I just committed a MIDAS example for using most mhttpd html and javascript functions: ODBGet(), 
ODBSet(), ODBRpc() & co.

Please checkout .../midas/examples/javascript1, and follow instructions in the README file.

For your enjoyment, the example html file is attached to this message.

(to use the function ODBRpc_rev1 - the midas rpc call that returns data to the web page - you need 
mhttpd svn rev 4994 committed today. Other functions should work with any revision of mhttpd)

svn rev 4992.
K.O.
Entry  17 Jan 2011, Andreas Suter, Bug Report, Problems with midas history SVN 4936 
I have the following problems after updating to midas SVN 4936: the history
system (web page via mhttpd) seems to stop working. I checked the history files
themselves and they are indeed written, except that the event IDs are not the
same anymore (I mean the ones defined under /Equipment/XXX/Common/Event ID);
rather, the mlogger seems to choose an ID by itself.
Currently the only way to get things working again was to recompile midas after
adding -DOLD_HISTORY to the CFLAGS, which is troublesome since it is likely to be
forgotten with the next SVN update. When looking into the SVN I have the
impression there is something going on concerning the history system, however
I couldn't find any documentation.
What is the best practice for the future, in order not to run into any problems 
but still being able to look at the old history (also from within the web-page 
via mhttpd)?
    Reply  13 Feb 2011, Lee Pool, Bug Report, Problems with midas history SVN 4936 
> I have the following problems after updating to midas SVN 4936: the history
> system (web page via mhttpd) seems to stop working. I checked the history files
> themselves and they are indeed written, except that the event IDs are not the
> same anymore (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> rather, the mlogger seems to choose an ID by itself.
> Currently the only way to get things working again was to recompile midas after
> adding -DOLD_HISTORY to the CFLAGS, which is troublesome since it is likely to be
> forgotten with the next SVN update. When looking into the SVN I have the
> impression there is something going on concerning the history system, however
> I couldn't find any documentation.
> What is the best practice for the future, in order not to run into any problems 
> but still being able to look at the old history (also from within the web-page 
> via mhttpd)?

Hi... 

Do you mind giving a little more detail? We might have the same issue, where we got
complaints that the midas history stops working after a certain time.

L
       Reply  16 Feb 2011, Konstantin Olchanski, Bug Report, Problems with midas history SVN 4936 
> 
> Do you mind giving a little more detail? We might have the same issue, where we got
> complaints that the midas history stops working after a certain time.
> 


Yes, please do supply more information. What problems do *you* see?


K.O.
          Reply  16 Feb 2011, Lee Pool, Bug Report, Problems with midas history SVN 4936 
> > 
> > Do you mind giving a little more detail? We might have the same issue, where we got
> > complaints that the midas history stops working after a certain time.
> > 
> 
> 
> Yes, please do supply more information. What problems do *you* see?
> 
> 
> K.O.

Hi.

Uhm, mine might be completely unrelated to this, but it just so happened that rev.
4936 was the one used in a recent experiment, in which there were complaints about
the responsiveness of the history plots. The history plots would take up to 30 seconds
to respond if the run was about 30-40 minutes old. When the run was less than about 10 minutes
old, the history plot was responsive to within 1-2 seconds.

I received rather limited information regarding this problem, hence my apprehension
about stating it as a *problem* or bug. It could be something related to hardware/beam etc.

Lee
             Reply  17 Feb 2011, Stefan Ritt, Bug Report, Problems with midas history SVN 4936 
> Uhm, mine might be completely unrelated to this, but it just so happened that rev.
> 4936 was the one used in a recent experiment, in which there were complaints about
> the responsiveness of the history plots. The history plots would take up to 30 seconds
> to respond if the run was about 30-40 minutes old. When the run was less than about 10 minutes
> old, the history plot was responsive to within 1-2 seconds.

Ah, that rings a bell. How big are your history files on disk? How much RAM do you have?

What I see in our experiment is that Linux buffers everything I write to disk in a cache
located in RAM. But this cache is limited, so after a certain time it's overwritten. This
is handled by the OS; the application (mlogger in this case) has no influence on it.
Let's say you write 5 MB/minute of history, and your cache is 50 MB large. Then after 10
minutes you can still read the history data from the RAM cache, which is ~10x faster than
your disk. But your older history data (30-40 min) is flushed out of the cache and has to be
re-read from disk. A typical symptom of this is that the first time you display this it
takes maybe 30 seconds, but if you do a "reload" of your page it goes much faster. In that
case the contents are cached again in RAM. If you observe this, you can almost be certain
that you see the "too small RAM cache" problem. In that case just add RAM and things should
run better (I use 16 GByte in my machine).

Best regards,

  Stefan
    Reply  16 Feb 2011, Konstantin Olchanski, Bug Report, Problems with midas history SVN 4936 
> I have the following problems after updating to midas SVN 4936: the history
> system (web page via mhttpd) seems to stop working. I checked the history files
> themselves and they are indeed written, except that the event IDs are not the
> same anymore (I mean the ones defined under /Equipment/XXX/Common/Event ID);
> rather, the mlogger seems to choose an ID by itself.

Yes, I found the problem - it was introduced around svn rev 4827 in September 2010.

It is fixed now, please do this:
1) update history_midas.c to latest svn rev 4979
1a) do NOT update any other files - update only history_midas.c
2) rebuild mlogger (it will do no harm and no good if you rebuild everything)
3) odbedit save odb.xml
4) in odb, remove /history/events and /history/tags (you can also set "/History/DisableTags" to "y")
5) restart mlogger
6) observe that odb /history/events now has event ids same as equipment ids
7) restart your frontend, observe that history file is growing
8) use mhdump to observe that history is now written with correct event id
9) go to mhttpd history plot, you should see the new data coming in. Plot history in the "1 year" scale, you 
should see the old data and you should see a gap where data was written with wrong event id
10) I should still have an mhrewrite program sitting somewhere that can change the event ids inside midas
history files. If you have many data files with the wrong event id, let me know and I will find this program and
tell you how to use it to repair your data files.

> Currently the only way to get things working again was to recompile midas after
> adding -DOLD_HISTORY to the CFLAGS, which is troublesome since it is likely to be
> forgotten with the next SVN update.

Yes, I am glad you found OLD_HISTORY, I kept it just for the case some breakage like this happens. I will still 
keep it around until the dust settles.

> When looking into the SVN I have the impression there is something going on concerning the history
system, however I couldn't find any documentation.

Yes, you found the right stuff, and it is partially documented. mlogger uses /History/Events to map history 
event names (equipment names in your case) to history event ids. But in your case, the wrong event id has 
been assigned by mlogger so nothing worked right. As a bonus, I now see an inconsistency between the event_id
code remaining in mlogger (which is not used) and the event_id code in history_midas (which *is* used). I will be
straightening this stuff out over the next few days.

I hope my correction to history_midas.cxx is good enough to get you going for now.

> What is the best practice for the future, in order not to run into any problems 
> but still being able to look at the old history (also from within the web-page 
> via mhttpd)?

Personally, I think that the midas history storage into binary files is not robust enough
when facing changes to equipment and event ids, renaming and deleting of stuff, etc. There
are other limitations as well, e.g. the 16-bit history event id.

The newly implemented SQL history storage (uses ODBC layer, MySQL supported, PgSQL partially
implemented) does not have any of these problems and seems to work well enough
for T2K/ND280. Sometimes MySQL history is even faster when making history plots in mhttpd.

I am now thinking about implementing SQL history storage in SQLite files, which would not have
any of these problems either. Performance and robustness against database corruption remain open questions, though.

K.O.
       Reply  16 Feb 2011, Konstantin Olchanski, Bug Report, Problems with midas history SVN 4936 
It looks like email notices did not go out the first time. Please see my replies earlier in this thread. K.O.

Entry  23 Dec 2010, Konstantin Olchanski, Bug Report, odb corruption, odb race condition? 
The following script makes midas very unhappy and eventually causes odb corruption. I suspect the reason is some kind of race condition between the client
creation and destruction code and the watchdog activity (each client periodically runs cm_watchdog() to check if other clients are still alive, O(NxN) total complexity).
Among the messages appearing in midas.log:

Thu Dec 23 11:59:08 2010 [ODBEdit28,INFO] Client 'unknown' on buffer 'SYSMSG' removed by bm_open_buffer because client pid 20463 does not exist
Thu Dec 23 11:59:09 2010 [ODBEdit43,INFO] Client 'unknown' on buffer 'SYSMSG' removed by cm_watchdog because client pid 20465 does not exist
Thu Dec 23 12:11:21 2010 [ODBEdit,ERROR] [odb.c:1061:db_open_database,ERROR] Removing client 'ODBEdit11', pid 21536, index 27 because the pid no longer exists
Thu Dec 23 17:06:15 2010 [ODBEdit,ERROR] [odb.c:988:db_open_database,ERROR] maximum number of clients exceeded
Thu Dec 23 12:10:30 2010 [ODBEdit9,ERROR] [odb.c:3247:db_get_value,ERROR] "Name" is of type NULL, not STRING

The last message about <"Name" is of type NULL> appears during normal operation of the ND280 DAQ, leading me into these investigations.

Notes:
a) the script runs at most 50 copies of odbedit, never exceeding midas.h MAX_CLIENTS value 64, so one does not expect to see messages about "maximum number of 
clients exceeded"
b) the script runs 50 copies of odbedit in parallel, increasing the likelihood of whatever race condition is causing this. In the ND280 system, likelihood of failure is 
increased by the large number of running clients (10-20-30 clients), each client running periodic cm_watchdog, to collide with new client creation or destruction.
c) in other experiments, we do not see this (ok, we do have midas meltdowns once in a while) because (1) we tend to have fewer clients (reduced frequency of 
cm_watchdog), (2) we tend to not start and stop midas clients too often (reduced frequency of running client creation and destruction). (NB it seems like ND280 people 
tend to run many scripts containing odbedit commands, so they effectively start and stop midas clients more often than usual).


#!/usr/bin/perl -w
# start 50 copies of odbedit in parallel to stress client creation and removal
#$cmd = "odbedit -c \'scl -w\' &";
$cmd = "odbedit -c \'ls -l /system/clients\' &";
for (my $i=0; $i<50; $i++)
{
system $cmd;
}
#end
    Reply  24 Dec 2010, Konstantin Olchanski, Bug Report, odb corruption, odb race condition? 
> Thu Dec 23 12:10:30 2010 [ODBEdit9,ERROR] [odb.c:3247:db_get_value,ERROR] "Name" is of type NULL, not STRING

This is caused by a race condition between client removal in cm_delete_client_info() and cm_exist().

The race condition in cm_exist() works like this:
- db_enum_key() returns the hkey of (a pointer to) the next /System/Clients/PID directory
- the client corresponding to PID is removed; our hkey now refers to a deleted entry
- db_get_value() tries to use the now stale hkey pointing to a deleted entry and complains about an invalid key TID.

Because the offending db_get_value() is called with the "create if not found" argument set to TRUE, there is potential
for writing into ODB using a stale hkey, maybe leading to ODB corruption. Other than that, this race condition seems
to be benign.

cm_exist() is called from:
everybody->cm_yield()->al_check()->cm_exist()

Further analysis:
- cm_yield() calls al_check() every 10 sec, al_check() calls cm_exist() to check for "program is not running" alarms.
- in al_check() cm_exist() is called once for each entry in /Programs/xxx, even for programs with no alarms. (Maybe I should change this?)
- assuming 10 programs are running (10 clients), every 10 seconds, cm_exist() will be called 10 times and inside, will loop over 10 clients, exposing the enum-get race condition 10*10=100 times every 10 seconds. Usually, 
ODB /Programs/ has many more entries than there are active clients, further increasing the frequency of exposure of this race condition.

K.O.
       Reply  24 Dec 2010, Konstantin Olchanski, Bug Report, odb corruption, odb race condition? 
> > Thu Dec 23 12:10:30 2010 [ODBEdit9,ERROR] [odb.c:3247:db_get_value,ERROR] "Name" is of type NULL, not STRING
> This is caused by a race condition between client removal in cm_delete_client_info() and cm_exist().
> ... this race condition seems to be benign.

Not so benign - after fixing cm_exist() to check the return value of db_get_value() and calling it without the "create" flag,
a crasher turned up inside db_find_key() called by db_get_value() with these stale hkeys. For invalid keys (not TID_KEY),
it would call db_get_path() and crash.

So after adding a check for valid key types, my test script runs much better - all the major weirdness is gone, I only see
rare messages from db_find_key(), db_get_key() and db_get_value() about invalid key and data types (after all,
I did not fix the underlying race condition).

The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...

K.O.
          Reply  26 Dec 2010, Konstantin Olchanski, Bug Report, race condition and deadlock between ODB lock and SYSMSG lock in cm_msg() 
> 
> The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...
> 


In theory, we understand how programs that use 2 semaphores to protect 2 shared resources can deadlock
if there are mistakes in how locks are used.

For example, consider 2 semaphores A and B and 2 concurrent
subroutines foo() and bar() running at exactly the same time:

foo() { lock(A); lock(B); do stuff; unlock(B); unlock(A); } and
bar() { lock(B); lock(A); do stuff; unlock(A); unlock(B); }

This system will deadlock immediately with foo() taking semaphore A, bar() taking semaphore B,
then foo() waiting for B and bar() waiting for A forever.

This situation can also be described as a race condition where foo() and bar() are racing each
other to get the semaphores, with the result depending on who gets there first
and, in this case, sometimes the result is deadlock.

In this example, the size of the race condition time window is the wall clock time
between actually locking both semaphores in the sequence "lock(X); lock(Y);". While
locking a semaphore is "instantaneous", the actual function lock() takes time to call
and execute, and this time is not fixed - it can change if the CPU takes a hardware
interrupt (quick), a page fault (when we may have to wait until data is read from the swap file)
or a scheduler interrupt (when we are outright stopped for milliseconds while the CPU runs
some other process).

In reality, subroutines foo() and bar() do not run at exactly the same time, so the probability
of deadlock will depend on how often foo() and bar() are executed, the size of the race condition time window,
the number of processes executing foo() and bar(), and the amount of background activity
like swapping, hardware interrupts, etc.

(Also note that on a single-cpu system, we will probably never see a deadlock between foo() and bar()
because they will never be running at the same time. But the deadlock is still there, waiting
for the lucky moment when the scheduler switches from foo() to bar() just at the wrong place).

There is more on deadlocks and stuff written at:
http://en.wikipedia.org/wiki/Deadlock
http://en.wikipedia.org/wiki/Race_condition

In case of MIDAS, the 2 semaphores are the ODB lock and the SYSMSG lock (also remember about locks
for the shared memory event buffers, SYSTEM, etc, but they seem to be unlikely to deadlock).

The function foo() is any ODB function (db_xxx) that locks ODB and then calls cm_msg() (which locks SYSMSG).

The function bar() is cm_msg() which locks SYSMSG and then calls some ODB db_xxx() function which tries to lock ODB.

(This is made more interesting by cm_watchdog() periodically called by alarm(), where we alternately
take SYSMSG (via bm_cleanup) and ODB locks.)

I think this establishes a theoretical possibility for MIDAS to deadlock on the ODB and SYSMSG semaphores.

In practice, I think we almost never see this deadlock because cm_msg() is not called very often, and during normal
operation, is almost never called from inside ODB functions holding the ODB lock - almost all calls to cm_msg from
ODB functions are made to report some kind of problem with the ODB internal structure, something that "never"
happens.

By "luck" I stumbled into this deadlock when doing the "odbedit" fork-bomb torture tests, when high ODB lock
activity is combined with high cm_msg() activity reporting clients starting and stopping, combined with a large
number of MIDAS clients running, starting and stopping.

So a deadlock I see within 1 minute of running the torture test, other lucky people will see after running an experiment
for 1 year, or 1 month, or 1 day, depending.

In theory, this deadlock can be removed by establishing a fixed order of taking locks. There will never be a deadlock
if we always take the SYSMSG lock first, then ask for the ODB lock.

In practice, it means that using cm_msg() while holding an ODB lock is automatically dangerous
and should be avoided if not forbidden.

And it does work. By refactoring a few places in client startup, shutdown and cleanup code, I made the deadlock "go away",
and my test script (posted in my first message) no longer deadlocks, even if I run hundreds of odbedit's at the same time.

Unfortunately, it is impractical to audit and refactor all of MIDAS to completely remove this problem. MIDAS call graphs
are complicated enough to make manual analysis of lock sequences infeasible, and
I expect any automatic lock analysis tool would be defeated by the cm_watchdog() periodic interrupt.

An improvement is possible if we make cm_msg() safe for calling from inside the ODB db_xxx() function. Instead
of immediately sending messages to SYSMSG (requiring a SYSMSG lock), if ODB is locked, cm_msg() could
save the messages in a buffer, which would be flushed when the ODB lock is released. (This does not fix
all the other places that take ODB and SYSMSG locks in arbitrary order, but I think those places are not as
likely to deadlock, compared to cm_msg()).
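
In code, the buffering scheme might look something like this (a sketch of the idea only, not the actual midas.c implementation; the function names are made up):

#include <string.h>

#define DEFER_BUF_SIZE 10000
static char defer_buf[DEFER_BUF_SIZE]; /* messages deferred while ODB is locked */

/* called by cm_msg() instead of taking the SYSMSG lock while we hold the ODB lock */
static void msg_defer(const char *msg)
{
   if (strlen(defer_buf) + strlen(msg) + 2 < DEFER_BUF_SIZE) {
      strcat(defer_buf, msg);
      strcat(defer_buf, "\n");
   }
}

/* called right after the ODB lock is released: taking the SYSMSG
   lock here can no longer invert the lock order */
static void msg_flush(void)
{
   if (defer_buf[0]) {
      /* lock SYSMSG, write defer_buf to the message buffer, unlock SYSMSG */
      defer_buf[0] = 0;
   }
}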

However, now that I have greatly reduced the probability of deadlock in the client startup/shutdown/cleanup code,
maybe there is no urgency for changing cm_msg() - remember that if we do not call cm_msg() we will never deadlock -
and during normal operation, cm_msg() is almost never called.

Investigation completed, I will now cleanup, retest and commit my changes to midas.c and odb.c. Looking into this
and writing it up was a good intellectual exercise.

P.S. Also remember that there are locks for shared memory event buffers (SYSTEM, etc), but those do not involve
lock inversion leading to deadlock. I think all lock sequences are like this: SYSTEM->ODB, SYSTEM->SYSMSG->ODB,
there are no inverted sequences SYSMSG->SYSTEM or ODB->SYSTEM and the only deadlocking
sequence SYSTEM->ODB->SYSMSG, does not really involve the SYSTEM lock.

K.O.
             Reply  29 Dec 2010, Konstantin Olchanski, Bug Report, use of nested locks in MIDAS 
A "nested" or "recursive" lock is a special type of lock that permits a lock holder to lock the same resources again and again, without deadlocking on itself. They are 
very useful, but tricky to implement because most system lock primitives (SYSV semaphores, POSIX mutexes, etc) do not permit nested locks, so all the logic for 
"yes, I am the holder of the lock, yes, I can go ahead without taking it again" (plus the reverse on unlocking) has to be done "by hand". As ever, if implemented 
wrong or used wrong, Bad Things happen. Many people dislike nested locks because of the added complexity, but realistically, it is impossible to build a system 
that does not require nested locking at least somewhere.
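
For illustration, the usual hand-made recursive lock on top of a non-recursive primitive looks something like this (a generic pthreads sketch, not the MIDAS implementation):

#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_t owner;    /* valid only while lock_depth > 0 */
static int lock_depth = 0; /* counts nested lock calls by the owner */

void nested_lock(void)
{
   /* if we already hold the lock, only the nesting count changes;
      a non-owner never sees its own thread id in 'owner', so this
      check is safe without taking the mutex first */
   if (lock_depth > 0 && pthread_equal(owner, pthread_self())) {
      lock_depth++;
      return;
   }
   pthread_mutex_lock(&mutex);
   owner = pthread_self();
   lock_depth = 1;
}

void nested_unlock(void)
{
   if (--lock_depth == 0)
      pthread_mutex_unlock(&mutex);
}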

MIDAS lock primitives - ss_semaphore_wait_for(), db_lock_database() and bm_lock_buffer() implement a type of nested locks.

ODB locks implemented in db_lock_database() fully support nested (recursive) locking and this feature is heavily used by the ODB library. Many ODB db_xxx() 
functions take the ODB lock, do something, then call another ODB function that also takes the ODB lock recursively. This works well.

Unfortunately, the ODB nested lock implementation is NOT thread-safe. (Unless one is connected through the mserver, in which case, db_xxx() functions ARE 
thread-safe because all ODB access is serialized by the mserver RPC mutex).

Event buffer locks implemented in bm_lock_buffer() rely on ss_semaphore_xxx() to provide nested locking.

ss_semaphore_wait_for() uses SYSV semaphores, which do not provide nested locking, except when called from cm_watchdog(). (keep reading).

Because bm_lock_buffer() does not implement nested locking, use of cm_msg() in the buffer management code will lead to self-deadlock, as shown in the following
stack trace: bm_cleanup() was working on the SYSMSG buffer and had it locked, then called cm_msg(), which is now waiting on the SYSMSG lock that we are holding
ourselves.

(gdb) where
#0  0x00007fff87274e9e in semop ()
#1  0x0000000100024075 in ss_semaphore_wait_for (semaphore_handle=1179654, timeout=300000) at src/system.c:2280
#2  0x0000000100015292 in bm_lock_buffer (buffer_handle=<value temporarily unavailable, due to optimizations>) at src/midas.c:5386
#3  0x000000010000df97 in bm_send_event (buffer_handle=1, source=0x7fff5fbfd430, buf_size=<value temporarily unavailable, due to optimizations>, 
async_flag=0) at src/midas.c:6484
#4  0x000000010000e6f5 in cm_msg (message_type=2, filename=<value temporarily unavailable, due to optimizations>, line=4226, routine=0x10004559f 
"bm_cleanup", format=0x100045550 "Client '%s' on buffer '%s' removed by %s because process pid %d does not exist") at src/midas.c:722
#5  0x000000010001553c in bm_cleanup_buffer_locked (i=<value temporarily unavailable, due to optimizations>, who=0x100045f42 "bm_open_buffer", 
actual_time=869425784) at src/midas.c:4226
#6  0x00000001000167ee in bm_cleanup (who=0x100045f42 "bm_open_buffer", actual_time=869425784, wrong_interval=0) at src/midas.c:4286
#7  0x000000010001ae27 in bm_open_buffer (buffer_name=<value temporarily unavailable, due to optimizations>, buffer_size=100000, 
buffer_handle=0x10006e9ac) at src/midas.c:4550
#8  0x000000010001ae90 in cm_msg_register (func=0x100000c60 <process_message>) at src/midas.c:895
#9  0x0000000100009a13 in main (argc=3, argv=0x7fff5fbff3d8) at src/odbedit.c:2790

This example deadlock is not a normal code path - I accidentally exposed this deadlock sequence by adding some extra locking.

But in normal use, cm_msg() is called quite often from cm_watchdog() and as protection against this type of deadlock, MIDAS
ss_semaphore_xxx() has a special case that permits one level of nesting for locks called by code executed from cm_watchdog(). This is a very
clever implementation of partial nested locking.

So again, we are running into problems with cm_msg() - logically it should be at the very bottom of the system hierarchy - everybody calls it from their most 
delicate places, while holding various locks, etc - but instead, cm_msg() calls the whole MIDAS system all over again - it calls ODB functions, event buffer functions, 
etc - mostly to open and to write into the SYSMSG buffer.

If you are reading this, I hope you are getting a better idea of the difference between textbook systems and systems that are used in the field to get some work 
done.

K.O.
          Reply  29 Dec 2010, Konstantin Olchanski, Bug Report, fixed. odb corruption, odb race condition? 
> 
> The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...
> 


I committed changes to odb.c and midas.c fixing a number of places that could corrupt ODB and SYSMSG data, and fixing a number of deadlocks. Without these 
changes, on my Mac, MIDAS will reliably corrupt ODB or deadlock while running my odbedit fork-bomb torture test script. These changes still need to be tested on 
Linux (but I do not expect any problems).

Because my changes do not fix the original race condition in client creation/removal/cleanup, you may still occasionally see messages like this:
13:35:14 [ODBEdit24,ERROR] [odb.c:2112:db_find_key,ERROR] hkey 169592 invalid key type 376
13:35:15 [ODBEdit28,ERROR] [odb.c:3268:db_get_value,ERROR] hkey 162072 entry "Name" is of type NULL, not STRING

For now, I am happy that we no longer corrupt ODB (nor deadlock) and I will work with Stefan on a permanent solution for this.

Special thanks go to the T2K/ND280 experiment, specifically to Tim Nicholls and to the unnamed person who emailed me their script that executes many odbedit 
commands to set up midas history plots.


svn rev 4930
K.O.


P.S. Below is my torture test script; I usually run many copies in sequence: "./test1.perl >& xxx1; ./test1.perl >& xxx2; ... etc".

#!/usr/bin/perl -w
# torture test: spawn many concurrent odbedit clients to stress
# ODB client creation, removal and cleanup
for (my $i=0; $i<50; $i++)
{
   # alternate commands used during testing:
   #my $cmd = "odbedit -c \'scl -w\' &";
   #my $cmd = "odbedit -c \'ls -l /system/clients\' >& xxx$i &";
   my $cmd = "odbedit -c \'ls -l /system/clients\' &";   # run in background
   system $cmd;
}
#end

K.O.
             Reply  11 Feb 2011, Konstantin Olchanski, Bug Report, fixed. odb corruption, odb race condition? 
> > 
> > The only remaining problem when running my script is some kind of deadlock between the ODB and SYSMSG semaphores...
> > 
> 
> For now, I am happy that we no longer corrupt ODB (nor deadlock) ...
>

Found one more deadlock between ODB and SYSMSG semaphores, this time through cm_watchdog():

If cm_watchdog() somehow runs while we are holding the ODB semaphore, it will eventually try to lock SYSMSG (through bm_cleanup() & co) in
violation of our semaphore locking order. If at the same time another application tries to lock things in the correct order (SYSMSG first, ODB last),
the two programs deadlock (each waits for the other forever) - see the sketch below. I presently have two copies of gdb attached to two copies of odbedit
waiting for each other in exactly this cm_watchdog scenario...
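
For illustration, here is a minimal self-contained sketch of this lock-order inversion, using pthread mutexes in place of the MIDAS SysV semaphores (thread A plays
the cm_watchdog role, thread B the normal client; this is a demonstration of the pattern, not the actual midas code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sysmsg_sem = PTHREAD_MUTEX_INITIALIZER; /* lock first */
static pthread_mutex_t odb_sem    = PTHREAD_MUTEX_INITIALIZER; /* lock last  */

static void *watchdog_path(void *arg)      /* violates the locking order */
{
   (void) arg;
   pthread_mutex_lock(&odb_sem);           /* holds ODB ...              */
   pthread_mutex_lock(&sysmsg_sem);        /* ... then wants SYSMSG      */
   printf("watchdog path got both locks\n");
   pthread_mutex_unlock(&sysmsg_sem);
   pthread_mutex_unlock(&odb_sem);
   return NULL;
}

static void *client_path(void *arg)        /* follows the locking order  */
{
   (void) arg;
   pthread_mutex_lock(&sysmsg_sem);        /* holds SYSMSG ...           */
   pthread_mutex_lock(&odb_sem);           /* ... then wants ODB         */
   printf("client path got both locks\n");
   pthread_mutex_unlock(&odb_sem);
   pthread_mutex_unlock(&sysmsg_sem);
   return NULL;
}

int main(void)
{
   pthread_t a, b;
   pthread_create(&a, NULL, watchdog_path, NULL);
   pthread_create(&b, NULL, client_path, NULL);
   pthread_join(a, NULL);   /* with unlucky timing, this never returns   */
   pthread_join(b, NULL);
   return 0;
}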

Solution shall follow quickly, I have been hunting this deadlock for the last couple of weeks...

K.O.
                Reply  15 Feb 2011, Konstantin Olchanski, Bug Report, fixed. odb corruption, odb race condition? 
> Solution shall follow quickly, I have been hunting this deadlock for the last couple of weeks...

Over the last couple of days I made a series of commits to odb.c and midas.c to implement a buffer-based cm_msg(),
to fix the latest deadlock problem, and also to help with the race conditions in client creation and cleanup.

My torture test now runs okay on my Mac; the one remaining problem is spurious client removal caused
by semaphore starvation - I see 2, 3, 7, even 10 second wait times for semaphores - probably caused by some
kind of unfairness in the MacOS SysV semaphore implementation (in a "fair" semaphore implementation,
the process that has waited the longest is woken up first, and one would never see semaphore wait
times measured in seconds). The fairness of MacOS POSIX semaphores is probably worth investigating. On Linux
things are probably different, and under normal running conditions one should not see any semaphore starvation.
A minimal sketch for timing semaphore waits follows.
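
This is the kind of self-contained test one could use to measure SysV semaphore wait times (my sketch, not the actual test I ran; run several copies
concurrently and watch for long waits):

/* sketch: measure SysV semaphore wait times under contention */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; };   /* minimal definition required by semctl */

static double now(void)
{
   struct timeval tv;
   gettimeofday(&tv, NULL);
   return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(void)
{
   union semun arg = { 1 };
   int sem = semget(ftok(".", 'T'), 1, IPC_CREAT | IPC_EXCL | 0666);
   if (sem >= 0)
      semctl(sem, 0, SETVAL, arg);             /* first copy initializes */
   else
      sem = semget(ftok(".", 'T'), 1, 0666);   /* later copies attach    */

   for (int i = 0; i < 1000; i++) {
      struct sembuf take = { 0, -1, 0 }, give = { 0, 1, 0 };
      double t0 = now();
      semop(sem, &take, 1);                    /* lock */
      double wait = now() - t0;
      if (wait > 1.0)                          /* a "fair" kernel should  */
         printf("waited %.1f s\n", wait);      /* never reach whole secs  */
      usleep(1000);                            /* pretend to do some work */
      semop(sem, &give, 1);                    /* unlock */
   }
   return 0;
}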

I will be doing extensive tests of this update at TRIUMF, but I do not expect any problems. If you use this
version and see any anomalies, please report them as replies to this message or email me directly.

svn rev 4976
K.O.
                   Reply  16 Feb 2011, Konstantin Olchanski, Bug Report, fixed. odb corruption, odb race condition? 
> My torture test now runs okay on my Mac; the one remaining problem is spurious client removal caused
> by semaphore starvation...

My torture test runs okay on Linux and I do not see any problems with spurious client removal - in fact
I do not see any of the strange long waits for semaphores that I was seeing on MacOS. Must be more
proof that MacOS is years behind Linux in kernel technology (but parsecs ahead in user experience).

K.O.
Entry  16 Feb 2011, Konstantin Olchanski, Info, Notes on MIDAS history 
Some notes on the MIDAS history.

MIDAS documentation at
http://midas.psi.ch/htmldoc/F_History_logging.html
describes:

- midas equipment concepts
- midas equipment event ids
- midas data banks
- midas history concepts
- history records (correspond to data banks)
- history record ids (correspond to equipment ids)
- history tags (describe the structure of the data in each history record)
- the code path from the user read function through odb and mlogger to the history file
- midas history file internal data format
- mhdump, the tool for looking inside history files

But some things remain unclear after reading the documentation - where are the history definitions 
saved? what happens if an equipment is deleted or renamed? what's all the mumbling about 
/History/Events and /History/Tags? what's this /History/PerVariableHistory?

As I go through my review of the MIDAS history code, I will attempt to clarify some of this information.

1) PerVariableHistory.

The default value of 0 is intended to operate the midas history in "traditional" mode. In this mode:
- there is one history record for each equipment
- history record id is equal to the equipment id
- /History/Events and /History/Tags are not required and can be safely deleted
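
For concreteness, here is a hedged sketch (the client name is arbitrary, and I am assuming the flag lives at /History/PerVariableHistory as discussed above) of
setting this flag from a small MIDAS client program; odbedit can do the same interactively:

/* sketch: set /History/PerVariableHistory from a standalone MIDAS client */
#include "midas.h"

int main(void)
{
   HNDLE hDB;
   INT flag = 1;   /* 1 = per-variable history,
                      0 = "traditional" per-equipment mode (the default) */

   cm_connect_experiment("", "", "set_history_mode", NULL);
   cm_get_experiment_database(&hDB, NULL);

   /* db_set_value() creates the key if it does not exist yet */
   db_set_value(hDB, 0, "/History/PerVariableHistory",
                &flag, sizeof(flag), 1, TID_INT);

   cm_disconnect_experiment();
   return 0;
}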

The downside of this history mode is that there is only one history record per equipment. If some 
equipment has many banks, not all of which are updated at the same time, then every time one bank is 
updated, data for all banks is written to the history file, even if the data in those other banks has not 
changed. The result is undesired duplication of data in the midas history files. In turn, this slows down 
the making of history plots (mhttpd has to read more data from bigger data files, which takes time), 
and for long running experiments it may pose disk space problems for storing the history files.

In addition, when logging history data to an SQL database, each history record is mapped to an SQL 
table, so all variables from all banks of an equipment end up in the same SQL table - and on top of the 
data duplication described above, this creates a data presentation problem - database users and 
administrators dislike SQL tables with "too many" columns!

To solve both problems - reduce data duplication and avoid creating over-large SQL tables - per-
variable history has been implemented.

to be continued...

K.O.
    Reply  16 Feb 2011, Konstantin Olchanski, Info, Notes on MIDAS history 
> 
> 1) PerVariableHistory.
> 
> The default value of 0 is intended to operate the midas history in "traditional" mode. In this mode:
> - there is one history record for each equipment
> - history record id is equal to the equipment id
> - /History/Events and /History/Tags are not required and can be safely deleted
>

I have now committed an example experiment for testing and using non-per-variable history:
.../midas/examples/history1

I confirm that this example works as expected after src/history_midas.cxx is updated to the latest rev 4979 (today). I guess it also worked just 
fine before the breakage in svn rev 4827 last September.

svn rev 4980.



Here is the README file:

Example experiment "history1"

Purpose:
example and test of a simple periodic frontend writing slow controls data into midas history

To run:
use bash shell
assuming MIDAS is installed in $HOME/packages/midas on linux, otherwise edit setup.sh and Makefile
run make to build feslow.exe
run source ./setup.sh
when starting this experiment for the very first time, load experiment settings from odb.xml: odbedit -c "load odb.xml"
run ./start_daq.sh
mlogger and mhttpd should now be running
connect to the midas status page at http://localhost:8080 (the port number is set in start_daq.sh)
start the example frontend from the "programs" page
observe event number of equipment "slow" is incrementing
go to the "Slow" equipment page (click on "Slow" on the midas status page)
observe numbers are changing when you refresh the web page
from the midas status page, go to "history" -> "slow" - observe history plot for "SLOW[2]" shows a sine wave
from shell, examine contents of history file: "mhdump *.hst"
study feslow.cxx

Enjoy,
K.O.
Entry  15 Feb 2011, Konstantin Olchanski, Bug Fix, mlogger stop run on disk full! 
The mlogger has a function for detecting when the output disk becomes full - when this condition is 
detected, the run should be stopped. But this did not work if the disk was already full and the user tried to start 
a run - the "disk full?" check happened too early, and the attempt to stop the run did not succeed 
because the original start-run transition was still in progress. Now, if the "disk full" condition is detected, mlogger 
tries to stop the run every 10 seconds until the run is finally stopped (or mlogger dies because the disk is full). A minimal sketch of the retry idea follows.
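
This is only an illustration of the retry logic - disk_is_full, run_in_progress and the function name are stand-ins, not the actual mlogger code; cm_transition() is
the real MIDAS call for stopping a run:

/* sketch of the "retry stop every 10 seconds" idea */
#include <time.h>
#include "midas.h"

static time_t last_stop_attempt;

void check_disk_level_sketch(BOOL disk_is_full, BOOL run_in_progress)
{
   char errstr[256];

   if (disk_is_full && run_in_progress &&
       time(NULL) - last_stop_attempt >= 10) {   /* retry every 10 s */
      last_stop_attempt = time(NULL);
      cm_transition(TR_STOP, 0, errstr, sizeof(errstr), ASYNC, FALSE);
   }
}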

mlogger.c svn rev 4976
K.O.
Entry  15 Dec 2010, Stefan Ritt, Info, New source file structure of MSCB tree 
A long planned modification of the source file structure of the MSCB subsystem has been implemented. This is, however, only relevant for those people who actively participate in micro controller programming with MSCB. The idea behind this is that the central include file mscbemb.h had a section for each new project, so whenever a new project was added this file had to be modified, which is clumsy and hard to maintain. Therefore I took the project specific sections out of this file and put them into a config.h file, which is separate for each project (very similar to VxWorks). So the folder tree now looks like this:
midas\mscb\embedded
  \include                <- place for framework include file mscbemb.h
  \lib                    <- precompiled TCP/IP library for SCS-260 submaster
  \src                    <- framework sources mscbmain.c and mscbutil.c
  \<project1>             <- separate folder for project1
      config.h            <- config file for project1
  \<project2>             <- separate folder for project2
      config.h            <- config file for project2
  ...
  \experiment
     \<experiment1>
     \<experiment2>

So each project has its own config.h, which is included from the central mscbemb.h and can be used to enable certain features of the framework without having to change the framework itself (a hypothetical example is sketched below). The "projectx" folders contain devices which are used across several experiments and sometimes also between institutes (PSI and TRIUMF). If you make a device which is only used in a specific experiment, it should go under \experiment, with the name of the device or the experiment as a subdirectory. I encourage everybody to submit even specific projects to the subversion system, since they can sometimes be useful for others as example code.
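
For example, a hypothetical config.h could look like this - the macro names are only illustrative, taken from no particular real project; copy a real config.h from a similar project and adjust it:

/* config.h for project "my_device" - hypothetical example */
#ifndef CONFIG_H
#define CONFIG_H

#define DEVICE_NAME "MY_DEVICE"  /* was previously a compiler setting  */
#define LED_0 P0 ^ 2             /* LED port, moved from old mscbemb.h */
#define LED_ON 0                 /* LED polarity                       */

#endif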

A few other things have to be changed in order to adapt to the new structure:

  • The framework files mscbmain.c mscbutil.c and mscbemb.h have moved and therefore they have to be re-added to the projects in the Keil MicroVision Development Environment.
  • The name of the device should not be defined under compiler settings (Project Options/C51/Preprocessor Symbols), but put directly into the config.h file associated with the project.
  • The include paths in the compiler settings have to be changed to point to \midas\mscb\embedded\include
  • The file config.h has to be copied from a similar project and adjusted to fit the new project.

I removed all project specific sections from mscbemb.h in the current SVN version, so certain projects (FDB_008 at TRIUMF, CRATE_MONITOR and PT100X8 at PSI) have to retrieve their settings (like LED ports etc.) from the old mscbemb.h and put them into their config.h file.

Furthermore, there is a new STARTUP_VDDMON.A51 file in the src directory which should be added to each project. This was recommended by the micro controller manufacturer and fixes cases where the EEPROM contents of the CPU get lost from time to time during power up.

The last thing is that PSI switched to MicroVision 4 as the development environment, so I added new project files (*.uvproj and *.uvopt instead of *.Uv2), but I left the old ones there in case someone still has the uV2 environment. They are, however, not maintained any more.

If there is any problem with the new structure or you have some comments, please don't hesitate to contact me.


- Stefan
Entry  06 Oct 2010, Konstantin Olchanski, Bug Report, mhttpd "edit on start" breakage 
very recent mhttpd mangles spaces in URL encoding-decoding and I cannot create or delete entries in, for 
example, "/experiment/edit on start". For example, an attempt to delete "/experiment/Pedestals Run" 
produces:
<h1>Cannot find key Experiment/edit%20on%20start/Pedestals run</h1>
(notice "%20" instead of spaces. further navigation sometimes replaces the "%" sign with "%25" making it 
even more mangled)

this used to work. looks like a call to URL unmangling went missing somewhere.
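
for reference, the missing step amounts to something like this minimal sketch - mhttpd has its own URL-decoding routine, this is just an illustration of turning 
"%20" back into spaces, in place:

/* sketch: decode "%xx" escapes in a URL string, in place */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

void url_decode_sketch(char *p)
{
   char *out = p;
   while (*p) {
      if (*p == '%' && isxdigit((unsigned char) p[1])
                    && isxdigit((unsigned char) p[2])) {
         char hex[3] = { p[1], p[2], 0 };
         *out++ = (char) strtol(hex, NULL, 16);
         p += 3;
      } else
         *out++ = *p++;
   }
   *out = 0;
}

int main(void)
{
   char s[] = "Experiment/edit%20on%20start/Pedestals%20Run";
   url_decode_sketch(s);
   printf("%s\n", s);  /* prints: Experiment/edit on start/Pedestals Run */
   return 0;
}
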
K.O.
    Reply  17 Nov 2010, Stefan Ritt, Bug Report, mhttpd "edit on start" breakage 
> very recent mhttpd mangles spaces in URL encoding-decoding and I cannot create or delete entries in, for 
> example, "/experiment/edit on start". For example, an attempt to delete "/experiment/Pedestals Run" 
> produces:
> <h1>Cannot find key Experiment/edit%20on%20start/Pedestals run</h1>
> (notice "%20" instead of spaces. further navigation sometimes replaces the "%" sign with "%25" making it 
> even more mangled)
> 
> this used to work. looks like a call to URL unmangling went missing somewhere.
> K.O.

Thanks for reporting. Fixed in SVN revision 4882. Actually, I had commented out the fix some time ago and forgot to 
put it back. Now I hope that this does not break anything else...

- Stefan