Entry  22 Jun 2004, Exaos Lee, , How to compile under Darwin-gcc? (MacOS X) 
I added the following to the Makefile and tried to treat Darwin as FreeBSD/Linux,
but I failed.
============ 
#-----------------------
# This is for MacOS X
#
ifeq ($(OSTYPE), Darwin)
CC = gcc
OS_DIR = Darwin
OSFLAGS = -DOS_DARWIN -DOS_LINUX
LIBS = -lbsd -lcompat
SPECIFIC_OS_PRG =
endif
============

I got the following errors:
=============
gcc -c -g -O2 -Wall -Iinclude -Idrivers -LDarwin/lib -DINCLUDE_FTPLIB  
-DOS_DARWIN -DOS_FREEBSD -o Darwin/lib/midas.o src/midas.c
In file included from include/midasinc.h:45,
                 from include/msystem.h:114,
                 from src/midas.c:623:
/usr/include/string.h:112: error: conflicting types for `strlcat'
include/midas.h:1701: error: previous declaration of `strlcat'
/usr/include/string.h:113: error: conflicting types for `strlcpy'
include/midas.h:1700: error: previous declaration of `strlcpy'
In file included from include/msystem.h:114,
                 from src/midas.c:623:
include/midasinc.h:161:21: sys/vfs.h: No such file or directory
include/midasinc.h:164:17: pty.h: No such file or directory
src/midas.c:780: error: conflicting types for `dbg_malloc'
include/midas.h:1478: error: previous declaration of `dbg_malloc'
src/midas.c:817: error: conflicting types for `dbg_calloc'
include/midas.h:1479: error: previous declaration of `dbg_calloc'
src/midas.c:858: error: conflicting types for `strlcpy'
/usr/include/string.h:113: error: previous declaration of `strlcpy'
src/midas.c:892: error: conflicting types for `strlcat'
/usr/include/string.h:112: error: previous declaration of `strlcat'
gmake: *** [Darwin/lib/midas.o] Error 1
==========

Could anyone give me some hints? Thanks!
    Reply  22 Jun 2004, Konstantin Olchanski, , How to compile under Darwin-gcc? (MacOS X) 
The current (cvs) version of MIDAS should build on Mac OS X right out of the
box- I fixed all the problems you report back in February(?)- see the macosx
thread in this forum. A few weeks ago I verified that it still compiles on Mac
OS 10.3.4. The Mac OS port received minimal testing- I checked that "odbedit"
and "mhttpd" run, that's about it. K.O.

> I add the following to makefile and try to treat Darwin as FreeBSD/Linux.
> But I failed.
> ============ 
> #-----------------------
> # This is for MacOS X
> #
> ifeq ($(OSTYPE), Darwin)
> CC = gcc
> OS_DIR = Darwin
> OSFLAGS = -DOS_DARWIN -DOS_LINUX
> LIBS = -lbsd -lcompat
> SPECIFIC_OS_PRG =
> endif
> ============
> 
> I got the following errors:
> =============
> gcc -c -g -O2 -Wall -Iinclude -Idrivers -LDarwin/lib -DINCLUDE_FTPLIB  
> -DOS_DARWIN -DOS_FREEBSD -o Darwin/lib/midas.o src/midas.c
> In file included from include/midasinc.h:45,
>                  from include/msystem.h:114,
>                  from src/midas.c:623:
> /usr/include/string.h:112: error: conflicting types for `strlcat'
> include/midas.h:1701: error: previous declaration of `strlcat'
> /usr/include/string.h:113: error: conflicting types for `strlcpy'
> include/midas.h:1700: error: previous declaration of `strlcpy'
> In file included from include/msystem.h:114,
>                  from src/midas.c:623:
> include/midasinc.h:161:21: sys/vfs.h: No such file or directory
> include/midasinc.h:164:17: pty.h: No such file or directory
> src/midas.c:780: error: conflicting types for `dbg_malloc'
> include/midas.h:1478: error: previous declaration of `dbg_malloc'
> src/midas.c:817: error: conflicting types for `dbg_calloc'
> include/midas.h:1479: error: previous declaration of `dbg_calloc'
> src/midas.c:858: error: conflicting types for `strlcpy'
> /usr/include/string.h:113: error: previous declaration of `strlcpy'
> src/midas.c:892: error: conflicting types for `strlcat'
> /usr/include/string.h:112: error: previous declaration of `strlcat'
> gmake: *** [Darwin/lib/midas.o] Error 1
> ==========
> 
> Could anyone give me some hints. Thanks!
       Reply  23 Jun 2004, Exaos Lee, , How to compile under Darwin-gcc? (MacOS X) 
> The current (cvs) version of MIDAS should build on Mac OS X right out of the
> box- I fixed all the problems you report back in February(?)- see the macosx
> thread in this forum. A few weeks ago I verified that it still compiles on Mac
> OS 10.3.4. The Mac OS port received minimal testing- I checked that "odbedit"
> and "mhttpd" run, that's about it. K.O.
> 

Thanks a lot. But I cannot check out the module:
------------
01:52:16: pc2075.psi.ch: Operation timed out
01:52:16: cvs [checkout aborted]: end of file from server (consult above
messages if any)
------------

Could anybody add a downloadable tar package to the WWW interface of the CVS repository?
I know the original CGI script has such a feature. Thanks.

P.S.
I use these commands to checkout:
   cvs -e ssh -d :ext:cvs@midas.psi.ch:/usr/local/cvsroot checkout midas
   cvs -e ssh -d :ext:cvs@midas.psi.ch:/usr/local/cvsroot update
          Reply  23 Jun 2004, Stefan Ritt, , How to compile under Darwin-gcc? (MacOS X) 
> Thanks a lot. But I cannot checkout module:
> ------------
> 01:52:16: pc2075.psi.ch: Operation timed out
> 01:52:16: cvs [checkout aborted]: end of file from server (consult above
> messages if any)
> ------------

Should work fine, just tried from outside PSI. Please check again.

> Could anybody add download tar package in the WWW interface of CVS repository.
> I know the original CGI script has such a feature. Thanks.

The tar package is only made for a new release (which will happen in the next few days
BTW), so http://midas.psi.ch/download/tar/ contains the most recent packages. Upon
request I can make a midas-snapshot.tar.gz, but since there will be a 1.9.4 soon, it's
maybe not necessary right now.
             Reply  23 Jun 2004, Exaos Lee, , How to compile under Darwin-gcc? (MacOS X) 
> 
> Should work fine, just tried from outside PSI. Please check again.

Unfortunately, I still encounter the same problem. 
---
pc2075.psi.ch: Operation timed out
cvs [checkout aborted]: end of file from server (consult above messages if any)
---

I am in LNS-INFN (Italy), i.e., I am outside PSI. So ... what's the problem? I tried to
ping the host, and it is reachable:
--------
[exaos@exaos cvsnew]$ ping midas.psi.ch
PING pc2075.psi.ch (129.129.228.23): 56 data bytes
64 bytes from 129.129.228.23: icmp_seq=0 ttl=50 time=67.237 ms
64 bytes from 129.129.228.23: icmp_seq=1 ttl=50 time=64.202 ms
64 bytes from 129.129.228.23: icmp_seq=2 ttl=50 time=56.278 ms
...
--------
Is it a firewall problem? I am not sure. So strange.

> 
> The tar package is only done for a new release (which will happen in the next days
> BTW), so http://midas.psi.ch/download/tar/ contains the most recent packages. Upon
> request I make a midas-snapshot.tar.gz, but since there will be a 1.9.4 soon, it's
> maybe not necessary right now.

Waiting for the new release ...
             Reply  28 Jun 2004, Exaos Lee, , Linking Error: g++ -rpath? 
I cannot check out from the CVS server, so I downloaded the latest version of each file from the WWW
interface of CVS. While compiling these files, I encountered the following problems:
-------------
...
g++ -DHAVE_ROOT -c -g -O2 -Wall -Iinclude -Idrivers -Ldarwin/lib -DINCLUDE_FTPLIB  
-DOS_LINUX -DOS_DARWIN -DHAVE_STRLCPY -fPIC -Wno-unused-function -D_REENTRANT
-I/sw/include -I/opt/root/current/include -Wl,-rpath,/opt/root/current/lib -o
darwin/lib/rmana.o src/mana.c
g++: -rpath: linker input file unused because linking not done
g++: /opt/root/current/lib: linker input file unused because linking not done
...
g++ -g -O2 -Wall -Iinclude -Idrivers -Ldarwin/lib -DINCLUDE_FTPLIB   -DOS_LINUX
-DOS_DARWIN -DHAVE_STRLCPY -fPIC -Wno-unused-function -DHAVE_ROOT -D_REENTRANT
-I/sw/include -I/opt/root/current/include -Wl,-rpath,/opt/root/current/lib -o
darwin/bin/mlogger src/mlogger.c darwin/lib/libmidas.a -L/opt/root/current/lib -u
_G__cpp_setupG__Hist -u _G__cpp_setupG__Graf1 -u _G__cpp_setupG__G3D -u
_G__cpp_setupG__GPad -u _G__cpp_setupG__Tree -u _G__cpp_setupG__Rint -u
_G__cpp_setupG__PostScript -u _G__cpp_setupG__Matrix -u _G__cpp_setupG__Quadp -u
_G__cpp_setupG__Physics -lCore -lCint -lHist -lGraf -lGraf3d -lGpad -lTree -lRint
-lPostscript -lMatrix -lQuadp -lPhysics -lpthread -lm -L/sw/lib -ldl -lpthread
ld: unknown flag: -rpath
gmake: *** [darwin/bin/mlogger] Error 1
---------------
What does '-rpath' mean? It is just a linking error. Thanks.
                Reply  28 Jun 2004, Konstantin Olchanski, , Linking Error: g++ -rpath? 
> ld: unknown flag: -rpath
> gmake: *** [darwin/bin/mlogger] Error 1

Fixed. Good catch.

> What does '-rpath' mean?

You will have to read the "ld" manual. In a nutshell, it tells the executable where to look for shared libraries. 
Apparently it is not supported by Mac OS X.
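Not from the MIDAS sources, but as a small self-contained illustration of the same idea: when an rpath cannot be embedded, a program can also locate a shared library explicitly at run time with dlopen(). The library path below is just the ROOT path from this thread; on Mac OS X the extension would be .dylib.
------------
/* build with: gcc rpath_demo.c -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
   /* without an rpath, the dynamic loader only searches its default paths
      (plus LD_LIBRARY_PATH / DYLD_LIBRARY_PATH); dlopen() lets the program
      name the location itself */
   void *lib = dlopen("/opt/root/current/lib/libCore.so", RTLD_LAZY);
   if (lib == NULL) {
      fprintf(stderr, "dlopen failed: %s\n", dlerror());
      return 1;
   }
   dlclose(lib);
   return 0;
}
------------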

K.O.
Entry  07 May 2004, Konstantin Olchanski, , min(a,b) in mana.c and mlogger.c 
When I compile current cvs-head midas, I get errors about undefined function
min(). I do not think min() is in the list of standard C functions, so
something else should be used instead, like a MIN(a,b) macro. To make life
more interesting, in a few places, there is also a variable called "min".
Here is the error:

src/mana.c: In function `INT write_event_ascii(FILE*, EVENT_HEADER*, 
   ANALYZE_REQUEST*)':
src/mana.c:2571: `min' undeclared (first use this function)
src/mana.c:2571: (Each undeclared identifier is reported only once for each 
   function it appears in.)
make: *** [linux/lib/rmana.o] Error 1

K.O.
    Reply  07 May 2004, Stefan Ritt, , min(a,b) in mana.c and mlogger.c 
> When I compile current cvs-head midas, I get errors about undefined function
> min(). I do not think min() is in the list of standard C functions, so
> something else should be used instead, like a MIN(a,b) macro. To make life
> more interesting, in a few places, there is also a variable called "min".
> Here is the error:
> 
> src/mana.c: In function `INT write_event_ascii(FILE*, EVENT_HEADER*, 
>    ANALYZE_REQUEST*)':
> src/mana.c:2571: `min' undeclared (first use this function)
> src/mana.c:2571: (Each undeclared identifier is reported only once for each 
>    function it appears in.)
> make: *** [linux/lib/rmana.o] Error 1

This is really a mystery to me. The min/max macros are defined both in midas.h
and msystem.h and have worked for the last ten years or so. However, I agree that macros
should follow the standard and use capital letters, so I changed that.
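For illustration, such capitalized macros typically look like the sketch below; the exact definitions in midas.h may differ.
------------
#ifndef MIN
#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#endif
#ifndef MAX
#define MAX(a, b) (((a) > (b)) ? (a) : (b))
#endif
------------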
       Reply  21 Jun 2004, Piotr Zolnierczuk, , min(a,b) in mana.c and mlogger.c 
> > When I compile current cvs-head midas, I get errors about undefined function
> > min(). I do not think min() is in the list of standard C functions, so
> > something else should be used instead, like a MIN(a,b) macro. To make life
> > more interesting, in a few places, there is also a variable called "min".
> > Here is the error:
> > 
> > src/mana.c: In function `INT write_event_ascii(FILE*, EVENT_HEADER*, 
> >    ANALYZE_REQUEST*)':
> > src/mana.c:2571: `min' undeclared (first use this function)
> > src/mana.c:2571: (Each undeclared identifier is reported only once for each 
> >    function it appears in.)
> > make: *** [linux/lib/rmana.o] Error 1
> 
> This is really a miracle to me. The min/max macros are defined both in midas.h
> and msystem.h and worked the last ten years or so. However, I agree that macros
> should follow the standard and use capital letters, so I changed that.

The problem is that /usr/include/c++/3.*/bits/stl_algobase.h contains 
#undef min
#undef max

and in C++ with STL one should really use something like this
    std::min<INT>(a,b)


Cheers
  Piotr
Entry  06 Jun 2004, Konstantin Olchanski, , Makefile: set -rpath 
I committed Makefile bits to set the RPATH on dynamically linked executables
to find libmidas.so and ROOT shared libraries without setting
LD_LIBRARY_PATH, etc. K.O.
Entry  28 Apr 2004, Konstantin Olchanski, , mhttpd "start run" input field length? 
I am setting up a new experiment and I added a "comment" field to "/
Experiment/Edit on start". When I start the run, I see this field, but I
cannot enter anything: the HTML "maxlength" is zero (or 1?). I traced this
to mhttpd.c: if (this is a string) maxlength = key.item_size. But what is
key.item_size for a string? The current length? If so, how do I enter a
string that is longer than the current one (zero in case I start from
scratch)? I am stumped! K.O.
    Reply  30 Apr 2004, Stefan Ritt, , mhttpd  
> I am setting up a new experiment and I added a "comment" field to "/
> Experiment/Edit on start". When I start the run, I see this field, but I
> cannot enter anything: the HTML "maxlength" is zero (or 1?). I traced this
> to mhttpd.c: if (this is a string) maxlength = key.item_size. But what is
> key.item_size for a string? The current length? If so, how do I enter a
> string that is longer than the current one (zero in case I start from
> scratch). I am stumped! K.O.

Your problem is that you created an ODB string with zero length. If you do this
through ODBEdit, a default length of 32 is used:

[local:Test:S]Edit on start>cr string Comment
String length [32]:
[local:Test:S]Edit on start>ls -l
Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
Comment                         STRING  1     32    2s   0   RWD
[local:Test:S]Edit on start>

which then results in a maxlength of 32 as well during run start. I presume
you used mhttpd itself to create the string. Trying to reproduce this, I found
that mhttpd creates strings with zero length. I will fix this soon. Until
then, use ODBEdit to create your strings.
Entry  30 Mar 2004, Konstantin Olchanski, , elog fixes elog-fixes.txt
I am about to commit the mhttpd Elog fixes we have been using in TWIST since
about October. The infamous Elog "last N days" problem is fixed, sundry
memory overruns are caught and assert()ed.

For the curious, the "last N days" problem was caused by uninitialized data
in the elog handling code. A non-zero-terminated string was read from a file
and passed to atoi(). Here is a simplified illustration:

char str[256]; // uninitialized, filled with whatever happens on the stack
read(file,str,6); // read 6 bytes, non-zero terminated
// str now looks like this: "123456UUUUUUUUU....", "U" is uninitialized memory
int len = atoi(str); // if the first "U" happens to be a number, we lose.

The obvious fix is to add "str[6]=0" before the atoi() call.
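For completeness, the same simplified illustration with the fix applied, wrapped in a hypothetical helper (not the actual elog code):
------------
#include <stdlib.h>     /* atoi() */
#include <unistd.h>     /* read() */

int read_len(int file)
{
   char str[256];                /* plenty of room for 6 bytes plus terminator */
   if (read(file, str, 6) != 6)  /* read 6 bytes, still not zero-terminated    */
      return -1;
   str[6] = 0;                   /* the fix: terminate before parsing          */
   return atoi(str);             /* atoi() now sees only what was read         */
}
------------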

Attached is the CVS diff for the proposed changes. Please comment.

K.O.
    Reply  30 Mar 2004, Stefan Ritt, , elog fixes 
Thanks for fixing these long-lasting bugs. The code is much cleaner now, please
commit it.
Entry  19 Jan 2004, Konstantin Olchanski, , darwin aka macosx changes 
I commited the final bits to make Midas build on Darwin aka macosx.

Here is the summary:

1) I treat Darwin as a funny linux, so OS_LINUX is always defined
2) OS_DARWIN is defined for places where the two differ
3) the system-dependent directory is "midas/darwin/{bin,lib}"
4) a few header files had to be moved around to dodge namespace pollution by Apple system 
header files (i.e. one of the PowerPC header files #defines PVM- collision with PVM in mana.c, 
another #defines Free(x)- collision with ROOT header files)
5) ss_thread_create() and ss_thread_kill() now use midas_thread_t. On Darwin pthread_t is not 
an "int".
6) the Makefile has no support for building the midas shared library on macosx.
7) on my Mac OS 10.2.8 machine, "make all" works, "odbedit" and "mhttpd" run. This is the 
full extent of my testing. Status on Mac OS 10.3.x is unknown.

K.O.
Entry  14 Jan 2004, Konstantin Olchanski, , First try- midas on darwin/macosx xxx
While watching "The Wizard of Oz", the greatest movie ever made, I took a shot at building 
midas on my macosx computer. After stumbling on a few small and a few hard problems, I 
built almost everything. However, odb does not work- some further debugging is in order.

Anyway, the easy problems are:
- a few missing header files: pty.h, sys/vfs.h, malloc.h
- a few missing features in system.c (stime(), "get tape position")
- /usr/include/string.h already has strlcpy() & co.
- dbg_malloc() has inconsistent prototypes (size_t vs unsigned int)
- for reasons unknown, PVM is #defined. This flushed a bug in mana.c

A few hard problems:
- namespace pollution by Apple- they #define ALIGN in system headers, colliding with ALIGN 
in midas.h. I was amazed that the two are almost identical, but MIDAS ALIGN aligns to 8 
bytes, while Apple does 4 bytes. ALIGN is used all over the place and I am not sure how to 
reconcile this.
- "timezone" in mhttpd.c. On linux, it's an "int", on darwin, it's a function. What gives?
- building libmidas.a requires running ranlib
- building libmidas.so requires unknown macosx specific magic.

For your enjoyment, the "cvs diff" is attached. The resulting code is known to not work.

K.O.
    Reply  14 Jan 2004, Stefan Ritt, , First try- midas on darwin/macosx 
Great, I already got questions about MacOSX support...

Once it's working, you should commit the changes. But take into account that using "//" for 
comments might cause problems for the VxWorks compiler (talk to Pierre about that!).

> A few hard problems:
> - namespace pollution by Apple- they #define ALIGN in system headers, colliding with ALIGN 
> in midas.h. I was amazed that the two are almost identical, but MIDAS ALIGN aligns to 8 
> bytes, while Apple does 4 bytes. ALIGN is used all over the place and I am not sure how to 
> reconcile this.

You can rename ALIGN to ALIGN8 all over the place.

> - "timezone" in mhttpd.c. On linux, it's an "int", on darwin, it's a function. What gives?

Wrap it into a function get_timezone(). Under linux, just return "timezone", under OSX, 
return timezone() via conditional compiling.

> - building libmidas.a requires running ranlib
> - building libmidas.so requires unknown macosx specific magic.

I guess we should forget for now about the shared libraries (Mac people anyhow have too much 
money so they can afford additional RAM (;-) ), but building the static library is mandatory. 
       Reply  16 Jan 2004, Konstantin Olchanski, , First try- midas on darwin/macosx xxx
> Great, I got already questions about MacOSX support...
> Once it's working, you should commit the changes.

With the ALIGN8() change ODB works, mhttpd works. The ALIGN8 change is now committed to cvs, verified that "make all" builds 
on Linux.

ROOT stuff still blows up because of more namespace pollution (/usr/include/sys/something does #define Free(x) 
free(blah...)). Arguably, it is not Apple's fault- portable programs should not include any <sys/foo.h> header files. I 
think I can fix it by moving "#include <sys/mount.h>" from midasinc.h to system.h.

Also figured out why PVM is defined- more pollution from "#include <sys/blah...>". This is only in mana.c and I will 
replace every "#ifdef PVM" with "#ifdef HAVE_PVM". Is there documentation that should be updated as well? Alternatively I 
can try to play games with header files...


> But take into account that using "//" for comments might cause problems for the VxWorks compiler (talk to Pierre 
about that!).

Yes, "// comments" stay out of midas. I used them to make the modification more visible.

> You can rename ALIGN to ALIGN8 all over the place.

Done, commited.

> > - "timezone" in mhttpd.c. On linux, it's an "int", on darwin, it's a function. What gives?
> Wrap it into a function get_timezone(). Under linux, just return "timezone", under OSX, 
> return timezone() via conditional compiling.

Right. Still on the todo list.

> > - building libmidas.a requires running ranlib

I still have to clean up the Makefile. Not committing it yet.

Then, a new problem- on MacOSX, pthread_t is not an "INT" and system.c:ss_thread_create() whines about it. I want to 
introduce a system-dependent THREAD_T (or whatever) and make ss_thread_create() return that, rather than INT.

ROOT stuff is still not fully tested- it takes a little while to build ROOT on a 600MHz laptop.

Attached is my current CVS diff.

K.O.
          Reply  17 Jan 2004, Stefan Ritt, , First try- midas on darwin/macosx 
> With the ALIGN8() change ODB works, mhttpd works. ALIGN8 change now commited to cvs, verified that "make all" builds 
> on Linux.

Verified that "make all" still works under Windows.

> ROOT stuff still blows up because of more namespace pollution (/usr/include/sys/something does #define Free(x) 
> free(blah...)). Arguably, it is not Apple's fault- portable programs should not include any <sys/foo.h> header files. I 
> think I can fix it by moving "#include <sys/mount.h>" from midasinc.h to system.h.

I would like to keep all OS specific #includes in midasinc.h. In the worst case, put another section there for OSX, like

in midas.h:

#if !defined(OS_MACOSX)
#if defined ( __????__ ) <- put the proper thing here
#define OS_MACOSX
#endif
#endif

then make a new section in midasinc.h

#ifdef OS_MACOSX
#include <...>
#endif

> Also figured out why PVM is defined- more pollution from "#include <sys/blah...>". This is only in mana.c and I will 
> repace every "#ifdef PVM" with "#ifdef HAVE_PVM". Is there documentation that should be updated as well? Alternatively I 
> can try to play games with header files...

Right, PVM should be replaced by HAVE_PVM. This is only for the analyzer. I planned at some point to run the analyzer in 
parallel on a linux cluster, but it was never really used. Going to ROOT, that facility should be replaced by PROOF.

> Then, a new problem- on MacOSX, pthread_t is not an "INT" and system.c:ss_thread_create() whines about it. I want to 
> introduce a system dependant THREAD_T (or whatever) and make ss_thread_create() return that, rather than INT.

Good. If you have an OS_MACOSX, that should help you there.

-SR
             Reply  18 Jan 2004, Konstantin Olchanski, , First try- midas on darwin/macosx xxx
> I would like to keep all OS specific #includes in midasinc.h

No go. Here is the problem:

midasinc.h includes sys/mount.h, which #defines Free(x) to be something else
mana.c includes msystem.h, which includes midasinc.h
mana.c includes ROOT header files, which blow up because Free(x) is redefined.

I want this:

mana.c does *not* include sys/mount.h
system.c does include sys/mount.h

Simplest solution is to take sys/mount.h out of midasinc.h and include it in system.c

> Right, PVM should be replaced by HAVE_PVM.

Commited.

> > Then, a new problem- on MacOSX, pthread_t is not an "INT" and system.c:ss_thread_create() whines about it. I want to 
> > introduce a system dependant THREAD_T (or whatever) and make ss_thread_create() return that, rather than INT.
> Good. If you have a OS_MACOSX, that should help you there.

Okey. In Darwin, pthread_t is not an int. It is a pointer to a struct. In midas.c I typedef midas_pthread_t to HANDLE on Windows and to pthread_t on OS_UNIX.
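In code, the idea is roughly the following sketch; the actual definition in the MIDAS headers may differ in names and placement:
------------
/* sketch of the system-dependent thread type discussed above */
#ifdef OS_WINNT
typedef HANDLE midas_thread_t;      /* Windows: a thread HANDLE                      */
#else
#include <pthread.h>
typedef pthread_t midas_thread_t;   /* Unix: opaque on Darwin, integer-like on Linux */
#endif
------------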

This uncovered a problem with ss_getthandle(). What is it supposed to do? On Windows it returns a handle to the current thread, on OS_UNIX, it returns getpid(). 
What gives? I am leaving it alone for now.

Attached is the current diff. Most changes are in system.c: ss_timezone() and midas_pthread_t. The Makefile part is already committed. Building the shared 
library was made dependent on NEED_SHLIB. Now, building static midas applications is very simple, use "make SHLIB="

K.O.
                Reply  19 Jan 2004, Stefan Ritt, , First try- midas on darwin/macosx 
> I want this:
> 
> mana.c does *not* include sys/mount.h
> system.c does include sys/mount.h
> 
> Simplest solution is to take sys/mount.h out of midasinc.h and include it in system.c

Agree.

> This uncovered a problem with ss_getthandle(). What is it supposed to do? On Windows it returns a handle to the current thread, on OS_UNIX, it returns getpid(). 
> What gives? I am leaving it alone for now.

The Unix version of ss_getthandle() returns the pid since at the time when I wrote that function (many years ago) there were no threads under Unix. It should now 
be replaced with a function which returns the real thread id (at least under Linux).
                   Reply  19 Jan 2004, Konstantin Olchanski, , First try- midas on darwin/macosx 
> > Simplest solution is to take sys/mount.h out of midasinc.h and include it in system.c
> Agree.

Done.

With this, I committed the rest of my changes: midas_thread_t in midas.h, changed ss_thread_xxx() prototypes in msystem.h,
and the implementation in system.c.

My cvs diff is now empty.

Midas should compile on Darwin aka macosx, I tested "odbedit" and "mhttpd"- they seem to work.
 
> > This uncovered a problem with ss_getthandle(). 
> The Unix version of ss_getthandle() returns the pid since at the time when I wrote that function (many years ago) there were no threads under Unix. It should now 
> be replaces with a function which returns the real thread id (at least under Linux).

I do not want to touch this. Sorry.

K.O.
Entry  15 Dec 2003, Stefan Ritt, , Poll about default indent style  
Dear all,

there are continuing requests about the C indent style we use in midas. As 
you know, the current style does not comply with any standard. It is even 
a mixture of styles since code comes from different people. To fix this 
once and forever, I am considering using the "indent" program which comes 
with every linux installation. Running indent regularly on all our code 
ensures a consistent look. So I propose (actually the idea came from Paul 
Knowles) to put a new section in the midas makefile:

indent:
        find . -name "*.[hc]" -exec indent <flags> {} \;

so one can easily do a "make indent". The question now is what the <flags> 
should look like. The standard is GNU style, but this deviates from the 
original K&R style in that the opening "{" is put on a new line, which I 
use but most of you do not. The "-kr" style gives the standard K&R style, 
but uses tabs (which is not good) and does a 4-column indentation, which I 
think is too much. So I would propose the following flags:

indent -kr -nut -i2 -di8 -bad <filename.c>

Please take some of your source code, and format it this way, and let me 
know if these flags are a good combination or if you would like to have 
anything changed. It should also be checked (->PAA) that this style 
complies with the DOC++ system. Once we all agree, I can put it into the 
makefile, execute it and commit the newly formatted code for the whole 
source tree.
    Reply  18 Dec 2003, Paul Knowles, , Poll about default indent style  
Hi Stefan,

> once and forever, I am considering using the "indent" program which comes 
> with every linux installation. Running indent regularly on all our code 
> ensures a consistent look.

I think this can be called a Good Thing.

> The "-kr" style does the standard K&R style, 
> but used tabs (which is not good), and does a 4-column 
> indention which is I think too much. So I would propose 
> following flags:
>        indent -kr -nut -i2 -di8 -bad <filename.c>

(some of this is a repeat from an earlier mail to SR):
You might also want a -l90 for a longer line length than 75
characters.  K&R style with indentation from 5 to 8 spaces
is a good indicator of complexity: as soon as 40 characters
of code wind up unreadably squashed to the right of the
screen, you have to refactor to have less indentation
levels.  This means you wind up rolling up the inner parts
of deeply nested conditionals or loops as separate
functions, making the whole code easier to understand.

I think that setting -i2 is ``going around the problem'' 
of deep nesting.  If you really need to keep the indentation 
tabs less than 4 (8 is ideal) because your code is falling off the 
right edge of the screen, you are indented too deeply.  Why do 
I say that?  There is the famous ``7+-1'' idea that you can hold
in your head only 7 ideas (give or take one) at any time.  I'm not 
that smart and I top out at about 5:  So for example, a conditional 
in a loop  in a conditional in a switch is about as deep a level 
of nesting as  I can easily understand (remember that I also have 
to hold the line I'm working on as well): that's 4 levels, plus one for the
function itself and we are at 40 characters away from the right edge
of the screen using -i8 and have some 40 characters available for writing code
(how often is a line of code really longer than about 40 characters?).
On top of that, the indentation is easily seen so you know immediately 
whether you are at the upper conditional or the inner conditional.  A -i2
just doesn't make the difference big enough.  -i5 is a happy balance 
with enough visual clue as to the indentation level, but leaves you 50
to 60 characters for the code line itself.

However, if you are indenting very deeply, then the poor reader can't hold
on to the context: there are more than 6 or 7 things to keep in mind.
In those cases, roll up the inner levels as a separate function and 
call it that way. The inner complexity of the nested statements gets 
nicely abstracted and then dumb people like me can understand what 
you are doing.

So, in brief: indent is a good idea, and -in with n>=4 will be best.
I don't think -i2 will lend itself to making the code so much easier 
to read.

thanks for listening.
.p.
       Reply  18 Dec 2003, Stefan Ritt, , Poll about default indent style  
Hi Paul,

I agree with you that a nesting level of more than 4-5 is a bad thing, but I 
believe that throughout the midas code this level is not exceeded (my poor 
mind also does not hold more than 5 things (;-) ). An indent level of 8 columns 
alone does not force you much to avoid extending the nesting level. I have 
seen code which does that, so there are nesting levels of 8 and more, which 
ends up with the code smashed to the right side of the screen, where each 
statement is broken into many lines since each line only holds 10 or 20 
characters. All the nice real estate on the left side of the screen is lost.

So having said that, I don't feel a strong need to give up "-i2", since the 
midas code does not contain deep nesting levels and hopefully never will. 
In my opinion, a small indent level makes more use of your screen space, since 
you do not have a large white area at the left. A typical nesting level is 3-4, 
which already causes 32 blank characters at the left, or 1/3 of your screen, 
just for nothing. It will lead to more lines (even with -l90), so people have 
to scroll more.

What do others think (Pierre, Konstantin, Renee) ?
          Reply  01 Jan 2004, Konstantin Olchanski, , Poll about default indent style  
> I don't feel a strong need of giving up a "-i2"...

I am comfortable with the current MIDAS styling convention and I would rather not
have yet another private religious war over the right location for the curly braces.

If we are to consider changing the MIDAS coding convention, I urge all and sundry
to read the ROOT coding convention, as written by Rene Brun and Fons Rademakers at
http://root.cern.ch/root/Conventions.html. The ROOT people did their homework, they
did read the literature and they produced a well considered and well argumented style.

Also, while there, do read the Taligent documentation- by far one of the most
coherent manuals on C++ programming style.

K.O.
    Reply  06 Jan 2004, Stefan Ritt, , Poll about default indent style  
Ok, taking all comments so far into account, I conclude that adopting the ROOT 
coding style would be best for us. So I put

indent:
	find . -name "*.[hc]" -exec indent -kr -nut -i3 {} \;

Into the makefile. Hope everybody is happy now (;-)))
Entry  12 Dec 2003, Stefan Ritt, , Several small fixes and changes 
I committed several small fixes and changes:

- install.txt, which explicitly mentions ROOT
- mana.c and the main Makefile, which fix all HBOOK compiler warnings
- mana.c, to write an explicit warning if the experiment directory contains 
uppercase letters in the path (HBOOK does not like this and refuses to 
read/write histos)
- mserver.c, mrpc.c, odb.c to fix a wrong parameter in 
db_remove_open_record() (see previous entry from Paul)
- added experim.h into the dependency of the hbookexpt Makefile
Entry  09 Dec 2003, Paul Knowles, , db_close_record non-local/non-return 
Hi All,

I have found a weird one:

The following code executes on the frontend machine in the
frontend_exit() routine, and connects to the odb running on
another separate machine:
...
     cm_msg(MINFO,__func__, "line %d", __LINE__);

     cm_get_experiment_database(&hdb, NULL);

     cm_msg(MINFO,__func__, "line %d", __LINE__);
     status = db_find_key(hdb, 0, "/Experiment/Run Parameters", &hkey);
     cm_msg(MINFO,__func__, "line %d, hkey=%d, status=%d",
            __LINE__, hkey, status);
     checkstat("db_find_key returned status %d", status);
     cm_msg(MINFO,__func__, "line %d", __LINE__);
     status = db_close_record(hdb, hkey);

     /* NOTREACHED!! the above call to db_close_record
        doesn't return!
      */
     cm_msg(MINFO,__func__, "line %d, status=%d", __LINE__, status);
     checkstat("db_close_record returned status %d", status);

checkstat is a macro that does the following:
#define checkstat(format, arg...)\
do{ if(status != DB_SUCCESS) {\
cm_msg(MERROR, __func__, format, ## arg);\
return FE_ERR_ODB;}}while(0)

The key exists, and the status of the search is 1
(i.e., DB_SUCCESS) and the rest of the code tries to run.  What gets
really weird is that the db_close_record _doesn't_ _return_.
The code following the NOTREACHED comment just doesn't get
called.  I get the message from the __LINE__ just in front
of the call, but not the message afterwards (cm_msg and printf 
were tried).  Somehow db_close_record is causing a non-local 
exit or signal or something. No error message is printed and the 
frontend continues to exit with exit code 0.  But, since the rest
of my frontend_exit/odb closing doesn't happen, the odb is left in
a lost state requiring a cleanup.  If I comment out the calls to 
db_close_record, the rest of my frontend_exit runs normally 
and the cm_disconnect_experiment() in mfe.c eventually closes my 
open records correctly (I expect, anyway) and this is the present 
workaround I am using.  The terror I have is that several of my 
hotlinked callback routines will call the close_record routine 
when resetting illegal values.  No end of hilarity will result there...

I was using the same code in the frontend under 1.9.2 and
have only recently upgraded to the 1.9.3-? tarball from PAA. There
were no problems using the 1.9.2 code: this is a 1.9.3
issue.

I have localized the weirdness to what I think is the RPC interface.
Running the nullfrontend (no camac access) on the same machine that 
hosts the ODB I can make the problem appear and disappear in the 
following way:
(odb is local on machine ``monet'')

nullfe -h monet -e acqmonad     : db_close_record will get lost

nullfe -e acqmonad              : db_close_record works as expected.

I've also tried the patch for the 256 byte odb string bug since
many of the open records have strings of that length, but that isn't
it. The only substantial-looking change to mserver from 1.9.2 to 1.9.3
is the SIGPIPE ignore and that doesn't look like a good candidate either.
Could it be that some of the 
   #IFDEF LOCAL_ROUTINES
that got moved about in odb.c and other files
are causing the remote call to get confused?

Clearly the answer is to just use stable and happy 1.9.2, but the 
people for whom I am working now really want to use ROOT for
an analyzer...


cheers,
.p.

Paul Knowles.                   phone: 41 26 300 90 64
email: Paul.Knowles@unifr.ch      Fax: 41 26 300 97 47
finger me at pexppc33.unifr.ch for more contact information
    Reply  12 Dec 2003, Stefan Ritt, , db_close_record non-local/non-return 
Hi Paul,

sorry for my late reply, I had to find some time to debug your problem. 
Thank you very much for the detailed description of the problem; I wish all 
bug reports were this elaborate!

You were right that there was a bug in the RPC system. The function 
db_remove_open_record() got a new parameter recently, which was not changed 
in the RPC call, and caused the mserver side to crash on any 
db_close_record() call.

I fixed it and the update is under CVS (http://midas.psi.ch/cgi-
bin/cvsweb/midas/src/). Since you need to update many files, I wonder if I 
should enable anonymous CVS read access. Does anybody know how to set this 
up using "ssh" as the protocol (via CVS_RSH=ssh)?

Please note that db_close_record() is not necessary as 
cm_disconnect_experiment() takes care of this, but having it there does not 
hurt.
Entry  20 Nov 2003, Stefan Ritt, , Implementation of db_check_record() 
As Konstantin correctly pointed out, the db_create_record() call is pretty 
heavy since it copies whole structures around the ODB. Therefore, it 
should not be used frequently. It might be that several problems are caused 
by that, for example the "phantom" records reported in elog:40 .

I have therefore implemented the function 

db_check_record(HNDLE hDB, HNDLE hKey, char *keyname, char *rec_str, 
                BOOL correct)

which takes an ASCII structure in the same way as db_create_record(), but 
only checks this ASCII structure against the ODB contents without writing 
anything to the ODB. 

If the record does not exist at all, it is created via db_create_record(). 
This is useful for example with the /Runinfo structure on a virgin ODB.

If the parameter "correct" is FALSE, the function returns 
DB_STRUCT_MISMATCH if the ODB contents is wrong (wrong order of variables, 
wrong name of variables, wrong type or array size). The calling function 
should then abort, since a subsequent db_open_record() would fail. Note 
that although abort() is useful, one should add cm_disconnect_experiment() 
just before the abort() in order to have the application "log out" from 
the ODB gracefully. If the parameter "correct" is TRUE, the function 
db_create_record() is called internally to correct a mismatching record.
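As an illustration, a typical call sequence could look like the following sketch. The key name, the helper name and the error handling are just examples following the description above; rec_str is assumed to come from the usual experim.h-style _STR() macro.
------------
#include <stdlib.h>
#include "midas.h"

INT check_runinfo(HNDLE hDB, char *rec_str[])
{
   INT status;

   /* correct = FALSE: only report a mismatch, do not rewrite the record */
   status = db_check_record(hDB, 0, "/Runinfo", strcomb(rec_str), FALSE);
   if (status == DB_STRUCT_MISMATCH) {
      cm_msg(MERROR, "check_runinfo", "/Runinfo structure mismatch, aborting");
      cm_disconnect_experiment();   /* log out of the ODB gracefully */
      abort();
   }
   return status;
}
------------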

I have changed most calls of db_create_record() in mhttpd.c, mfe.c, mana.c 
and mlogger.c. Pierre, could you do the same for lazylogger.c?

I also started to put assert()'s everywhere and encourage everyone to 
follow. Under Windows, the asserts() are removed automatically if 
compiling in "Release" mode.

So I committed many changes, did some quick tests, but am not 100% 
convinced that all the changes are good. So please use the new code 
cautiously, and let me know if there is any new problem. I also would like 
to get some feedback if the whole thing becomes more stable now.
    Reply  27 Nov 2003, Konstantin Olchanski, , Implementation of db_check_record() 
> I have therefore implemented the function 
> db_check_record(HNDLE hDB, HNDLE hKey, char *keyname, char *rec_str, BOOL
correct)

Stefan, something is very wrong with the new code. My
"/logger/channels/0/settings" is being destroyed on "begin run". A Midas
checkout from October 31st is okay. This is a show stopper, but I am in a rush
and cannot debug it. I am falling back to the Oct 31st version... K.O.
       Reply  30 Nov 2003, Konstantin Olchanski, , Implementation of db_check_record() 
> > I have therefore implemented the function 
> > db_check_record(HNDLE hDB, HNDLE hKey, char *keyname, char *rec_str, BOOL
> correct)
> 
> Stephan, something is very wrong with the new code. My
> "/logger/channels/0/settings" is being destroyed on "begin run".

Okey. I found the problem in db_check_record(): when we decide that we have a
mismatch, we call db_create_record(...,rec_str), but by this time, rec_str no
longer points to the beginning of the ODB string because we started parsing it.

I tried this solution: save rec_str into rec_str_orig, then when we decide that
we have a mismatch, call db_create_record() with this saved rec_str_orig. It
fixes my immediate problem (destruction of "/logger/channels/0/settings"), but is
it correct?

I would like to fix it ASAP to get cvs-head working again: our mhttpd dumps core
on an assert() failure in db_create_record() and the set of db_check_record()
changes might fix it for me.

Here is the CVS diff:

RCS file: /usr/local/cvsroot/midas/src/odb.c,v
retrieving revision 1.73
diff -r1.73 odb.c
7810a7811
> char             *rec_str_orig = rec_str;
7820c7821
<     return db_create_record(hDB, hKey, keyname, rec_str);
---
>     return db_create_record(hDB, hKey, keyname, rec_str_orig);
7838c7839
<       return db_create_record(hDB, hKey, keyname, rec_str);
---
>       return db_create_record(hDB, hKey, keyname, rec_str_orig);
8023c8024
<               return db_create_record(hDB, hKey, keyname, rec_str);
---
>               return db_create_record(hDB, hKey, keyname, rec_str_orig);
8037c8038
<               return db_create_record(hDB, hKey, keyname, rec_str);
---
>               return db_create_record(hDB, hKey, keyname, rec_str_orig);

K.O.
          Reply  30 Nov 2003, Stefan Ritt, , Implementation of db_check_record() 
Fixed and committed. Can you check if it's working?
             Reply  01 Dec 2003, Konstantin Olchanski, , Implementation of db_check_record() 
> Fixed and committed. Can you check if it's working?
Yes, it is fixed. Thanks. K.O.
Entry  25 Nov 2003, Suzannah Daviel, , delete key followed by create record leads to empty structure in experim.h 
Hi,

I have noticed a problem with deleting a key to an array in odb, then
recreating the record as in the code below. The record is recreated
successfully, but when viewing it with mhttpd, a spurious blank line
(coloured orange) is visible, followed by the rest of the data as normal.

This blank line causes trouble with experim.h because it
produces an empty structure e.g. :

#define CYCLE_SCALERS_SETTINGS_DEFINED

typedef struct {
  struct {
  } ;
  char      names[60][32];
} CYCLE_SCALERS_SETTINGS;

rather than :

#define CYCLE_SCALERS_SETTINGS_DEFINED

typedef struct {
  char      names[60][32];
} CYCLE_SCALERS_SETTINGS;


This empty structure causes a compilation error when rebuilding clients that
use experim.h

SD



 CYCLE_SCALERS_TYPE1_SETTINGS_STR(type1_str);
 CYCLE_SCALERS_TYPE2_SETTINGS_STR(type2_str);

Both type1_str and type2_str have been defined as in
experim.h
i.e.
#define CYCLE_SCALERS_TYPE1_SETTINGS_STR(_name) char *_name[] = {\
"[.]",\
"Names = STRING[60] :",\
"[32] Back%BSeg00",\
"[32] Back%BSeg01",\
 ........
 ........
"[32] General%NeutBm Cycle Sum",\
"[32] General%NeutBm Cycle Asym",\
"",\
NULL }

#define CYCLE_SCALERS_TYPE2_SETTINGS_STR(_name) char *_name[] = {\
"[.]",\
"Names = STRING[60] :",\
"[32] Back%BSeg00",\
"[32] Back%BSeg01",\
...........
............
"[32] General%B/F Cumul -",\
"[32] General%Asym Cumul -",\
"",\
NULL }

  if (db_find_key(hDB, 0, "/Equipment/Cycle_scalers/Settings/",&hKey) ==
DB_SUCCESS)
    db_delete_key(hDB,hKey,FALSE);
          
  if (  strncmp(fs.input.experiment_name,"1",1) == 0) {
      exp_mode = 1; /* Imusr type - scans */
      status =
db_create_record(hDB,0,"/Equipment/Cycle_scalers/Settings/",strcomb(type1_str));
    }
  else {
    exp_mode = 2; /* TDmusr types - noscans */
    status =
db_create_record(hDB,0,"/Equipment/Cycle_scalers/Settings/",strcomb(type2_str));
  }
    Reply  01 Dec 2003, Stefan Ritt, , delete key followed by create record leads to empty structure in experim.h 
> I have noticed a problem with deleting a key to an array in odb, then
> recreating the record as in the code below. The record is recreated
> successfully, but when viewing it with mhttpd, a spurious blank line
> (coloured orange) is visible, followed by the rest of the data as normal.
> 
> db_create_record(hDB,0,"/Equipment/Cycle_scalers/Settings/",strcomb(type1_str));
>     }
>   else {
>     exp_mode = 2; /* TDmusr types - noscans */
>     status =
> db_create_record(hDB,0,"/Equipment/Cycle_scalers/Settings/",strcomb(type2_str));
>   }

The first problem is that the db_create_record call has a trailing "/" in the key name 
after Settings. This creates the (empty) subdirectory which causes your trouble. 
Simply removing it fixes the problem. I agree that this is not obvious, so I 
added some code in db_create_record() which removes such a trailing slash if 
present. New version under CVS.

Second, the db_create_record() call is deprecated. You should use the new 
function db_check_record() instead, and remove your db_delete_key(). This avoids 
possible ODB trouble since the structure is not re-created each time, but only 
when necessary.
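A sketch of what the corrected calling code from the original post could then look like (variable names taken from that post, error handling omitted):
------------
/* no db_delete_key() needed any more; with correct = TRUE, db_check_record()
   creates or corrects the record only when necessary */
if (strncmp(fs.input.experiment_name, "1", 1) == 0) {
   exp_mode = 1;   /* Imusr type - scans */
   status = db_check_record(hDB, 0, "/Equipment/Cycle_scalers/Settings",
                            strcomb(type1_str), TRUE);
} else {
   exp_mode = 2;   /* TDmusr types - noscans */
   status = db_check_record(hDB, 0, "/Equipment/Cycle_scalers/Settings",
                            strcomb(type2_str), TRUE);
}
------------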

- Stefan
Entry  30 Nov 2003, Konstantin Olchanski, , bad call to cm_cleanup() in fal.c 
fal.c does not compile: it calls cm_cleanup() with one argument when there
should be two arguments. K.O.
    Reply  30 Nov 2003, Stefan Ritt, , bad call to cm_cleanup() in fal.c 
> fal.c does not compile: it calls cm_cleanup() with one argument when there
> should be two arguments. K.O.

Fixed and committed.
Entry  20 Nov 2003, Konstantin Olchanski, , midas timeout wraparound 
While reviving midas on midtig01 after it was not used for a while, we see
this. Notice negative "last called" numbers. Looks like a time_t wraparound
somewhere...

[local:tigress:S]/>scl -w
Name                Host                Timeout    Last called
mhttpd              midtig01.triumf.ca  10000      -2037131082
Logger              midtig01.triumf.ca  10000      -2037131166
Analyzer            midtig01.triumf.ca  10000      -2037131048
JACQ                midtig01.triumf.ca  10000      -2037131667
mhttpd1             midtig01.triumf.ca  10000      325
ODBEdit             midtig01.triumf.ca  10000      829

K.O.
    Reply  20 Nov 2003, Konstantin Olchanski, , cannot shutdown defunct clients 
> While reviving midas on midtig01 after it was not used for a while ... 
> [local:tigress:S]/>scl -w
> Name                Host                Timeout    Last called
> mhttpd              midtig01.triumf.ca  10000      -2037131082

These clients cannot be deleted. I tried:
1) shutdown from mhttpd "programs" page -> "cannot shutdown client"
2) "sh mhttpd" from odbedit -> 
   [midas.c:5298:cm_shutdown] cannot connect to client mhttpd on host
   midtig01.triumf.ca, port 32853
   Client mhttpd not active
3) in odbedit: "cd /system/clients; rm xxxx"
   refuses to delete the key

Lacking any better ideas, I deleted them via brain surgery on the odb file:
1) stop everything
2) ipcrm the SYSV shared memory segment
3) odbedit -> save xxx.odb
4) xemacs xxx.odb, delete offending odb entries
5) rm .ODB.SHM
6) odbedit -> load xxx.odb
7) voila, bad clients gone, gone, gone.

K.O.
       Reply  20 Nov 2003, Stefan Ritt, , cannot shutdown defunct clients 
> 1) shutdown from mhttpd "programs" page -> "cannot shutdown client"
> 2) "sh mhttpd" from odbedit -> 
>    [midas.c:5298:cm_shutdown] cannot connect to client mhttpd on host
>    midtig01.triumf.ca, port 32853
>    Client mhttpd not active
> 3) in odbedit: "cd /system/clients; rm xxxx"
>    refuses to delete the key

Have you tried a "cleanup" in ODBEdit?

The "last_activity" is a 32-bit int, filled with milliseconds. So indeed it 
wraps around after about one month. So if all clients are stopped 
simultaneously the hard way (such that nobody's watchdog can clean any other 
client from the ODB), like with a power off, and you start the thing one 
month later, there might be a problem. I never tried that before. So next 
time try a cleanup. If that does not help, we should change last_activity 
from INT to DWORD. This way it's always positive and the wraparound does not 
hurt.
          Reply  20 Nov 2003, Konstantin Olchanski, , cannot shutdown defunct clients 
> > 1) shutdown from mhttpd "programs" page -> "cannot shutdown client"
> Have you tried a "cleanup" in ODBEdit?

Nope. Will try next time...

> The "last_activity" is a 32-bit int, filled with milliseconds. So indeed it 
> wraps around after about one month.... change last_activity 
> from INT to DWORD. This way it's alway positive and the wraparound does not 
> hurt.

INT == "int", wraparound in 1 month
DWORD == "unsigned int", wraparound in 2 months

should we make it the 64-bit "long long" (or C99's "int64_t")?

K.O.
             Reply  20 Nov 2003, Stefan Ritt, , cannot shutdown defunct clients 
> INT == "int", wraparound in 1 month
> DWORD == "unsigned int", wraparound in 2 months
> 
> should we make it the 64-bit "long long" (or C98's "int64_t")?

Won't work on all supported compilers. The point is that DWORD wraps around in 
2 months, but the difference of two DWORDs is always positive, never negative 
like you had it. We only have to check whether the difference of the current 
time (in ms) minus the last_activity of a client is larger than the timeout, 
typically 10 seconds or so. If you have a wraparound on 32-bit DWORD, the 
difference is still ok. Like

current "time" : 0x0000 0100
last_activity:   0xFFFF FF00

then current_time - last_activity = 0x00000100 - 0xFFFFFF00 = 0x00000200 if 
calculated with 32-bit values.
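The same point as a minimal self-contained sketch (DWORD here just stands for the usual 32-bit unsigned type):
------------
#include <stdio.h>
#include <stdint.h>

typedef uint32_t DWORD;

int main(void)
{
   DWORD last_activity = 0xFFFFFF00u;   /* just before the wraparound */
   DWORD current_time  = 0x00000100u;   /* just after the wraparound  */
   DWORD timeout       = 10000;         /* 10 s in ms                 */

   /* unsigned subtraction wraps modulo 2^32, so the elapsed time is
      still a small positive number: 0x00000200 = 512 ms */
   DWORD elapsed = current_time - last_activity;
   printf("elapsed = 0x%08X ms, timed out: %s\n",
          (unsigned) elapsed, elapsed > timeout ? "yes" : "no");
   return 0;
}
------------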
             Reply  20 Nov 2003, Renee Poutissou, , cannot shutdown defunct clients 
Indeed the ODB command "cleanup" really works. I have used it several
times with the TWIST DAQ and regularly with the BNMR/MUSR setups where
we have these stubborn clients (i.e. feepics) that do not want to shut down
cleanly.  
But there is one problem with "cleanup". It has a hardwired timeout of
2 seconds.  This is a problem for tasks like lazylogger which set a timeout
of 60 seconds when moving the tape. So BEWARE, if you issue the "cleanup"
command, it might kill some clients who have set their timeout to longer
than 2 seconds. 

I have asked Stefan to change this before. He said that, to be effective,
the timeout value used for "cleanup" has to be rather short. 
One possibility would be to allow for a user-entered "cleanup" timeout.
The default could stay at 2 seconds. 




> > Have you tried a "cleanup" in ODBEdit?
> 
> Nope. Will try next time...
> 
                Reply  24 Nov 2003, Stefan Ritt, , cannot shutdown defunct clients 
> But there is one problem with "cleanup". It has a hardwired timeout of
> 2 seconds.  This is a problem for tasks like lazylogger which set a timeout
> of 60 seconds when moving the tape. So BEWARE, if you issue the "cleanup"
> command, it might kill some clients who have setup their timeout to longer
> than 2 seconds. 
> 
> I have asked Stefan to change this before. He said that, to be effective,
> the timeout value used for "cleanup" has to be rather short. 
> One possibility, would be to allow for a user entered "cleanup" timeout.
> The default could stay at 2 seconds. 

I changed the behaviour of cleanup by adding an extra parameter 
ignore_timeout to cm_cleanup(). Now, in ODBEdit, a "cleanup" obeys the 
timeout set by the clients. The problem with that is that if the logger crashes, 
for example, and its timeout is set to 5 min., it cannot be cleaned up any 
more for the next five minutes, and therefore not be restarted, wasting 
precious beam time. That's why I originally hard-wired the "cleanup" timeout 
to 2 sec. Now I added a flag "-f" to the ODBEdit cleanup command which works 
in the old fashion with a 2 sec. timeout. So a "cleanup" alone won't kill a 
logger which is currently rewinding a tape, but a "cleanup -f" does.

I also changed internal timeouts from INT to DWORD, which should fix the 
problem Konstantin reported recently (re-starting an experiment after 
several weeks). New changes are commited, but I only did basic tests. So 
please try the new code and tell me if there is any problem.

- Stefan
Entry  17 Nov 2003, Stefan Ritt, , Revised MVMESTD 
Let me propose a revised scheme for midas standard VME calls (mvmestd.h). 

Pierre mentioned some limitations before, and I now also find some things 
to improve. Right now, the vme_open() call retrieves a handle. For some 
interfaces (like SBS/Bit3), one has to obtain separate handles for 
different addressing modes A24D32/A32D32 and so on, which I find a bit 
troublesome. I would rather keep the handle internally, invisible to the 
user, and use ioctl() statements to change the address/data mode. 

So the API could look like:

vme_open()       Deprecated, will be removed
vme_init(void)   Standard initialization, open device(s), stores handles
                 internally in a table
vme_exit(void)   Deallocates any memory, close handles

vme_read(void *dst, DWORD vme_addr, DWORD size)
vme_write(void *src, DWORD vme_addr, DWORD size)

vme_ioctl(int request, int *param)

                 Request is one of 
                   VME_IOCTL_CRATE_SET/GET
                     Sets VME crate (in case several interfaces are
                      plugged into a single PC, meaningless for embedded CPUs)
                   VME_IOCTL_DEST_SET/GET
                     VME_BUS/VME_RAM/VME_LM for VME bus, RAM in VME 
                     interface, or LM for local memory (used in Bit3 
                     interface)
                   VME_IOCTL_AMOD_SET/GET
                     Sets/Retrieves VME AMOD (= VME_AMOD_xxx as currently
                     defined in mvmestd.h)
                   VME_IOCTL_DSIZE_SET/GET
                     Sets/Retrieves VME data size (D8/D16/D32/D64)
                   VME_IOCTL_DMA_SET/GET
                     Enable/Disable DMA, should be independent of AMOD
                   VME_IOCTL_INTR_ATTACH/DETACH/ENABLE/DISABLE
                     Set VME interrupts
                   VME_IOCTL_AUTO_INCR_SET/GET
                      Set auto-increment of source pointer, can be disabled
                     for FIFO readout

vme_mmap(void **ptr, DWORD vme_addr, DWORD size)
vme_unmap(void *ptr, DWORD size)
                  Map/Unmap VME to local memory

vme_read2(void *dst, DWORD vme_addr, DWORD size, DWORD flags)
vme_write2(void *src, DWORD vme_addr, DWORD size, DWORD flags)
                 With these functions one can directly specify the flags
                 usually managed by vme_ioctl(). Useful for applications
                 where the address modifier for example has to be
                 different in each read/write operation.  

Note that the vme_read/write functions do not have a VME handle any more, 
nor an address modifier. This is all accomplished with vme_ioctl() calls.
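To make the proposal a bit more concrete, a short usage sketch of the calls listed above. The constant names used here (VME_AMOD_A24, VME_DSIZE_D16) and the WORD type are assumptions; the real names would be whatever mvmestd.h defines.
------------
#include "mvmestd.h"

void proposed_api_example(void)
{
   WORD data[64];
   int  param;

   vme_init();                                /* open interface(s), keep handles internally */

   param = VME_AMOD_A24;                      /* address modifier (name assumed)            */
   vme_ioctl(VME_IOCTL_AMOD_SET, &param);

   param = VME_DSIZE_D16;                     /* data size (name assumed)                   */
   vme_ioctl(VME_IOCTL_DSIZE_SET, &param);

   vme_read(data, 0x100000, sizeof(data));    /* 128 bytes from VME address 0x100000        */

   vme_exit();                                /* close handles, free memory                 */
}
------------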

Please have a look at this proposal, compare it with what you do currently 
in VME, and let me know if we should add/modify something. I volunteer to 
implement the API for the SBS/Bit3 617 and the Struck SIS1100/3100 
interfaces; for VxWorks, somebody at TRIUMF should take care of it.
    Reply  20 Nov 2003, Pierre-André Amaudruz, Konstantin Olchanski, , Revised MVMESTD 
Before we try to merge the different access schemes for the different VME hardware,
we present the "optimal" configuration for the VMIC setup. This is a first shot, so take it
with caution.
From these definitions, we should be able to work out a compromise and come up with
a satisfactory standard.

A) The VMIC vme_slave_xxx() options are not considered.
B) The interrupt handling can certainly match the 4 entries required in the user frontend
    code i.e. Attach, Detach, Enable, Disable.

I don't understand your argument that the handle should be hidden. In case of multiple
interfaces, how do you refer to a particular one if not specified? 
The following scheme does require a handle for referring to the proper (device AND window).

1 ) deviceHandle = vme_init(int devNumber);
    Even though the VMIC doesn't deal with multiple devices,
    the SIS/PCI does and needs to init on a specific PCI card.
    Internally:
      opening of the device (/dev/sisxxxx_1) (ignored in case of VMIC).
      Possibly including a mapping to a default VME region of default size with default AM
      (VMIC: 16MB, A24). This way, in a single call, you get a valid handle for full VME access
      in A24 mode. This option needs to be elaborated. But in principle you need to declare the 
     VME region that you want to work on (vme_map).

2) mapHandle = vme_map(int deviceHandle, int vmeAddress, int am, int size);
    Return a mapHandle specific to a device and window. The AM has to be specified.
    Whatever the operations to get there, the mapHandle is a reference to that setting.
    It could just fill a map structure.
    Internally:
      WindowHandle[deviceHandle] = vme_master_create(BusHandle[deviceHandle], ...
      WindowPtr[WindowHandle] = vme_master_window(BusHandle[deviceHandle]
                                                                           , WindowHandle[deviceHandle]...

3) vme_setmode(mapHandle, const int DATA_SIZE, const int AM
                           , const BOOL ENA_DMA, const BOOL ENA_FIFO);
    Mainly used for vme_block_read/write. Defines, for the following reads, the data size and 
    AM in case of DMA (could use a different DMA mode than the window definition for optimal
    transfer).

    Predefine the mode of access:
    DATA_SIZE : D8, D16, D32
    AM             : A16, A24, A32, etc...
    enaDMA     : optional if available.
    enaFIFO     : optional for block read for autoincrement source pointer.

Remark:
PAA- I can imagine this function as a vme_ioctl(int mapHandle, int *param)
        such that extension of functionality is possible. But by passing const int
        arguments, the optimizer is able to substitute and reduce the internal code.

4)   
   uint_8Value   = vme_readD8  (int mapHandle, uint_64 vmeSrceOffset)
   uint_16Value = vme_readD16 (int mapHandle, uint_64 vmeSrceOffset)
   uint_32Value = vme_readD32 (int mapHandle, uint_64 vmeSrceOffset)
   Single VME read access. In the VMIC case, this access is always through mapping.
   Value = *(WindowPtr[WindowHandle] + vmeSrceOffset) 
   or 
   Value = *(WindowStruct->WindowPtr[WindowHandle] + vmeSrceOffset) 
 
5)   
   status  = vme_writeD8   (int mapHandle, uint_64 vmeSrceOffset, uint_8 Value)
   status  = vme_writeD16 (int mapHandle, uint_64 vmeSrceOffset, uint_16 Value)
   status  = vme_writeD32 (int mapHandle, uint_64 vmeSrceOffset, uint_32 Value)
   Single VME write access.

6)
   nBytes = vme_block_read(mapHandle, char * pDest, uint_64 vmeSrceOffset, int size);
   Multiple read access. Can be done through a standard loop or DMA if available.
   nBytes < 0 :  error
   The incremented destination is simply pDest + nBytes; no need to pass **pDest for autoincrement.

7)
   nBytes = vme_block_write(mapHandle, uint_64 vmeSrceOffset, char *pSrce, int size);
   Multiple write access.
   nBytes < 0 :  error
   The incremented source is simply pSrce + nBytes; no need to pass **pSrce for autoincrement.

8) status = vme_unmap(int mapHandle)
   Clean up internal pointers or structures of the given mapHandle only.

9) status = vme_exit()
   Clean up the deviceHandle and release the device.
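
To illustrate the intended calling sequence, a rough and purely hypothetical
readout fragment could look like this; the constant names (A24, D16, D32), the
VME addresses and the "module" are placeholders, not part of the proposal:

   int     devHandle, mapHandle, nBytes;
   DWORD   buffer[1000];
   uint_16 csr;

   devHandle = vme_init(0);                                 /* first (only) interface     */
   mapHandle = vme_map(devHandle, 0x780000, A24, 0x10000);  /* 64 kB window at 0x780000   */

   vme_setmode(mapHandle, D16, A24, FALSE, FALSE);          /* single-cycle D16 accesses  */
   csr = vme_readD16(mapHandle, 0x0);                       /* read module CSR            */
   vme_writeD16(mapHandle, 0x2, 1);                         /* arm module                 */

   vme_setmode(mapHandle, D32, A24, TRUE, TRUE);            /* DMA block read, FIFO mode  */
   nBytes = vme_block_read(mapHandle, (char *) buffer, 0x100, sizeof(buffer));
   if (nBytes < 0)
      /* handle error */ ;

   vme_unmap(mapHandle);
   vme_exit();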
       Reply  21 Nov 2003, Stefan Ritt, , Revised MVMESTD 
Thanks for your contribution. Let me try to map your functionality to mvmestd calls:

> A) The VMIC vme_slave_xxx() options are not considered.

We could maybe do that through mvme_mmap(SLAVE, ...) instead of mvme_mmap(MASTER, ...)

> B) The interrupt handling can certainly match the 4 entries required in the user frontend
>     code i.e. Attach, Detach, Enable, Disable.

mvme_ioctl(VME_IOCTL_INTR_ATTACH/DETACH/ENABLE/DISABLE, func())

> I don't understand your argument that the handle should be hidden. In case of multiple
> interfaces, how do you refer to a particular one if not specified? 
> The following scheme does require a handle for referring to the proper (device AND window).

Four reasons for that:

1) For the SBS/Bit3, you need a handle for each address mode. So if I have two crates (and I do in our 
current experiment), and have to access modules in A16, A24 and A32 mode, I need in total 6 handles. 
Sometimes I mix them up by mistake, and wonder why I get bus errors. 

2) Most installations will only have a single crate (as your VMIC). So if there is only one crate, why 
bother with a handle? If you have hundreds of accesses in your code, you save some redundant typing work.

3) A handle is usually kept global, which is considered poor coding style.

4) Our MCSTD and MFBSTD functions also do not use a handle, so people used to those libraries will find it 
more natural not to use one.

> 1 ) deviceHandle = vme_init(int devNumber);
>     Even though the VMIC doesn't deal with multiple devices,
>     the SIS/PCI does and needs to init on a specific PCI card.
>     Internally:
>       opening of the device (/dev/sisxxxx_1) (ignored in case of VMIC).
>       Possibly including a mapping to a default VME region of default size with default AM
>       (VMIC: 16 MB, A24). This way, in a single call you get a valid handle for full VME access
>       in A24 mode. This option needs to be elaborated, but in principle you need to declare the 
>      VME region that you want to work on (vme_map).

Just vme_init(); (like fb_init()).

This function takes the first device, opens it, and stores the handle internally. It sets the AM to a 
default value and creates a mapping table which is initially empty or mapped to a default VME region. 
If one wants to access a secondary crate, one calls vme_ioctl(VME_IOCTL_CRATE_SET, 2), which opens the 
secondary crate and stores the new handle in the internal table if applicable.

> 2) mapHandle = vme_map(int deviceHandle, int vmeAddress, int am, int size);
>     Return a mapHandle specific to a device and window. The am has to be specified.
>     Whatever the operations needed to get there, the mapHandle is a reference to that setting.
>     It could just fill a map structure.
>     Internally:
>       WindowHandle[deviceHandle] = vme_master_create(BusHandle[deviceHandle], ...
>       WindowPtr[WindowHandle] = vme_master_window(BusHandle[deviceHandle]
>                               , WindowHandle[deviceHandle]...

The best would be if an mvme_read(...) to an unmapped region would automatically (internally) trigger a 
vme_map() call, and store the WindowHandle and WindowPtr internally. The advantage of this is that code 
written for the SIS for example (which does not require this kind of mapping) would work without change 
under the VMIC. The disadvantage is that for each mvme_read(), the code has to scan the internal mapping 
table to find the proper window handle. Now I don't know how much overhead this would be, but I guess a 
single for() loop over a couple of entries in the mapping table is still faster than a microsecond or so, 
thus making it negligible in a block transfer. 

> 3) vme_setmode(mapHandle, const int DATA_SIZE, const int AM
>                            , const BOOL ENA_DMA, const BOOL ENA_FIFO);
>     Mainly used for the vme_block_read/write. Define for following read the data size and 
> am in case of DMA (could use other DMA mode than window definition for optimal
>     transfer).
> 
>     Predefine the mode of access:
>     DATA_SIZE : D8, D16, D32
>     AM             : A16, A24, A32, etc...
>     enaDMA     : optional if available.
>     enaFIFO     : optional for block read for autoincrement source pointer.
> 
> Remark:
> PAA- I can imagine this function to be a vme_ioctl (int mapHandle, int *param)
>         such that extension of functionality is possible. But by passing const int
>         arguments, the optimizer is able to substitute and reduce the internal code.

Right. mvme_ioctl(VME_IOCTL_AMOD_SET/DSIZE_SET/DMA_SET/AUTO_INCR_SET, ...)

>    uint_8Value   = vme_readD8  (int mapHandle, uint_64 vmeSrceOffset)
>    uint_16Value = vme_readD16 (int mapHandle, uint_64 vmeSrceOffset)
>    uint_32Value = vme_readD32 (int mapHandle, uint_64 vmeSrceOffset)
>    Single VME read access. In the VMIC case, this access is always through mapping.
>    Value = *(WindowPtr[WindowHandle] + vmeSrceOffset) 
>    or 
>    Value = *(WindowStruct->WindowPtr[WindowHandle] + vmeSrceOffset) 

mvme_read(void *dst, DWORD vme_addr, DWORD size); would cover this in a single call. Note that the SIS for 
example does not have memory mapping, so if one consistently uses mvme_read(), it will work on both 
architectures. Again, this takes some overhead. Consider for example a possible VMIC implementation

mvme_read(char *dst, DWORD vme_addr, DWORD size)
{
  int i;

  /* look for an existing window that contains [vme_addr, vme_addr+size) */
  for (i=0 ; table[i].valid ; i++)
    {
    if (table[i].start <= vme_addr && table[i].end >= vme_addr+size)
      break;
    }

  /* not mapped yet: create a new master window and remember it */
  if (!table[i].valid)
    {
    table[i].window_handle = vme_master_create(...);
    table[i].ptr           = vme_master_window(...);
    }

  /* adjust the data size for sub-word accesses */
  if (size == 2)
    mvme_ioctl(VME_IOCTL_DSIZE_SET, D16);
  else if (size == 1)
    mvme_ioctl(VME_IOCTL_DSIZE_SET, D8);

  memcpy(dst, table[i].ptr + vme_addr - table[i].start, size);
}

Note this is only some rough code; it would need more checking etc. But you see that for each access the for() 
loop has to be evaluated. Now I know that for the SBS/Bit3 and for the SIS a single VME access takes 
~0.5us. So the for() loop could be much faster than that. But one has to try. If one experiment needs the 
ultimate speed, it can use the native VMIC API, but then loses the portability. I'm not sure if one needs 
the automatic DSIZE_SET, maybe it works without.

>    status  = vme_writeD8   (int mapHandle, uint_64 vmeSrceOffset, uint_8 Value)
>    status  = vme_writeD16 (int mapHandle, uint_64 vmeSrceOffset, uint_16 Value)
>    status  = vme_writeD32 (int mapHandle, uint_64 vmeSrceOffset, uint_32 Value)
>    Single VME write access.

Ditto: mvme_write(void *src, DWORD vme_addr, DWORD size);

>    nBytes = vme_block_read(mapHandle, char * pDest, uint_64 vmeSrceOffset, int size);
>    Multiple read access. Can be done through standard do loop or DMA if available.
>    nBytes < 0 :  error
>    Incremented pDest  = (pDest + nBytes); Don't need to pass **pDest for autoincrement.

mvme_ioctl(VME_IOCTL_DMA_SET, TRUE);
n = mvme_read(char *pDest, DWORD vme_addr, DWORD size);

>    nBytes = vme_block_write(mapHandle, uint_64 vmeSrceOffset, char *pSrce, int size);
>    Multiple write access.
>    nBytes < 0 :  error
>    Incremented pSrce  = (pSrce + nBytes); Don't need to pass **pSrce for autoincrement.

Ditto.

> 8) status = vme_unmap(int mapHandle)
>    Cleanup internal pointers or structure of given mapHandle only.

mvme_unmap(DWORD vme_addr, DWORD size)

This scans through the internal table to find the handle, then calls vme_unmap(mapHandle).

> 9) status = vme_exit()
>    Cleanup deviceHandle and release device.

mvme_exit();

Let me know if this all makes sense to you...

- Stefan
Entry  20 Nov 2003, Konstantin Olchanski, , set-uid-root midas programs 
I see that MIDAS installs several set-uid-root programs into /usr/local/bin.
In this day and age of evil computer hackers, this is not a good idea and
we should Do Something (TM) about it. Here is my risk assessment:

[olchansk@midtis06 midas]$ ls -l /usr/local/bin | grep wsr
-rwsr-sr-x    1 root     root        25811 Nov 20 09:27 dio
-rwsr-sr-x    1 root     root       344553 Nov 20 09:27 mhttpd
-rwsr-sr-x    1 root     root        70736 Nov 20 09:27 webpaw

dio- is required to be setuid-root to gain I/O permissions. I looked at it a
few times, and it is probably safe, but I would like to get a second
opinion. Stephan, can you show it to your local security geeks?

mhttpd- definitely unsafe. It has more buffer overflows than I can shake a
stick at. Why is it suid-root anyway?

webpaw- what is it?!?

K.O.
    Reply  20 Nov 2003, Stefan Ritt, , set-uid-root midas programs 
> dio- is required to be setuid-root to gain I/O permissions. I looked at it a
> few times, and it is probably safe, but I would like to get a second
> opinion. Stephan, can you show it to your local security geeks?
> 
> mhttpd- definitely unsafe. It has more buffer overflows than I can shake a
> stick at. Why is it suid-root anyway?
> 
> webpaw- what is it?!?

dio was written by Pierre. 

mhttpd and webpaw both are web servers. webpaw is used to display PAW 
pictures over the web. If you run these programs on a port <1024, and most 
people do run them on port 80 (at least at PSI), they need to be setuid-root. 
Unless you know a better way to do that...
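
As a side note, the usual way to limit the exposure is to keep only the bind()
privileged and permanently drop root before any network input is parsed. This is
just a rough sketch of that standard pattern, not how mhttpd/webpaw are written
today:

   #include <sys/types.h>
   #include <sys/socket.h>
   #include <netinet/in.h>
   #include <string.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <unistd.h>

   int open_http_port(int port)
   {
      int sock;
      struct sockaddr_in addr;

      sock = socket(AF_INET, SOCK_STREAM, 0);
      if (sock < 0) {
         perror("socket");
         exit(1);
      }

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(port);

      if (bind(sock, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
         perror("bind");       /* binding port 80 is the only step that needs root */
         exit(1);
      }

      /* privileged part done: give up root for good before serving requests */
      if (setuid(getuid()) < 0) {
         perror("setuid");
         exit(1);
      }

      listen(sock, 5);
      return sock;
   }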
Entry  15 Nov 2003, Konstantin Olchanski, , Phantom "open records" 
Sometimes (maybe after a client uncleanly exits?), I see phantom "open
records", for example:
[local:twist:Running]Gas>sor
/Equipment/Gas/Common open 2 times by fe1hp 
/Equipment/Gas/Variables open 1 times by Logger 
/Equipment/Gas/Variables/Flow1 open 2 times by uBeamTcl1 uBeamTcl 
/Equipment/Gas/Settings/Command open 2 times by fe1hp 
/Equipment/Gas/Statistics open 1 times by 

Note the blank client name in the "/Equipment/Gas/Statistics" line.

This causes these warnings from mfe.c:
Cannot init equipment record, probably other FE is using it
Cannot delete statistics record, error 320
Cannot create statistics record, error 320
Cannot open statistics record, error 318. Probably other FE is using it

Then the number of generated events for this front end is never incremented.

Also attempts to delete this "open" record fail:
[local:twist:Running]Gas>del /Equipment/Gas/Statistics
Are you sure to delete the key
"/Equipment/Gas/Statistics"
and all its subkeys? (y/[n]) y
key is open by other client

How do I go about writing the db_validate_xxx() code to clean up this
bogosity? I am not too familiar with the implementation of "open record"...

K.O.
    Reply  16 Nov 2003, Stefan Ritt, , Phantom  
I have seen the same behaviour and it annoys me, too. What I did in the past 
is a "cleanup" in ODBEdit which removes these open records. I have some code 
in cm_watchdog() which should take care of that. If a client is dead, it 
gets removed from the ODB, and its open records should get their notify_count 
decremented. So obviously this code has some bug. I plan to do the following 
in the coming week (now that I have some spare time):

- replace most db_create_record() calls with something better. Maybe 
db_check_record(..., correct_flag), which creates the record only if it does 
not exist at all and otherwise checks the structure. If correct_flag is TRUE, it 
corrects the structure (by calling db_create_record()); if it is FALSE, it just 
returns an error code. This way one can decide from case to case which option 
is better. For /Runinfo, the flag would be FALSE, maybe with a 
notification that /Runinfo is different from the compiled-in structure 
and one has to recompile the application. (A call sketch follows at the end of this message.)

- revisit the open record issue from dying frontends. I vaguely remember that 
I tried to kill a frontend (kill -9), waited until the watchdog cleaned up its 
entries, and it worked fine. So the problem is more to reproduce the issue 
described in the previous elog entry. 
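
To make the first point concrete, a call to the proposed db_check_record() could 
look roughly like this (the signature and behaviour are guessed from the 
description above; the function does not exist yet):

   status = db_check_record(hDB, 0, "/Runinfo", strcomb(runinfo_str), FALSE);
   if (status != DB_SUCCESS)
      cm_msg(MERROR, "frontend_init",
             "/Runinfo does not match the compiled-in structure, please recompile");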
       Reply  20 Nov 2003, Stefan Ritt, , Phantom  
I tried to reproduce the problem, but without success. So in case this happens 
again, one should debug the code in cm_watchdog() next to the line

/* decrement notify_count for open records and clear exclusive mode */
...

So if a killed client is removed from the ODB via the watchdog (or a "cleanup" 
is done in ODBEdit), the notify_count should be decreased and thus the "open 
records" should be closed.
Entry  17 Nov 2003, Pierre-André Amaudruz, , Lazylogger application 
- Remove temporary "/Programs/Lazy" creation.
- Fix Rate calculation for Web display.
- Change FTP channel description (see help).
Entry  31 Oct 2003, Konstantin Olchanski, , more odb "run number" error checking 
I added error checking to the places where we read "/runinfo/run number". In
general, I do this:

  status = db_get_value("/runinfo/run number",&run_number);
  assert(status==SUCCESS);
  assert(run_number >= 0); (and run_number>0, where appropriate)

Here is the rationale: if we cannot read the run number, something must be
terribly wrong. I cannot think of any recovery action other than to
abort() and make a core dump for our debugging enjoyment.

I considered and rejected adding a "retry" loop: if we allow db_get_value()
to intermittently fail, then every use of it has to be wrapped in a retry
loop, which then should be inside db_get_value(), making it pointless to
have external "retry" loops.

I am now pondering proposing a "db_get_value_cannot_possibly_fail()"
function (it would abort(), exit() with an error or commit harakiri if it
can't get the value). The way most db_xxx() functions are used in midas,
maybe they should be made "void" and unfailable, with "STATUS
db_xxx_yes_I_can_fail_and_return_an_error_code()" evil twins. I guess this
is why "they" invented C/C++ exceptions. Anyway, something to think about.
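
For illustration, such a wrapper could look like this (only a sketch, using the 
same abbreviated db_get_value() argument list as in the example above; the real 
call takes more arguments):

   INT db_get_value_cannot_possibly_fail(const char *key_name, INT *value)
   {
      INT status = db_get_value(key_name, value);   /* abbreviated as above */

      if (status != SUCCESS) {
         cm_msg(MERROR, "db_get_value_cannot_possibly_fail",
                "cannot read \"%s\", status %d - aborting", key_name, status);
         abort();
      }
      return status;
   }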

Affected files:
src/lazylogger.c
src/odbedit.c
src/mlogger.c
src/mfe.c
src/odb.c
src/mana.c
src/midas.c
src/mhttpd.c

K.O.
    Reply  01 Nov 2003, Stefan Ritt, , more odb  
> I added error checking to the places where we read "/runinfo/run number". In
> general, I do this:

> Affected files:
> src/lazylogger.c
> src/odbedit.c
> src/mlogger.c
> src/mfe.c
> src/odb.c
> src/mana.c
> src/midas.c
> src/mhttpd.c

Now YOU broke the system by editing all these files with something I consider 
temporary debugging code. A run number of zero is *VALID*. If I want to make 
sure a new experiment starts with run number #1, I put a run number of 0 into 
the ODB. So on the first start the number is incremented by one, which results 
in run number one. So please remove those checks which prevent me from 
doing that. Again, your "run number zero" problem is somehow specific to your 
environment, and I would not put all these tests into the distribution, 
because this can have side effects, like the one I described above.

- Stefan
       Reply  01 Nov 2003, Konstantin Olchanski, , more odb  
> > I added error checking to the places where we read "/runinfo/run number". 
> Now YOU broke the system by editing all these files with something I consider 
> temporary debugging code. A run number of zero is *VALID*.

I think I broke nothing. I do know that run number 0 is a valid ODB value. Here
is an audit of all places where I abort on invalid run numbers:

mana.c: line 3676: assert(current_run_number > 0);
we take the run number from an event and write it into ODB. Events cannot have
a negative or zero run number.

mana.c:analyze_run(): line 4632: assert(run_number > 0);
we are asked to analyze run "run_number". zero or negative is not valid.

midas.c:assert(run_number > old_run_number);
midas.c:assert(run_number > 1);
this code is not in CVS.

odbedit.c: line 2563: assert(old_run_number >= 0);
run number zero is valid

odbedit.c: line 2641: assert(new_run_number > 0);
starting a new run number zero is not valid

mfe.c: line 1786: if (run_number<=0) cm_msg(MERROR, "main", "aborting on attempt
to use invalid run number %d", run_number);
auto restart from run 0 to 1 is not valid

midas.c: line 3917: if (run_number<=0) cm_msg(MERROR, "cm_transition", "aborting
on attempt to use invalid run number %d",run_number);
transition to run zero or negative is not valid

midas.c: line 16101: if (run_number<0) cm_msg(MERROR, "el_submit", "aborting on
attempt to use invalid run number %d", run_number);
negative run numbers are not valid

mlogger.c: line 3301: if (run_number<=0) cm_msg(MERROR, "main", "aborting on
attempt to use invalid run number %d", run_number);
auto restart from run 0 to run 1 is not valid

K.O.
          Reply  14 Nov 2003, Stefan Ritt, , more odb  
OK, I apologize. It's all OK. Thanks for clarifying. Concerning the asserts, it 
would be nice to be able to disable them in release code. Under Windows, the 
assert() is actually a macro which expands to zero if NDEBUG is defined (a small 
illustration follows at the end of this message). I believe it's the same under 
Linux, but I don't know about VxWorks. So we have three options:

1) Keep asserts always. This might possibly slow down a DAQ system, but I'm not 
sure how much. It might be negligible.

2) Disable asserts by default (standard make). Only the "experts" can enable them 
in the makefile (by removing NDEBUG), since only they know what to do with the 
assertion messages.

3) Let the user decide on the standard installation. Maybe have two libraries, 
one debug, one non-debug. The debug one can even have the compiler optimization 
disabled, which makes debugging easier.

So what is your opinion (comments from others are welcome as well) on which way 
to go? 
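
For reference, the NDEBUG mechanism in a nutshell (plain standard C, nothing 
MIDAS-specific; just a minimal sketch):

   #include <assert.h>        /* assert() expands to ((void) 0) when NDEBUG is defined */

   void check_run_number(int run_number)
   {
      assert(run_number > 0); /* cc -g demo.c           -> check is active           */
                              /* cc -DNDEBUG -O2 demo.c -> check is compiled away    */
   }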
Entry  31 Oct 2003, Konstantin Olchanski, , Do not frob "/runinfo" in mhttpd.c 
I found where we tickle the race condition in db_create_record().

1) in mhttpd.c,  every time we show the status page, we call
db_create_record(hDB, 0, "/Runinfo", strcomb(runinfo_str));
2) internally db_create_record() deletes /RunInfo
3) other programs that read "/runinfo/run number" while it is deleted do not
check the db_get_value() error code and happily get a zero run number.

Stephan fixed the race condition, and now I committed an mhttpd.c change that
only calls db_create_record(hDB, 0, "/Runinfo", strcomb(runinfo_str)); if
/runinfo does not exist. This seems redundant with a similar call in
cm_connect_experiment1(), which is called each time a new client starts up.

Files changed:
src/mhttpd.c

K.O.
    Reply  01 Nov 2003, Stefan Ritt, , Do not frob  
> I found where we tickle the race condition in db_create_record().
> 
> 1) in mhttpd.c,  every time we show the status page, we call
> db_create_record(hDB, 0, "/Runinfo", strcomb(runinfo_str));
> 2) internally db_create_record() deletes /RunInfo
> 3) other programs that read "/runinfo/run number" while it is deleted do not
> check the db_get_value() error code and happily get a zero run number.
> 
> Stephan fixed the race condition, and now I committed an mhttpd.c change that
> only calls db_create_record(hDB, 0, "/Runinfo", strcomb(runinfo_str)); if
> /runinfo does not exist. This seems redundant with a similar call in
> cm_connect_experiment1(), which is called each time a new client starts up.

The reason for the db_create_record() is the following: Assume that we change 
the /runinfo structure by adding an additional variable in the future. If we 
run a "new" mhttpd on an "old" experiment, the "runinfo" C structure does not 
match the ODB contents. The db_create_record() ensures that the ODB structure 
exactly matches the C structure. I agree with you that this can cause 
potential problems. But most of them should be fixed by the additional lock() 
I added recently. So other programs cannot read the run number while it is 
deleted.

One could think of checking the record size and re-creating the runinfo if 
the ODB record size does not match the C record size. But this does not 
prevent the potential error that some variables are reversed in order; they 
would then be mapped wrongly to the C runinfo structure.
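
Schematically (with made-up field names; the real RUNINFO in midas.h is larger), 
the paired definitions look like this:

   typedef struct {
      INT  state;
      INT  run_number;
      char start_time[32];
   } RUNINFO_EXAMPLE;                 /* illustrative subset only */

   #define RUNINFO_EXAMPLE_STR(_name) char *_name[] = {\
   "[.]",\
   "State = INT : 0",\
   "Run number = INT : 0",\
   "Start time = STRING : [32] Thu Oct 30 12:00:00 2003",\
   "",\
   NULL }

db_create_record(hDB, 0, "/Runinfo", strcomb(runinfo_str)) rebuilds the ODB subtree 
from the ASCII description, so that a subsequent db_get_record() maps byte-for-byte 
onto the C structure; if the two ever differ in size or field order, the bytes end 
up in the wrong members.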

I see that you are working very hard now on all possible checks for the run number. 
But I would not commit that and make it part of the distribution, since all 
experiments at PSI, for example, do not have this run number problem. Run it 
locally, determine the cause of your problem (the discovery of the race 
condition was already very good; I'm glad that you found it, it should make the 
system much more stable), and we'll fix it. Putting in ASSERTs is a good idea, I 
should have done it from the very beginning. But if you start now, please put 
them in all other 100000 places (;-)

I would not add a db_get_value_cannot_possibly_fail() to the standard 
distribution, because it probably cannot correct the initial problem and then 
will just go into an infinite loop. We should always tackle problems at their 
source. 

If you cannot resolve your zero run number problem, do the following: There 
is a cm_msg(MDEBUG, ...) which only puts a message into the shared memory, 
but not into midas.log. This can be used for real-time debugging. Add such 
messages temporarily in db_get_value() etc. to see what is going on. As soon 
as the run number goes to zero, stop all processes immediately (for example 
by locking the database with db_lock_database), and then look backwards in the 
sysmsg buffer to see what happened *before* the run number went to zero.

- Stefan
       Reply  01 Nov 2003, Konstantin Olchanski, , Do not frob  
> > I found where we tickle the race condition in db_create_record().
> The reason for the db_create_record() is the following: Assume that we change 
> the /runinfo structure...

I think there is a deep fundamental problem with changing data structures "on the
fly". Calling db_create_record("/runinfo") at every show_status_page() does not
fix it.

If I change the runinfo structure, rebuild, relink and restart "mhttpd", the
db_create_record("/runinfo") from cm_connect_experiment() will update the runinfo
structure in ODB. In this case, the call from show_status_page() is redundant. As
a side effect, when we do this, we break every running ODB client: they still
have the old runinfo layout. Not good...

If I change the runinfo structure, rebuild, relink and restart all applications,
*except* for mhttpd, "/runinfo" in ODB will be updated when the first updated
client connects to ODB via the db_create_record("/runinfo") from
cm_connect_experiment(). Then, the old mhttpd will restore the old layout via the 
db_create_record("/runinfo") in show_status_page(), breaking everything. Not good...

If I change the runinfo structure, rebuild, relink and restart everything,
"/runinfo" in ODB will be updated when the first client connects to ODB via the
db_create_record("/runinfo") from cm_connect_experiment(). In this case, the call
from show_status_page() is redundant. This is the only corruption-free scenario.

This lack of integrity enforcement against version skew in binary data structures is,
I think, an ODB design error. Perhaps ODB applications should be prohibited from
direct access to ODB "C" data structures: we cannot ensure that the data layout
in the application and in ODB are the same.

> One could think of checking the record size, and re-creating the runinfo if 
> the ODB record size does not match the C record size. But this does not 
> prevent the potential error that some variables are reversed in order. They 
> are then mapped wrongly to the C runinfo structure.

Exacto.

> I see that you work very hard now on all possible checks for the run number. 
> But I would not commit that and make it part of the distribution...

This is a philosophical issue.

My checks are in line with the "design by contract" school of programming. In a
nutshell, this ideology requires that before I do anything, I should enforce the
validity of my inputs and after I am done, I should enforce the validity of my
outputs. In practice, this translates into liberal use of assert()s *in
production code*.

To ensure that old bugs stay fixed, and that new bugs are promptly discovered, it
is essential that the "contract checks" stay in the production code forever.

But let better writers argue programming philosophy in the literature.

Personally, when hunting down bugs in unstable code, I find this technique to be
vastly superior to the more common approach of "This program has no bugs. Error
checking and assert()s are wasteful. Let's close our eyes and hope no bad things
happen to us (again)".

> But if you start now, please put [asserts] in all other 100000 places (;-)

I know that no good deed goes unpunished, but pewleeze!!!

> If you cannot resolve your zero run number problem, do the following: ...
> [lock ODB, freeze the experiment, look at log files]

This technique is obsolete. Today, we instrument the code with sanity checks
and validity tests. Then all the bugs find themselves with minimal manual
intervention.

K.O.