Entry  04 Nov 2004, Jan Wouters, Forum, Frontend code and the ODB 
I would like to know whether all parameters used by the frontend code have to be in the "Experiment/
Run Parameters" section.  This section can become big and difficult to maintain, because it is one single 
big section of experim.h (EXP_PARAM_DEFINED).  I have parameters the various frontends read at the 
beginning of each run, which set the hardware settings of various devices.  I would like to place these in 
a section all their own, organized by device.  Is this doable? 
    Reply  04 Nov 2004, Stefan Ritt, Forum, Frontend code and the ODB 
Hi Jan,

I usually keep under /Experiment/Run Parameters only those settings which are kind of "global" and thus of
interest to the frontend *and* the analyzer, like a run mode (data/calibration/cosmic/...). Settings more specific
to a frontend I keep under /Equipment/<name>/Settings, where <name> is the equipment name the specific frontend
produces. In your case each frontend will then get its own tree (related to its fragment). Please note that
both of these trees can contain subdirectories, which lets you organize your data better.
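
As an illustration, here is a minimal sketch (not from the original exchange) of how a frontend could read
one of its device settings at begin-of-run; the equipment name "Trigger" and the key "HV voltage" are
placeholders:

/* hedged sketch: read a setting from /Equipment/Trigger/Settings in the
   mfe.c begin_of_run() callback; all names here are hypothetical */
INT begin_of_run(INT run_number, char *error)
{
   HNDLE hDB, hKey;
   float hv_voltage = 0;
   INT size = sizeof(hv_voltage);

   cm_get_experiment_database(&hDB, NULL);
   if (db_find_key(hDB, 0, "/Equipment/Trigger/Settings/HV voltage", &hKey) == DB_SUCCESS)
      db_get_data(hDB, hKey, &hv_voltage, &size, TID_FLOAT);

   /* ... program the hardware with hv_voltage ... */
   return SUCCESS;
}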

Best regards, Stefan.
Entry  02 Nov 2004, Renee Poutissou, Info, Event Builder info in mhttpd Status page 
Information about the Event Builder statistics has been removed from the
Status page in mhttpd.  I heard from Pierre that this information might
be redundant when using the new Event Builder format?
For the TWIST experiment, we are running and cannot switch on the fly
to the new Event Builder format.  It is very important for us to show the users
the rates and statistics coming out of the Event Builder, so I had to put this
piece of code back into mhttpd.
Can I put it back in the distribution? Or do I have to add a special TWIST flag?
Or do I have to keep reinserting this every time there is an update to mhttpd.c?
At the moment, TWIST is generating a couple of updates per week to mhttpd.c.
Entry  22 Oct 2004, Konstantin Olchanski, Bug Fix, mhttpd message colouring 
I committed a fix to the mhttpd logic that decides which messages should be shown in
red colour. Before, any message containing square brackets and colons would be
highlighted in red; now only messages matching the pattern [...:...] are
highlighted. The decision logic was moved into a function message_red(). K.O.
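
For illustration, a minimal sketch (not the actual mhttpd code) of the kind of check described above,
written as a standalone C function:

#include <string.h>

/* return non-zero only if msg contains a '[' ... ':' ... ']' sequence, in that order */
static int message_red(const char *msg)
{
   const char *open  = strchr(msg, '[');
   const char *colon = open  ? strchr(open, ':')  : NULL;
   const char *close = colon ? strchr(colon, ']') : NULL;
   return open != NULL && colon != NULL && close != NULL;
}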
Entry  13 Oct 2004, Konstantin Olchanski, Bug Report, TWIST upgrade bombed... 
The upgrade of TWIST to the latest midas has bombed- we see mevb and mlogger
crashes during shared memory data buffer accesses. I am looking into it and I
will add information as I figure things out. K.O.
    Reply  13 Oct 2004, Pierre-Andre Amaudruz, Bug Report, TWIST upgrade bombed... 
> The upgrade of TWIST to the latest midas has bombed- we see mevb and mlogger
> crashes during shared memory data buffer accesses. I am looking into it and I
> will add information as I figure things out. K.O.

Since 1.9.5 the EventBuilder has been modified. Please consult the documentation,
where the new mevb scheme is explained.
The mevb has been tested successfully with up to 16 frontends (on 15 different CPUs).
Data rates at the EventBuilder were measured at about 50 MB/s without the
logger and ~30 MB/s with the logger.
       Reply  13 Oct 2004, Konstantin Olchanski, Bug Report, TWIST upgrade bombed... 
> > The upgrade of TWIST to the latest midas has bombed- we see mevb and mlogger
> > crashes during shared memory data buffer accesses. I am looking into it and I
> > will add information as I figure things out. K.O.
> 
> Since 1.9.5 the EventBuilder has been modified. Please consult the documentation
> where the new mevb scheme is explained.
> Test of the mevb with up to 16 frontends (15 different CPUs) has been tested
> successfully. Data rate at the EventBuilder were measured about 50MB/s without the
> logger and ~30MB/s with the logger.

It turns out that TWIST uses a private mevb.c. We will consider upgrading to the
standard one.

K.O.
    Reply  13 Oct 2004, Konstantin Olchanski, Bug Report, TWIST upgrade bombed... 
> The upgrade of TWIST to the latest midas has bombed- we see mevb and mlogger
> crashes during shared memory data buffer accesses. I am looking into it and I
> will add information as I figure things out. K.O.

I traced the buffer memory corruption to a logic error in system.c::ss_shm_open(). If
a .SHM file exists, its size is used as the size of the SysV shared memory
segment, even if the requested shared memory size is bigger, but the caller of
ss_shm_open() thinks it got all the requested memory. Eventually we try to use
the unallocated memory and crash. This is the proposed fix; I will commit it
after I retest the upgrade during the next few days.

[olchansk@send src]$ cvs diff -u system.c
olchansk@midas.psi.ch's password: 
Index: system.c
===================================================================
RCS file: /usr/local/cvsroot/midas/src/system.c,v
retrieving revision 1.83
diff -u -r1.83 system.c
--- system.c    4 Oct 2004 07:04:01 -0000       1.83
+++ system.c    14 Oct 2004 05:51:16 -0000
@@ -544,8 +544,14 @@
       } else {
          /* if file exists, retrieve its size */
          file_size = (INT) ss_file_size(file_name);
-         if (file_size > 0)
+         if (file_size > 0) {
+            if (file_size < size) {
+               cm_msg(MERROR, "ss_shm_open", "Shared memory segment \'%s\' size %d is smaller than requested size %d. Please remove it and try again", file_name, file_size, size);
+               return SS_NO_MEMORY;
+            }
+            
             size = file_size;
+         }
       }
 
       /* get the shared memory, create if not existing */

K.O.
       Reply  14 Oct 2004, Stefan Ritt, Bug Report, TWIST upgrade bombed... 
Agree.

Once you have made the modification, please check the following situation: create a fresh
ODB with increased size ("odbedit -s 2000000" for example). Then check that the
other clients "adopt" this increased size. Note that some experiments need a
bigger ODB, and I don't want to have them recompile all clients; that's why the
code in ss_shm_open() can attach to a *larger* shared memory. However, it should
not matter to the process, since the ODB (or SYSTEM) shared memory size is
stored in pheader->key_size and pheader->data_size of each participating
process, so they should never write beyond the limits defined in that header.
The size passed to ss_shm_open() is only a "hint" used if the shared memory does not
exist; it is not used anywhere later in the code.
    Reply  14 Oct 2004, Konstantin Olchanski, Bug Report, TWIST upgrade bombed... 
> The upgrade of TWIST to the latest midas has bombed- we see mevb and mlogger
> crashes during shared memory data buffer accesses. I am looking into it and I
> will add information as I figure things out. K.O.

On second try, it looks like we are in business- the first try did not work
because of two mistakes:

1) I did not delete *all* old .SHM files (.ODB.SHM, .SYSTEM.SHM, .YBUF1.SHM,
.YBUF2.SHM). I deleted ODB.SHM, so odb worked, but forgot about the data buffers
SYSTEM.SHM & co and ended up with segmentation faults and core dumps in the buffer
management code caused by a mismatch of the old-midas buffers and new-midas code.
2) while debugging these core dumps, I made an error in my test code, so even
after I deleted the old data buffers, things still did not work. Talk about
over-debugging a problem...

K.O.
Entry  13 Oct 2004, Konstantin Olchanski, Suggestion, No al_clear_alarm()? 
We have al_trigger_alarm(), but no matching al_clear_alarm(), and I need it to
clear my alarm once the alarm condition no longer exists. Any objections if I
add this function? K.O.
    Reply  13 Oct 2004, Stefan Ritt, Suggestion, No al_clear_alarm()? 
> We have al_trigger_alarm(), but no matching al_clear_alarm(), and I need it to
> clear my alarm once the alarm condition no longer exists. Any objections if I
> add this function? K.O.

The idea is that once an alarm has been triggered, it stays until the user
acknowledges it, even if the alarm condition has disappeared. Through mhttpd,
the user can press the "Reset" button, which then executes al_reset_alarm().
However, it is possible to call al_reset_alarm() directly from user code to
achieve the same thing.
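
For illustration, a minimal sketch (not from this thread) of triggering and clearing an alarm from user
code; the alarm name and condition are placeholders, and the exact argument list of al_trigger_alarm()
may differ between MIDAS versions:

/* hypothetical periodic check in a frontend or analyzer */
if (hv_current > hv_limit)
   al_trigger_alarm("HV overcurrent", "HV current too high", "Alarm",
                    "", AT_INTERNAL);
else
   al_reset_alarm("HV overcurrent");   /* clear it once the condition is gone */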
       Reply  13 Oct 2004, Konstantin Olchanski, Suggestion, No al_clear_alarm()? 
> > We have al_trigger_alarm(), but no matching al_clear_alarm(), and I need it to
> > clear my alarm once the alarm condition no longer exists. Any objections if I
> > add this function? K.O.
> 
> call al_reset_alarm()

Thanks. I must be quite blind, as I did not see al_reset_alarm() in midas.h. I see it
now. Thanks.

K.O.
Entry  13 Oct 2004, Konstantin Olchanski, Bug Report, silly odbedit "rename Display xxx/yyy" 
odbedit command "rename Display xxx/yyy" creates a key named "xxx/yyy" (yes,
with a slash in the name) and this key cannot be deleted or renamed...
K.O.
    Reply  13 Oct 2004, Stefan Ritt, Bug Report, silly odbedit "rename Display xxx/yyy" 
> odbedit command "rename Display xxx/yyy" creates a key named "xxx/yyy" (yes,
> with a slash in the name) and this key cannot be deleted or renamed...
> K.O.

"rename" is "rename", not "mv" under Unix. If you want this functionality, put it
in and don't complain!
Entry  13 Oct 2004, Konstantin Olchanski, Bug Report, db_paste: found string exceeding MAX_STRING_LENGTH 
I am updating TWIST to the latest MIDAS and when I load a saved .odb file, I get
these messages. The message text ought to say where the problem is and which strings it does not like.
K.O.



[twistonl@midtwist ~/online]$ odbedit
Please define environment variable 'MIDASSYS'
pointing to the midas installation directory.
[local:twist:S]/>load /twist/data_onl/current/run17548.odb
[odb.c:5600:db_paste] found string exceeding MAX_STRING_LENGTH
[odb.c:5600:db_paste] found string exceeding MAX_STRING_LENGTH
[odb.c:5600:db_paste] found string exceeding MAX_STRING_LENGTH
    Reply  13 Oct 2004, Stefan Ritt, Bug Report, db_paste: found string exceeding MAX_STRING_LENGTH 
Can you attach 

/twist/data_onl/current/run17548.odb

so I can reproduce the problem?
Entry  29 Sep 2004, Stefan Ritt, Info, Increased number of clients in midas.h, important! 
Due to several requests, some limitations such as the maximal number of clients attached to the ODB have
been increased in midas.h and committed to CVS. It is important to note that clients compiled
with the old limits cannot coexist with clients compiled with the new limits. You will get
ODB corruption notifications, everything will crash, and you will wonder where this comes from.

So once you CVS update midas.h, revision 1.139, please make sure to recompile *ALL* your
midas applications with the new midas.h.

Stefan
    Reply  03 Oct 2004, Konstantin Olchanski, Info, Increased number of clients in midas.h, important! 
> It is important to note that clients compiled
> with the old limits cannot coexist with clients compiled with the new limits. You will get
> ODB corruption notifications and everything will crash, and you wonder where this comes from.
> 
> So once you CVS update midas.h, revision 1.139, please make sure to recompile *ALL* your
> midas applications with the new midas.h.

Stefan, to avoid confusion from crashes caused by incompatible ODBs, would it be possible to add a "version number" to the ODB, together with a check and an error message 
saying "oops... incompatible ODB, please rebuild your programs"? We tend to have different versions of midas floating around and users have old executables stashed away, 
and all this makes it rather difficult to manually keep track of which ODB is compatible with which midas.

K.O.
       Reply  03 Oct 2004, Stefan Ritt, Info, Increased number of clients in midas.h, important! 
> Stefan, to avoid confusion from crashes caused by incompatible ODBs would it be possible to add a "version number" to ODB,
> together with a check and an error message saying "oops... incompatible ODB, please rebuild your programs"? We tend to have
> different versions of midas floating around and users have old executables stashed away, and all this makes it rather
> difficult to manually keep track on what ODB is compatible with what midas.

I fully agree that a version number in ODB is a good thing, and I certainly will put one there, but this won't help for old
applications. If I add new code which checks in cm_connect_experiment() whether the version number matches, this will only help new
applications connecting to old ODBs. If old applications (prior to the introduction of the version number) connect to a new ODB, they
will still crash.

However, we are planning to make a new release 1.9.5 soon (next week), so we can tell people not to "mix" 1.9.5 with pre-1.9.5
programs.
          Reply  03 Oct 2004, Konstantin Olchanski, Info, Increased number of clients in midas.h, important! 
> However, we are planning to make a new release 1.9.5 soon (next week), so can can people tell not to "mix" 1.9.5 with pre-1.9.5 programs.

Right. We cannot fix the past, but we should fix the future. BTW, "do not mix versions" is hard to enforce and mismatches did, do and
will happen. For one thing, looking at a given midas-using executable, how do I tell what version of midas it has inside?

K.O.
             Reply  04 Oct 2004, Stefan Ritt, Info, Increased number of clients in midas.h, important! 
> Right. We cannot fix the past, but we should fix the future. BTW, "do not mix versions" is hard to enforce and mismatches did, do and
> will happen

For remote connections (through mserver), there is already a version check. If the minor version differs, you get a warning; if the major
versions differ (1.>>9<<.4), the client won't start. So at least for remote connections you get a clue.

> For one thing, looking at a given midas-using executable, how do I tell what version of midas it has inside?

There is a function cm_get_version() returning the version. As for the executable, all you can do is a

strings <executable> | grep 1.9
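
For illustration, a one-line sketch (an assumption, not code from this thread) of printing that version
string from inside a client, e.g. behind a command-line option:

/* print the version of the midas library this client was linked against */
printf("midas version: %s\n", cm_get_version());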
                Reply  08 Oct 2004, chris pearson, Info, Increased number of clients in midas.h, important! 
> > For one thing, looking at a given midas-using executable, how do I tell what version of midas it has inside?
> 
> Ther is a function cm_get_version() returning the version. As for the executable, all you can do is a
> 
> strings <executable> | grep 1.9

   A lot of programs have a command-line option, such as "--version", which prints the program version number and then exits.  As well as
the program version number, the version number of the midas library the program is linked with could also be printed (there can be more
than one libmidas.so on a system, and this would show which one is currently being used).

   Something I would find useful would be for the version number to identify precisely which version you have, i.e. not to have different
versions of midas given the same version number.  I've had problems earlier this year due to midas-1.9.3 changing several times between
January and July, while keeping the same number.  I think if "in-between" versions of midas are to be made available, they should contain a
revision number or date or something in the version number to identify them.

> For remote connections (through mserver), there is already a version check. If the minor version differs, you get a warning, if the major
> versions differ (1.>>9<<.4), the client won't start. So at least for remote connection you get a clue.

   Safety measures like these can sometimes get in the way, if you know what you're doing.  So unless there is absolutely no possibility of
success, I think checks such as this one should be overrideable (by a client option).

Chris
                   Reply  08 Oct 2004, Konstantin Olchanski, Info, Increased number of clients in midas.h, important! 
>    A lot of programs have a commandline option, such as "--version", where they return the program version number then exit.  As well as the
> program version number, the version number of the midas library it's linked with could also be returned (There can be more than one
> libmidas.so on a system and this would show which one was currently being linked)

This would solve the versioning problem for midas built from versioned tarballs, and I am considering a similar scheme for midas installed from
RPMs. But what do I do for midas built from CVS?!? K.O.
Entry  03 Oct 2004, Konstantin Olchanski, Info, mscb usb support for macosx 
After a felicitous confluence of stellar bodies (Stefan, myself, some MSCB hardware
and a Mac laptop all in the same room for a few days), I wrote some MacOSX
code to support the MSCB-USB dongle using the native IOKit USB API. During testing,
I was able to communicate with an MSCB high-voltage regulator module. I am now
committing this code to CVS, warts and all (we can clean it up when somebody actually
uses it). I tested compilation on Linux (with libusb) and MacOSX (native
IOKit; MacOSX+libusb is possible but untested). Win32 should be unaffected by my changes,
but I could not test it.
K.O.
Entry  03 Oct 2004, Stefan Ritt, Info, Introduction of new transition scheme 
A new transition scheme has been implemented and committed. Previously, one had the
possibility to register for PRE/POST transitions, which was necessary in order to first
stop the frontends and then stop the logger to close the data file. While this scheme
has long proven successful, it was concluded that three levels
(PRESTOP/STOP/POSTSTOP, for example) are not sufficient in some cases. Therefore,
a true sequence-based scheme has been introduced, implemented and committed.

The PRE/POST transitions have been removed and an extra parameter "sequence_number"
has been added to cm_register_transition. If clients register with different
sequence numbers, their RPC transition functions are executed according to their
sequence number, smaller numbers being executed before larger numbers.

The frontends register at sequence number 500, for example, while the logger
registers with 200 for start and 800 for stop, making sure it is called after the
frontend(s) when stopping a run. The default numbers can be changed from within
the user code with the new function cm_set_transition_sequence(). This way, it is
possible, for example, to have all frontends called in a certain sequence
when starting and stopping runs.

The modification will (hopefully) not have any influence on existing experiments,
as long as they don't call cm_register_transition directly. If they do, however, one has
to add the additional parameter to this function.
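
As an illustration, a minimal sketch (not part of the original announcement; exact signatures may differ
between versions) of registering a stop-transition handler with an explicit sequence number:

/* transition callback, called when a run is stopped */
INT tr_stop(INT run_number, char *error)
{
   /* flush buffers, close files, ... */
   return CM_SUCCESS;
}

/* in frontend_init() or main(): register at sequence number 500,
   i.e. before the logger (800) when stopping a run */
cm_register_transition(TR_STOP, tr_stop, 500);

/* the default can later be changed from user code */
cm_set_transition_sequence(TR_STOP, 600);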
Entry  28 Sep 2004, Piotr Zolnierczuk, Forum, MIDAS/MVME167/Linux 
Hi,
 has anyone tried running a midas frontend on Linux running
on a Motorola MVME167 embedded CPU?
I have seen people running Linux on an MVME167
(http://www.sleepie.demon.co.uk/linuxvme/)
so in principle this can be done.

The reason I am asking is that we have a lot of them in house
and we would like to avoid paying for VxWorks
(I have successfully run Midas on an mvme167/VxWorks node).

Or maybe someone has come up with a much better solution
[short of dumping the mv167 into a sewer :)]

Piotr
Entry  21 Sep 2004, Konstantin Olchanski, , ODB-EPICS gateway 
At TRIUMF, we use several different versions of code to interface MIDAS and
EPICS (http://www.aps.anl.gov/epics). Now that we more or less understand
our needs, I propose this design for a simplified "EPICS" MIDAS frontend. I
would like to keep this new front end in the MIDAS CVS repository, possibly
replacing the existing EPICS frontend in examples/epics.

The basic idea is to provide an ODB-driven bi-directional gateway between
EPICS and ODB with this functionality: periodically read EPICS data and save
it in ODB, optionally generate MIDAS events with EPICS data; for writing
data to EPICS, use hotlinks- if the user changes "write" variables in ODB,
the changes are sent to EPICS.

1) ODB structure
   /equipment/epicsgw/
     common/...
     statistics/...
     variables/
       epics[...] <--- EPICS->ODB data (double[])
       write[...] <--- ODB->EPICS data (double[])
     settings/
       num epics  <--- number of epics variables (int)
       num write  <--- number of write variables (int)
       names epics <--- human-readable names for EPICS variables (string[])
       names write <--- human-readable names for write variables (string[])
       chans epics <--- EPICS channels for epics-read data (string[])
       chans write <--- EPICS channels for epics-write data (string[])
       period      <--- EPICS read period in milliseconds (int[])
       enable epics <-- enable (y/n) epics-read (bool[])
       enable write <-- enable (y/n) epics-write (bool[])
       enable events <- enable event generation (bool)

2) EPICS to ODB data path: periodically read each enabled "epics" variable
and write the data values to ODB. Other front ends can hotlink the "epics"
variables to receive updated epics data.

3) ODB to EPICS data path: monitor the hotlink to ".../variables/write". If
data changes, send the changes to EPICS. At startup, write all "write"
variables to EPICS.

4) event generation: TBD.

5) error handling: TBD.
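
For illustration, a minimal sketch (not part of this proposal; the array size and key path are
placeholders) of the ODB hotlink described in item 3, using db_open_record():

static double write_vars[16];

/* dispatcher: called by the ODB whenever .../Variables/Write changes */
static void write_dispatcher(INT hDB, INT hKey, void *info)
{
   /* forward the new contents of write_vars[] to EPICS here */
}

INT setup_write_hotlink(void)
{
   HNDLE hDB, hKey;
   cm_get_experiment_database(&hDB, NULL);
   db_find_key(hDB, 0, "/Equipment/epicsgw/Variables/Write", &hKey);
   return db_open_record(hDB, hKey, write_vars, sizeof(write_vars),
                         MODE_READ, write_dispatcher, NULL);
}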

K.O.
    Reply  21 Sep 2004, Stefan Ritt, , ODB-EPICS gateway 
The easiest way to achieve this is to write a new class driver, probably derived
from the multi.c class driver. One just has to rename all "output" to "write"
(or better "ODB2EPICS") and all "input" to "EPICS2ODB". The multi class driver
already handles a factor/offset for each channel (which could be 1/0 of course),
a threshold to update the ODB/EPICS only when a value changes significantly,
retrieval of labels from the bus driver (EPICS labels -> ODB settings), automatic
event generation, and error handling. So it would be a good starting point.

What one gets from the class driver in the ODB is:

  /equipment/<name>/
     variables/
        Input[]     <--- read from the bus driver (float)
        Output[]    <--- written to the bus driver (float)
     settings/
        Names Input[]        <--- human readable names
        Names Output[]       <--- human readable names
        Update Threshold[]
        Input Offset[]
        Input Factor[]
        Output Offset[]
        Output Factor[]
        Devices/
           Input/
              DD/   <--- parameters for Device Driver
                 ... Epics addresses, flags etc.
              BD/   <--- parameters for Bus Driver
           Output/
        
So if one uses the standard mfe.c code together with the multi.c class driver
and the epics_ca.c device driver, all that is left is the following:

- replace cd_gen.c by multi.c in the examples/epics directory
- break down the already existing flags into enable epics/write/events
- maybe add the EPICS read period

The last two things should be done in the epics_ca.c device driver, so one can
use the multi.c class driver without any change. Event generation and error
handling then come for free.
Entry  31 Aug 2004, Konstantin Olchanski, , midas odb locking 
One of our experiments is suffering from periodic ODB corruption and I
suspected that there might be a problem with ODB locking. In the last few
days, I finally had time to read the ODB locking code, to write a little
test program and to play with ODB. This is what I found:

1) ODB locking appears to be sound, my test program failed to find any
locking flaws, except for a big problem, described below. Please read on.

2) ODB locking is "unfair". A program "while (1) { lock(); do_stuff();
unlock(); /* no sleep here */ }" would lock out other users of ODB
(including odbedit) for seconds and minutes at a time. I see this as a flaw
in the semop() implementation in the Linux kernel and I cannot think of an
easy way to fix it in our code. (I tested only on RHL9 2.4.20-31.9smp on a
dual CPU machine. 2.6 kernels may work better).

3) presently, we use an infinite timeout waiting for the ODB lock. I suggest
we set the timeout to, say, 5 minutes, to protect against dead (or live)
locks that we saw a few times here at TRIUMF- every ODB client would hang
without any error messages or explanations forever waiting for the ODB lock
that is held by some rogue ODB client stuck in an infinite loop in corrupted
ODB.

4) while reading the locking code in db_{lock,unlock}_database(), I thought
that there was a race condition on the "lock_cnt" variable, until I
realized that this variable is local and there is no race condition. I would
like to add a comment about this in the code.

5) I found a failure mode where db_close_database() erroneously deletes the
lock semaphore. Once the semaphore is deleted, ODB locking silently fails
(in db_lock_database() we do not check for success status of
mutex_wait_for()) and remaining ODB clients operate without locking protection.

This failure happens after ODB undercounts active clients after losing track
of clients removed by "idle timeouts" (and by other checks?). At some point,
db_close_database() decides that there are no more clients left, attempts to
delete the shared memory (this fails because there are still active clients
attached) and deletes the lock semaphore. Afterwards, the remaining "lost"
active clients operate without lock protection. This would tend to happen
while shutting down all clients, a time when they all rush in to delete
themselves from "/system/clients", ensuring ODB corruption.

A quick solution I just coded would not work for mmap()-based shared memory
(I destroy the lock semaphore after the ODB shared memory is destroyed) as
this relies on "shm_nattch" counting feature of System-V shared memories,
absent in the mmap() based shared memories. Since the Windows implementation
uses mmap(), my "solution" is an obvious no-go. Alternatives would be to add
a second semaphore, just for counting active ODB clients (clunky); or never
delete the semaphore in the first place (dirty, and how does one clear it if
it gets stuck in the locked state?).

For now, I would like to add a check to the ss_mutex_waitfor() call in
db_lock_database() and crash if we can't get the mutex. Returning an error
code would be cleaner, but would not work because nobody checks the return
status of db_lock_database(). If we can't get the mutex for (say) 5 minutes, I
think we should crash, too- something is very wrong and it is pointless to
continue waiting.
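
For illustration, a minimal sketch (not the author's actual test program) of the tight lock/unlock loop
described in point (2) above:

#include "midas.h"

int main(void)
{
   HNDLE hDB;
   cm_connect_experiment("", "", "locktest", NULL);
   cm_get_experiment_database(&hDB, NULL);
   while (1) {
      db_lock_database(hDB);
      /* do_stuff(): hold the lock briefly; note there is no sleep here */
      db_unlock_database(hDB);
   }
   /* never reached */
   return 0;
}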

K.O.
    Reply  15 Sep 2004, Konstantin Olchanski, , midas odb locking 
After some discussion with Stefan-

> 1) ODB locking appears to be sound...
> 2) ODB locking is "unfair"

Stefan reminded me that "priority boosting" is the standard solution for this
problem. Since Linux does not appear to implement this, we may try doing it inside
midas, time permitting. "Fairness" behaviour of Win32, BSD and MacOSX may be worth
investigating.

> 3) presently, we use an infinite timeout waiting for the ODB lock.

I will add a timeout of 10 minutes, then shut down the ODB client with an error message.

> 4) in db_{lock,unlock}_database(), [there is no] race condition against the
> "lock_cnt" variable [because it is local].

I will document this.

> 5) I found a failure mode where db_close_database() erroneously deletes the
> lock semaphore. Once the semaphore is deleted, ODB locking silently fails
> (in db_lock_database() we do not check for success status of
> mutex_wait_for()) and remaining ODB clients operate without locking protection.

I will add a check and shut down the ODB client with an error message if the lock
cannot be obtained (the mutex was deleted, the "lock" system call returns an error,
etc).

> [how to decide when the last ODB client disconnected from the shared memory and
> when to delete the lock semaphore?]

We considered using a counting semaphore to count active ODB clients, if counting
semaphores do the right things on all supported systems (Linux, Win32, MacOSX).

K.O.
       Reply  16 Sep 2004, Stefan Ritt, , midas odb locking 
> I will add a timeout of 10 minutes, then shutdown the ODB client with an error message.

I added timeout handling to db_lock_database. It was already present in
ss_mutex_wait_for, so it was just a matter of passing the status up the calling stack.
ODBEdit stops if it cannot obtain a lock after 5 minutes.
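
For illustration, a sketch (an assumption, not the actual committed code) of the pattern described,
i.e. checking the semaphore-wait status instead of ignoring it; the variable names are placeholders:

INT status;

/* inside db_lock_database(): wait at most 5 minutes for the ODB semaphore */
status = ss_mutex_wait_for(odb_mutex, 5 * 60 * 1000);
if (status != SS_SUCCESS) {
   cm_msg(MERROR, "db_lock_database", "cannot obtain ODB lock, aborting");
   abort();
}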
Entry  31 Aug 2004, Konstantin Olchanski, , mlogger crash if using mserver. 
Our users keep making a simple mistake- they set MIDAS_SERVER_HOST in their
environment. Most midas programs do not mind this- they go through the
mserver, inefficient but benign- except for the mlogger, which dumps core
about 10 seconds after starting. This mightily confuses the users-
everything works perfectly, except for the mlogger, (for most users) the
most obscure and magical part of midas. Obviously they can't take data
without the mlogger, and they fail to correlate this crash with editing their
.cshrc file, so we get panic calls at midnight or whenever. And every time,
while debugging midas malfunctions, changes to .cshrc are absolutely the last
place we look. Ouch!

As it turns out, mlogger does crash if it uses the mserver-
log_system_history() calls db_lock_database(), which promptly crashes because
the mlogger is not directly connected to any ODB (its mserver is).

Obviously, running the mlogger via the mserver makes no sense, but we should
warn about this rather than dump core.

I propose this patch to src/mlogger.c::log_system_history():

-   db_lock_database(hDB);
-   db_notify_clients(hDB, hist_log[index].hKeyVar, FALSE);
-   db_unlock_database(hDB);
-
+   if (!rpc_is_remote())
+     {
+       db_lock_database(hDB);
+       db_notify_clients(hDB, hist_log[index].hKeyVar, FALSE);
+       db_unlock_database(hDB);
+     }
+   else
+     {
+       cm_msg(MERROR, "log_system_history", "Warning: mlogger is running remotely via the mserver. This is an unsupported configuration. Please unset MIDAS_SERVER_HOST and restart the mlogger");
+     }

K.O.
    Reply  07 Sep 2004, Stefan Ritt, , mlogger crash if using mserver. 
I ran into that problem myself recently, so it's the right time to fix it (;-).

We have two options: 

a) Make the logger work remotely, even if it's suboptimal and 
b) Make the logger refuse to run remotely. 

I have no case where I need to run the logger remotely, so I would opt for b).
This would mean removing the "-h" command line switch and the evaluation of
MIDAS_SERVER_HOST, or just supplying an empty host string to
cm_connect_experiment().

Let me know if you agree, I can then remove the "-h" option. The patch you
suggested I would apply in addition.

- Stefan
       Reply  15 Sep 2004, Konstantin Olchanski, , mlogger crash if using mserver. 
> I trapped myself into that problem recently so it's the right time to fix it (;-).
> We have two options: 
> a) Make the logger work remotely, even if it's suboptimal and 
> b) Make the logger refuse to run remotely.

After some discussion between Stefan, Pierre and myself, it was decided to disallow
running mlogger remotely via the mserver.

K.O.
Entry  09 Jul 2004, Stefan Ritt, , Introduction of environment variable MIDASSYS 
Starting from midas version 1.9.4 on, the environment variable 'MIDASSYS'
should be defined and point to the installation directory of midas. The
purpose of that is that add-on packages (like the upcoming ROME system) can
find the midas libraries and include files. It is excatly the same as for
ROOT which defines ROOTSYS and should therefore be straight forward. The
libraries should then reside in $MIDASSYS/lib (or %MIDASSYS%\lib under windows).

To remind users about this new variable, a test has been added to odbedit,
which shows a warning when starting odbedit and MIDASSYS is not defined.
    Reply  09 Jul 2004, Piotr Zolnierczuk, , Introduction of environment variable MIDASSYS 
> Starting from midas version 1.9.4 on, the environment variable 'MIDASSYS'
> should be defined and point to the installation directory of midas. The
> purpose of that is that add-on packages (like the upcoming ROME system) can
> find the midas libraries and include files. It is excatly the same as for
> ROOT which defines ROOTSYS and should therefore be straight forward. The
> libraries should then reside in $MIDASSYS/lib (or %MIDASSYS%\lib under windows).
> 
> To remind users about this new variable, a test has been added to odbedit,
> which shows a warning when starting odbedit and MIDASSYS is not defined.

1. Finally! It's about time to do that! 

2. What will the entire structure tree look like?

Here's my suggestion
MIDASSYS=/opt/midas-1.9.4 (for example)   


so the Linux binaries would go to 
MIDASHOST=i386-pc-linux-gnu
$MIDASSYS/$MIDASHOST/bin
$MIDASSYS/$MIDASHOST/lib

the VxWorks binaries
MIDASHOST=m68k-wrs-vxworks
$MIDASSYS/$MIDASHOST/bin
$MIDASSYS/$MIDASHOST/lib

and the shared stuff would go to 
$MIDASSYS/include
$MIDASSYS/share/drivers
$MIDASSYS/share/examples

The Makefile would need to be adjusted (for make install) but that is not
too complicated

What do you think?

Regards
  Piotr
       Reply  09 Jul 2004, Stefan Ritt, , Introduction of environment variable MIDASSYS 
> Here's my suggestion
> MIDASSYS=/opt/midas-1.9.4 (for example)   

I guess we should follow the "standard" as much as possible. MIDASSYS was inspired by
ROOTSYS. Now, where do people usually install ROOT? Is it /opt/root-x.x.x or something
else? Some years ago (when I last did some linux administration) optional
packages were put into /usr/local by default. I guess you have more experience with
today's conventions, so do whatever you think is standard.

> so the Linux binaries would go to 
> MIDASHOST=i386-pc-linux-gnu
> $MIDASSYS/$MIDASHOST/bin
> $MIDASSYS/$MIDASHOST/lib

Does that mean that the path has to be modified to include $MIDASSYS/$MIDASHOST/bin?
If we put a link to /usr/local/bin, the path does not have to be modified. What about
shared libraries? Does ldconfig know about /usr/local/lib, or $MIDASSYS/$MIDASHOST/lib?

> and the shared stuff would go to 
> $MIDASSYS/include
> $MIDASSYS/share/drivers
> $MIDASSYS/share/examples

What about /usr/share? Is that a common place for documentation etc.?

Thanks for your advice.

- Stefan
          Reply  09 Jul 2004, Piotr Zolnierczuk, , Introduction of environment variable MIDASSYS 
> I guess we should follow the "standard" as much as possible. MIDASSYS was inspired by
> ROOTSYS. Now where do people usually install ROOT? Is it /opt/root-x.x.x or something
> else. Some years ago (when I did the last time some linux administration) optional
> packages were put into /usr/local by default. I guess you have more experience with
> today's tradition, so do whatever you thing is standard.
I agree that we should follow the standard. 
I used /opt as an example. 
There are several "schools" as to where to put things; my philosophy is:
/usr/{bin,lib,include}       - std OS packages (RPMS, .deb or whatever your flavor likes)
/usr/local/{bin,lib,include} - make/make install packages
/opt/..                      - additional packages (RPMS, ...) 

But it should be up to the user what $MIDASSYS she/he likes.

> 
> > so the Linux binaries would go to 
> > MIDASHOST=i386-pc-linux-gnu
> > $MIDASSYS/$MIDASHOST/bin
> > $MIDASSYS/$MIDASHOST/lib
> 
> Does that mean that the path has to be modified to include $MIDASSYS/$MIDASHOST/bin?
> If we put a link to /usr/local/bin, the path does not have to be modified. What about
> shared libraries? Does ldconfig know about /usr/local/lib, or $MIDASYS/$MIDASHOST/lib?
The path could/should be modified in users' .bashrc/.tcshrc, or we could provide simple
system-wide script(s) that would do the job.
For years, I've been using such a scheme on my Linux PCs with regard to various
add-on packages (e.g. cern).

Here's an example of my cern.sh that goes into /etc/profile.d on my RedHat Linux PC
#===================================
. /etc/profile.d/.functions
export CERN=/cern
export CERN_LEVEL=pro
addpath $CERN/$CERN_LEVEL/bin
#===================================

As for library path: there are several ways (as with exec path)
a) nice way: modify /etc/ld.so.conf by adding $MIDASSYS/$MIDASHOST/lib
b) modifying LD_LIBRARY_PATH (there are some security issues with it)
c) symlinking to /usr/local/lib


> 
> What about /usr/share? Is that a common place for documentatino etc?
Yes. Check any recent Linux distribution: /usr/share is full of docs, icons, etc.

This is my bias. 

I (obviously) prefer packing things into RPMs, which makes installs/updates
very easy - especially if you are managing several machines.

Cheers
    Piotr
          Reply  09 Jul 2004, John M O'Donnell, , Introduction of environment variable MIDASSYS 
For a long time the "de facto" standard was to spread a package around in many
directories under /usr/local. This proved to be a bad idea, as removing the
package became very difficult.

With POSIX there is a written standard, which says that each package goes in
its own directory under /opt, e.g. /opt/midas. Each package gets to define its
own structure within that directory. One could imagine several versions
installed at the same time, /opt/midas/v1.9.2 and /opt/midas/v1.9.4, each with
a bin, lib, include etc. Following the ROOT example, you could make a link
from /opt/midas/pro to /opt/midas/v1.9.4, so that system files and login files
are easy to maintain. The basic idea is

MIDASSYS=/opt/midas/pro
PATH=$PATH:$MIDASSYS/bin

though a more sophisticated approach is

MIDASSYS=/opt/midas/pro
echo $PATH | grep -q $MIDASSYS || PATH=$PATH:$MIDASSYS/bin

where the assignment line (Bourne shell and bash syntax) ensures
that multiple entries are not added to the PATH even if the script is run more
than once.

POSIX also goes on to say that links from /opt/bin can be made if desired.
I find this useful if a package has only one or two executables and I don't
want to make multiple versions available.

I hope that the POSIX ideas are useful,

John.

> > Here's my suggestion
> > MIDASSYS=/opt/midas-1.9.4 (for example)   
> 
> I guess we should follow the "standard" as much as possible. MIDASSYS was inspired by
> ROOTSYS. Now where do people usually install ROOT? Is it /opt/root-x.x.x or something
> else. Some years ago (when I did the last time some linux administration) optional
> packages were put into /usr/local by default. I guess you have more experience with
> today's tradition, so do whatever you thing is standard.
> 
> > so the Linux binaries would go to 
> > MIDASHOST=i386-pc-linux-gnu
> > $MIDASSYS/$MIDASHOST/bin
> > $MIDASSYS/$MIDASHOST/lib
> 
> Does that mean that the path has to be modified to include $MIDASSYS/$MIDASHOST/bin?
> If we put a link to /usr/local/bin, the path does not have to be modified. What about
> shared libraries? Does ldconfig know about /usr/local/lib, or $MIDASYS/$MIDASHOST/lib?
> 
> > and the shared stuff would go to 
> > $MIDASSYS/include
> > $MIDASSYS/share/drivers
> > $MIDASSYS/share/examples
> 
> What about /usr/share? Is that a common place for documentatino etc?
> 
> Thanks for your advice.
> 
> - Stefan
             Reply  12 Jul 2004, Stefan Ritt, , Introduction of environment variable MIDASSYS 
> With POSIX there is a written standard, which says that each pacakge goes in
> it's own
> directory under /opt. eg. /opt/midas.  Each package gets to define it's own
> structure
> within that directory.  One could imagine several versions installed at the
> same time
> /opt/midas/v1.9.2 and /opt/midas/v1.9.4 each with a bin, lib include etc. 
> Following the
> ROOT example, you could make a link from /opt/midas/pro to
> /opt/midas/v1.9.4, so that
> system files and login files are easy to maintain etc.  The basic idea is
> 
> MIDASSYS=/opt/midas/pro
> PATH=$PATH:$MIDASSYS/bin
> 
> though a more sophisticated approach is
> 
> MIDASSYS=/opt/midas/pro
> echo $PATH | grep -q $MIDASSYS || PATH=$PATH:$MIDASSYS/bin
> 
> where the assignment line (Bourne shell, and BASH shell) ensures
> that multiple entries are not added on the PATH even if the script is more
> than once.

That sounds all very good to me. So can you please sit together (at least John,
Piotr, and Pierre-Andre), discuss a common scheme and propose it officially in
this forum for comments. After a week or so, it should be implemented into the
Makefile and installation scripts. I would also like Paul Knowles to give
it a look, since he volunteered to make the midas RPMs, which also heavily depend
on the chosen directory structure.
       Reply  20 Jul 2004, Konstantin Olchanski, , Introduction of environment variable MIDASSYS 
> > Starting from midas version 1.9.4 on, the environment variable 'MIDASSYS' ...
> 2. What will the entire structure tree look like?
> 
> Here's my suggestion
> MIDASSYS=/opt/midas-1.9.4 (for example)   

Where should MIDAS be installed?

After looking at the LSB and at the FHS, it appears that the standards permit all of:
1) /opt/midas...
2) /usr/{bin,lib,...}
3) /usr/local/{bin,lib,...}

Some handy references:
http://www.pathname.com/fhs/pub/fhs-2.3.html
http://www.linuxbase.org/spec/

The "example LSB-compliant packages" appear to install into /opt/lsb, but I do not see
 any guidance as to where "my" packages should go.

Then, after some googling, I see that IBM "recommends" /opt (see
http://www-106.ibm.com/developerworks/linux/library/l-lsb.html):

begin-quote---
To avoid name space collisions when installing LSB-conforming applications, the
applications belonging to the base operating system or the distribution are to be
installed in /sbin/, /bin/, or /usr/. System administrators can build packages from
source and install them into the /usr/local/ directory. However, third-party packages
of add-on software must be installed in /opt/<package>/, where <package> is the name
that describes a software suite.
end-quote---


K.O.
          Reply  21 Jul 2004, Stefan Ritt, , Introduction of environment variable MIDASSYS 
> Where should MIDAS be installed?

I personally don't have any preference, as long as it's in accordance with "the standard"
(whatever this is). Maybe one should add a flag to the makefile to specify the
installation directory, either /opt or /usr/local, so people then have the choice. I have
seen that in other packages. As for the RPM, I leave the final proposal to the person
writing the spec file (Paul? Piotr? Konstantin?). We should then commonly agree on the
location based on that proposal. The person supplying the RPM will "officially" become the
RPM maintainer and be responsible for maintaining it.

> installed in /sbin/, /bin/, or /usr/. System administrators can build packages from
> source and install them into the /usr/local/ directory. However, third-party packages
> of add-on software must be installed in /opt/<package>/, where <package> is the name
> that describes a software suite.

Well, midas is kind of in the middle. On one hand it's a third-party package (-> /opt),
but it requires some compilation to allow meaningful work (frontend, analyzer). So maybe
the RPM should go to /opt, and if compiled from the TAR ball it should go to /usr/local?
But that means if someone has to maintain a large base of midas machines, he/she always
has to search two locations. On the other hand, one can always do a "cd $MIDASSYS" ...

- Stefan
Entry  05 Dec 2003, Konstantin Olchanski, , HOWTO setup MIDAS ROOT tree analysis 
> root -l
root> TFile *f = new TFile("run00064.root")
root> TTree *t = f->Get("Trigger")
root> t->StartViewer() // look at the ROOT TTree
root> t->MakeSelector() // generates Trigger.h, Trigger.C

edit run.C, the main program:
{
gROOT->Reset();
TFile f("data/run00064.root");
TTree *t = f.Get("Trigger");

TH1D* adc8 = new TH1D("adc8","ADC8",1500,0,1500-1);
TH1D* tdc2 = new TH1D("tdc2","TDC2",1500,0,1500-1);
TH2D* h12 = new TH2D("h2","ADC8 vs TDC2",100,0,1500,100,0,1500);
TH2D* h12cut = new TH2D("h2cut","ADC8 vs TDC2",50,0,1000-1,50,0,1500);

TSelector *s = TSelector::GetSelector("Trigger.C");
t->Process(s);

adc8->Draw();
tdc2->Draw();
h12->Draw();
h12cut->Draw();
}

edit Trigger.C:

Bool_t Trigger::ProcessCut(Int_t entry)
{
  fChain->GetTree()->GetEntry(entry);
  if (entry%100 == 0) printf("entry %d\r",entry);
  return kTRUE;
}

void Trigger::ProcessFill(Int_t entry)
{
  adc8->Fill(ADCS_ADCS[8]);
  tdc2->Fill(TDCS_TDCS[2]);
  h12->Fill(TDCS_TDCS[2],ADCS_ADCS[8]);
  if (ADCS_ADCS[8] > 100)
    h12cut->Fill(TDCS_TDCS[2],ADCS_ADCS[8]);
}

Run the analysis:

root -l
root> .x run.C

K.O.
    Reply  20 Jul 2004, Konstantin Olchanski, , HOWTO setup MIDAS ROOT tree analysis 
Updating the instructions to ROOT version 3.10.2. Example is from TRIUMF-KOPIO
tree analysis.

shell> root -l
root> TFile *f = new TFile("run00064.root")
root> Trigger->MakeSelector("TriggerSelector")  // "Trigger" is the tree name
inside the root file. Generates TriggerSelector.h and TriggerSelector.cpp

= edit run.C, the main program:

{
gROOT->Reset();
TSelector *s = TSelector::GetSelector("TriggerSelector.C");

TChain chain("Trigger"); // "Trigger" is the tree name inside the root files
chain.Add("run03016.root"); // can chain multiple files

TH1D* tdc2 = new TH1D("tdc2","TDC2",1500,0,1500-1);

chain.Process(s,"",500); // process 500 events
//chain.Process(s); // or process all events

tdc2->Draw();
}

= edit TriggerSelector.h:

in the TriggerSelector class members, i.e. "UInt_t TDC1_TDC1[47];" edit the
array size to be bigger than the maximum possible bank size

= edit TriggerSelector.C:

Bool_t TriggerSelector::Process(Int_t entry)
{
  fChain->GetTree()->GetEntry(entry);

  if (entry%100 == 0)
    printf("process %d, nTDC %3d, 0x%08x\n",entry,TDC1_nTDC1,TDC1_TDC1[1]);

  tdc2->Fill(TDC1_nTDC1);
  return kTRUE;
}

= Run the analysis:

shell> root -l
root> .x run.C
 
K.O.
Entry  14 Jul 2004, Piotr Zolnierczuk, , future direction discussion? 
Hi,
  I think that rather than spending too much time on where to
put files and how to define the environment - I am guilty of that myself -
we should perhaps have some discussion on the future of MIDAS.

Are we ready for 2.0? 
Stefan - do you have any ideas/enhancements?

1) For one, I would like to explore memory mapping (mmap()) on Linux
- I've used it once upon a time on DEC OSF/1 and I found it really
nice compared to shared memory.
From a user standpoint it behaves like shared memory but is mapped
to a real file that can easily be "removed" when necessary.
One really annoying thing in MIDAS is the cleanup when it goes
ballistic, which is somewhat tricky.
The question is whether there is any performance penalty associated.

2) Expanding hardware support: 
   a) custom microcontrollers?
   b) more hardware
   c) how about a "standard" Linux device /dev/midas
   for various PCI cards (PCI<->CAMAC) (PCI<->VME)?

3) I have never really seen a midas deployment that uses interrupts. 
I do understand the ease of polling and the fact that these days
CPUs are cheap, but sometimes it is important to use interrupts.
Any examples/experience?

?)

Piotr
    Reply  14 Jul 2004, Piotr Zolnierczuk, , future directions discussion? 
Sorry the previous message got mangled:

Hi, 
I think that rather than spending too much time on where to put files 
and how to define the environment - I am guilty of that myself -  we should 
perhaps have some discussion on the future of MIDAS. 

Are we ready for 2.0? 

Stefan - do you have any ideas/enhancements? 

1) For one, I would like to explore memory mapping (mmap()) on Linux.
I've used it once upon a time on DEC OSF/1 and I found it really nice
compared to shared memory. From a user standpoint it behaves like shared
memory but is mapped to a real file that can easily be "removed" when
necessary. One really annoying thing in MIDAS is that when it goes ballistic
the cleanup is somewhat tricky.
The question is whether there is any performance penalty associated.

2) Expanding hardware support: 
  a) custom microcontrollers? 
  b) other hardware
  c) how about a "standard" Linux device /dev/midas for various 
  PCI cards (PCI<->CAMAC) (PCI<->VME)?

3) I have never really seen a midas deployment that uses interrupts. 
I do understand the ease of polling and the fact that these days CPUs
are cheap, but sometimes it is important to use interrupts.

Any examples/experience? 

?) 

Piotr
    Reply  14 Jul 2004, Stefan Ritt, , future direction discussion? 
I have changed your entry to Non-HTML (easier to reply to...)

Here are some "initial" comments, by no means complete...

> Are we ready for 2.0? 
> Stefan - do you have any ideas/enhancements?

A big thing I see on the horizon is the ROME environment
(http://midas.psi.ch/rome/), so we will move away from PAW to ROOT. Although
the DAQ part will stay untouched, the whole analysis back-end changes,
including some XML configuration and MySQL support. I guess that would justify
a 2.0. I will discuss this at TRIUMF when I come in September and see how useful
ROME is for other users...

> 1) For one I would like to explore memory mapping (mmap()) on Linux
> - I've used it once upon a time on DEC OSF/1 and I found it really
> nice compared to shared memory. 
> From a user standpoint it behaves as a shared memory but is mapped 
> to a real file that can be easily "removed" when neccessary. 
> One really annoying thing in MIDAS is when it goes ballistic
> the cleanup which is somewhat tricky. 
> The question if there is any performance penalty associated

I guess there are no performance penalties, since under the hood both
techniques are handled similarly. The problem is that besides shared memory
you also need semaphores to control exclusive access to the memory, whether you
use the shm() functions or mmap(), so this would only fix half of the problem. I seem
to remember that mmap() was not available on some Ultrix systems or so, but I
guess that's obsolete by now...

> 2) Expanding hardware support: 
>    a) custom microcontrolers?
>    b) more hardware
>    c) how about a "standard" Linux device /dev/midas
>    for various PCI cards (PCI<->CAMAC) (PCI<->VME) 

Well, you cannot develop hardware support for hardware you don't have, so the
policy up to now was that everyone developing some special drivers or hardware
support contributed it to the package. About c), we already have a CAMAC
driver standard, but at the user-space level, so I don't see the benefit of
having standardized kernel-mode drivers. The only difference would be that the
debugging will be harder. The VME standard is there in a rather rough state right
now, but I expect to finish it this fall.

As for a), there is the MSCB system (http://midas.psi.ch/mscb) which has midas
support on the device driver and bus driver level, but I learned that
distributing hardware (or PCB designs if you like) is much harder than sharing
software. 

> 3) I have never really seen a midas deployment that uses interrupts. 
> I do understand the ease of polling and the fact that these days
> CPU's are cheap but sometimes it is important to use interrupts.
> Any examples/experience?

It's not only the "ease" of polling, but also that it's faster (in almost all
cases) and less troublesome. But hey, interrupt support is included in mfe.c,
so if you are fanatic about interrupts, please feel free to use them.
       Reply  15 Jul 2004, Konstantin Olchanski, , future direction discussion? 
> > Are we ready for 2.0? 

I disapprove of version number inflation. Why not go straight for midas version
3000-Pro-Z?

> A big thing along the horizon I see is the ROME environment
> (http://midas.psi.ch/rome/). So we will move away from PAW to ROOT. Although
> the DAQ part will stay untouched, the whole analysis back-end changes,
> including some XML configuration and MySQL support.

I looked at the ROME slides from Pierre, and it seems to suffer badly from the
second-system syndrome (read The Mythical Man-Month).

For us, it is important to get the data into a form where we can process it with
ROOT, and I would prefer if we could concentrate on improving the (embryonic) ROOT
online analysis capabilities.

> > 1) For one I would like to explore memory mapping (mmap())

It is trivial to replace System-V shared memory with mmap(). I am surprised that
it has not been done yet. System-V semaphores are a little bit harder to get rid of.

> > 2) Expanding hardware support: 
> >    a) custom microcontrolers?
> >    b) more hardware
> >    c) how about a "standard" Linux device /dev/midas
> >    for various PCI cards (PCI<->CAMAC) (PCI<->VME) 

We cannot expect MIDAS Authors to provide drivers for all possible hardware.

At best, we can mount an effort to collect existing drivers from all MIDAS users
"out there" and to integrate them into MIDAS.

Even that is highly non-trivial- many drivers use non-portable native hardware
access interfaces (direct bit-banging on PPCs, VMIC library on VMIC Linux
machines, etc). We have already failed to create an efficient generic portable VME
access library.

> As for a), there is the MSCB system (http://midas.psi.ch/mscb)

I never saw the point of having tons of MIDAS code for MSCB hardware that nobody
has and nobody will ever have.

> > 3) I have never really seen a midas deployment that uses interrupts. 
>
> It's not only the "ease" of polling, but also that it's faster (in almost all
> cases) and less troublesome. But hey, interrupt support is included in mfe.c,
> so if you are fanatic about interrupts...

Interrupts are important when one cannot afford chewing up CPU cycles, memory
cycles, PCI cycles and VME cycles on polling.

As I understand, common CAMAC hardware does not generate interrupts- this explains
lack of examples and lack of interrupt use. At TRIUMF, our new VME hardware
supports interrupts and I have a VMIC-based setup where I can (and intend to) test
MIDAS support of interrupts.

K.O.
Entry  15 Jul 2004, Stefan Ritt, , Severe bug in 1.9.4 
Hello midas'ers,

Today I discovered a severe bug in the routine bm_check_buffers(), which
causes the logger to crash when it stops a run because the event limit has been reached.
The funny thing is that this bug has been there since the beginning, but only
recent versions of gcc and libc reveal it.

Since I consider this severe, I fixed it and updated 1.9.4 just now. I did
not go with 1.9.4-1, but maybe in future we should consider patch levels.

So please everybody who uses 1.9.4 and has problems with crashing loggers,
please update to 1.9.4 from today (July 15th, 2004).

- Stefan