  356   27 Feb 2007   Stefan Ritt   Forum   event builder scalability
> Our bottleneck is (a) the CompactPCI backplane reading data from waveform digitizers
> to the frontend CPUs and (b) CPU power on the frontend CPUs to analyze the waveforms.

I forgot to mention that our front-ends at MEG are 2.8 GHz dual Xeons with Hyperthreading.
This gives four "virtual" CPU cores, which are really necessary for waveform calibration and
analysis. This makes use of the new multi-threading feature in the midas front-end. I actually
run seven threads (one VME readout thread, four calibration threads, one encoding thread and
the main thread sending data to the backend). This speeds up data taking by a factor of four
compared to a single thread. So if one plans for waveform analysis in the frontend to
reduce the data, I would recommend a box with dual quad cores.
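
(As an illustration only - this is a generic sketch, not the actual MEG front-end or the midas
multi-threading code - the pattern behind "one readout thread plus four calibration threads" is a
producer filling a queue and several workers draining it:)

#include <pthread.h>
#include <stdio.h>

#define QSIZE    16      /* events buffered between readout and calibration */
#define NWORKERS  4      /* number of calibration threads                   */

static int queue[QSIZE];
static int q_head, q_tail, q_count, done;
static pthread_mutex_t mtx       = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

static void *readout(void *arg)            /* stands in for the VME readout thread */
{
   (void) arg;
   for (int ev = 0; ev < 1000; ev++) {
      pthread_mutex_lock(&mtx);
      while (q_count == QSIZE)
         pthread_cond_wait(&not_full, &mtx);
      queue[q_tail] = ev;                   /* in reality: a pointer to a waveform block */
      q_tail = (q_tail + 1) % QSIZE;
      q_count++;
      pthread_cond_signal(&not_empty);
      pthread_mutex_unlock(&mtx);
   }
   pthread_mutex_lock(&mtx);
   done = 1;
   pthread_cond_broadcast(&not_empty);
   pthread_mutex_unlock(&mtx);
   return NULL;
}

static void *calibrate(void *arg)          /* stands in for one calibration thread */
{
   (void) arg;
   for (;;) {
      pthread_mutex_lock(&mtx);
      while (q_count == 0 && !done)
         pthread_cond_wait(&not_empty, &mtx);
      if (q_count == 0 && done) {
         pthread_mutex_unlock(&mtx);
         return NULL;
      }
      int ev = queue[q_head];
      q_head = (q_head + 1) % QSIZE;
      q_count--;
      pthread_cond_signal(&not_full);
      pthread_mutex_unlock(&mtx);
      /* ... calibrate/analyze the waveforms of event 'ev' here ... */
   }
}

int main(void)
{
   pthread_t prod, work[NWORKERS];
   pthread_create(&prod, NULL, readout, NULL);
   for (int i = 0; i < NWORKERS; i++)
      pthread_create(&work[i], NULL, calibrate, NULL);
   pthread_join(prod, NULL);
   for (int i = 0; i < NWORKERS; i++)
      pthread_join(work[i], NULL);
   printf("all events processed\n");
   return 0;
}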
  357   02 Mar 2007   Kevin Lynch   Forum   event builder scalability
> Hi there:
> I have a question: is there anybody out there running MIDAS with an event builder
> that assembles events from more than just a few front-ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
> 
> Cheers
>  Piotr

Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate.  If I'm remembering
correctly, the event builder handles about 30-40 MB/s.  You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details.  Volodya solved a significant number of throughput-related
bottlenecks in the year leading up to our 2006 run.
  358   03 Mar 2007   Piotr Zolnierczuk   Forum   event builder scalability
Hi all,
thank you for all responses. 

It seems that there's no problem running MIDAS with an event builder assembling
data from ~10 front-ends. How about ~100? One possible solution would be a
multi-tiered architecture.

The reason I am asking is that we are in the process of designing an Ethernet-based
DAQ system with front-ends running on embedded computers (Linux/ARM
CPU/Xilinx FPGA), and MIDAS is one of my options as a DAQ framework.
I am open to advice/suggestions.

Thanks again
  Piotr
  359   03 Mar 2007   Stefan Ritt   Forum   event builder scalability
> It seems that there's no problem running MIDAS with an event builder assembling
> data from ~10 front-ends. How about ~100? One possible solution would be a
> multi-tiered architecture. 
> 
> The reason I am asking is that we are in the process of designing an Ethernet-based
> DAQ system with front-ends running on embedded computers (Linux/ARM
> CPU/Xilinx FPGA), and MIDAS is one of my options as a DAQ framework.
> I am open to advice/suggestions.

The event builder is a standalone application, not part of the "midas core". It is a
dedicated process that receives data from N producers and combines the fragments into
events based on their serial numbers. If it becomes a bottleneck, it can simply be
redesigned and optimized. I have recently had good experience with multi-threaded
applications running on multi-core CPUs. Implementing your multi-tiered architecture
as a multi-threaded event builder, where each of ten threads receives data from ten
front-ends, combines them and passes them to a "collector thread", would make sense
to me. Between threads you can pass data at many GB/s, as compared to an
Ethernet-based architecture. I recently implemented the rb_xxx functions inside
midas.c, which let you pass data between threads on a zero-copy basis.
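
(For illustration, a minimal sketch of how two threads can share events via such a ring buffer.
The exact rb_xxx prototypes and return codes should be checked against midas.h;
read_one_event_into() and process_event() are hypothetical placeholders:)

#include "midas.h"

#define MAX_EVENT_SIZE (64*1024)

int  read_one_event_into(void *dest);        /* hypothetical readout routine     */
void process_event(EVENT_HEADER *pevent);    /* hypothetical calibration routine */

static int rbh;                              /* ring buffer handle, shared by both threads */

void create_ring_buffer(void)
{
   /* total buffer size, maximum size of a single event */
   rb_create(10 * MAX_EVENT_SIZE, MAX_EVENT_SIZE, &rbh);
}

void producer_thread(void)
{
   void *wp;
   for (;;) {
      /* wait (up to 1 s) for free space, then build the event directly in the buffer */
      if (rb_get_wp(rbh, &wp, 1000) != DB_SUCCESS)
         continue;
      int size = read_one_event_into(wp);
      rb_increment_wp(rbh, size);            /* event becomes visible to the consumer */
   }
}

void consumer_thread(void)
{
   void *rp;
   for (;;) {
      /* wait (up to 1 s) for the next event, process it in place, then release it */
      if (rb_get_rp(rbh, &rp, 1000) != DB_SUCCESS)
         continue;
      EVENT_HEADER *pevent = (EVENT_HEADER *) rp;
      process_event(pevent);
      rb_increment_rp(rbh, sizeof(EVENT_HEADER) + pevent->data_size);
   }
}

No memcpy is needed anywhere: both threads work directly on the buffer memory, which is what
"zero-copy" means here.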

Inside the core functions of midas there are no limitations whatsoever. All
counters etc. are 32-bit, so you can run 2^32 data consumers etc. You will first
hit the OS process limit. What I'm more concerned about is your network bandwidth. If
you run 100 front-ends, each sending more than 1 MB/s, you would hit the 1 Gbit limit
of your network card. If you add more network interfaces, you will hit the disk
I/O limit, which is around 100-200 MB/s even on larger RAID1 disk arrays (unless
you do data compression during event building). 

Another limit I see is the run transition. On each start/stop of a run, the
process which wants to start/stop the run has to contact all producers via a TCP
connection. Opening 100 TCP connections will take maybe 10-30 seconds, which is not
very convenient. A multi-threaded approach would help, but this is not (yet)
implemented; maybe you would have to do it yourself.

Another approach would be to put the event building "in front of midas": all
your front-ends run a specific protocol outside of midas. They send their data to
a collecting process which acts as a single front-end to midas. So in the midas
framework you see only a single front-end, which gets its data not from hardware,
but from 100 other nodes. This way you can optimize the protocol between your
front-end nodes and the collector process for your application. Run transitions
can be done through multicast UDP messages, for example, which will even work with
1000 front-ends. But you have to implement that yourself.
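
(A bare-bones sketch of the multicast idea in plain C - this is not midas code; the group address,
port and message format are invented for the example, and on top of unreliable UDP you would still
need some acknowledgement scheme:)

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MCAST_GROUP "239.1.2.3"   /* example multicast group */
#define MCAST_PORT  5555

/* master side: one datagram reaches all subscribed front-ends */
int send_run_transition(const char *cmd, int run_number)
{
   int s = socket(AF_INET, SOCK_DGRAM, 0);
   if (s < 0)
      return -1;

   struct sockaddr_in addr;
   memset(&addr, 0, sizeof(addr));
   addr.sin_family      = AF_INET;
   addr.sin_port        = htons(MCAST_PORT);
   addr.sin_addr.s_addr = inet_addr(MCAST_GROUP);

   char msg[64];
   snprintf(msg, sizeof(msg), "%s %d", cmd, run_number);    /* e.g. "START 1234" */
   int n = sendto(s, msg, strlen(msg), 0, (struct sockaddr *) &addr, sizeof(addr));
   close(s);
   return n < 0 ? -1 : 0;
}

/* front-end side: join the group once, then recvfrom() the "START"/"STOP" messages in a loop */
int open_transition_socket(void)
{
   int s = socket(AF_INET, SOCK_DGRAM, 0);

   struct sockaddr_in addr;
   memset(&addr, 0, sizeof(addr));
   addr.sin_family      = AF_INET;
   addr.sin_port        = htons(MCAST_PORT);
   addr.sin_addr.s_addr = htonl(INADDR_ANY);
   bind(s, (struct sockaddr *) &addr, sizeof(addr));

   struct ip_mreq mreq;
   mreq.imr_multiaddr.s_addr = inet_addr(MCAST_GROUP);
   mreq.imr_interface.s_addr = htonl(INADDR_ANY);
   setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

   return s;
}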

I would start with the first approach: take the out-of-the-box midas and see how
far you get. If you have access to a normal Linux cluster, you can simply run ten
dummy front-ends on each of ten nodes, thus simulating 100 front-ends, and see how
far you get. If the event builder is the bottleneck, do an optimization or
redesign. If the run transitions become your bottleneck, switch to the second method.
Either way you can use the downstream part of midas, like the logger, the
history system, etc., so you would still gain a lot compared to a design from scratch.

Best regards,

  Stefan
  368   10 Apr 2007   Dan Gastler   Forum   Interrupt code for VME?
Hello, 
   Is there any example code for using midas for interrupt-driven data
collection over VME? I am using a Struck SIS3100 PCI/VME setup to connect to my
VME crate.  Thanks,
  -Dan
  369   09 May 2007   Carl Metelko   Forum   Splitting data transfer and control onto different networks
Hi,
   I'm setting up a system with two networks, with the intention of having
control info (ODB, alarms) on 192.168.0.x
and the frontend readout on 192.168.1.x.

Is there any easy way of doing this?
I'm also trying to separate processes onto different machines; is there
any way to not have mserver, mhttpd and (mlogger, mevt) all run on the same
machine?
Thanks,
       Carl Metelko
  370   09 May 2007   Stefan Ritt   Forum   Splitting data transfer and control onto different networks
Hi Carl,

so far I have not experienced any problems running ODB & alarm traffic on the same link as
the readout, since the data usually goes frontend->backend, and all other messages go
backend->frontend. So before you do something complicated, try the easy way first and
check whether you have problems at all. So far I don't know anybody who
has separated the network interfaces, so I have no description for that.

You can however separate processes. The easiest way is to buy a multi-core machine. If
you want to use separate computers, however, note that receiving events over the
network is not very optimized. So you should run the mserver connected to the frontend,
the event builder and mlogger on the same machine. mhttpd can easily live on
another machine; there is not much CPU consumption from it (unless you
plot long history trends). Running mserver, the event builder and mlogger on the
same machine (dual Xeon mainboard) easily gave me 50 MB/s (actually disk
limited), and neither CPU was near 100%. If you put any receiving process (like
the event builder or mlogger or the analyzer) on a separate machine, you might see
a bottleneck on the event receiving side of maybe 10 MB/s or so (I have not really
tried recently).

Best regards,

  Stefan

> Hi,
>    I'm setting up a system with two networks, with the intention of having
> control info (ODB, alarms) on 192.168.0.x
> and the frontend readout on 192.168.1.x.
> 
> Is there any easy way of doing this?
> I'm also trying to separate processes onto different machines; is there
> any way to not have mserver, mhttpd and (mlogger, mevt) all run on the same
> machine?
> Thanks,
>        Carl Metelko
  371   09 May 2007   Konstantin Olchanski   Forum   Splitting data transfer and control onto different networks
> I'm setting up a system with two networks, with the intention of having
> control info (ODB, alarms) on 192.168.0.x
> and the frontend readout on 192.168.1.x

We have some experience with this at TRIUMF: the TWIST experiment runs with its main data-generating
frontends on a private network. It is a supported configuration and it works fine.

We ran into one problem after adding code to the frontends to stop the run upon detecting certain
data errors: stopping a run requires sending RPC transactions to every midas client, so we had to 
add static network routes for routing packets between midas nodes on the private network and midas 
nodes on the normal network.

> I'm also trying to separate processes onto different machines; is there
> any way to not have mserver, mhttpd and (mlogger, mevt) all run on the same machine?

mserver runs on the machine with the ODB shared memory by definition (think of it as "nfs server").

mhttpd typically runs on the machine with the ODB shared memory and until recently it had no code for 
connecting to the mserver. I recently fixed some of it, and now you can run mhttpd in "history mode" 
through the mserver. This is useful for offloading the generation of history plots to another cpu or 
another machine. In our case, we run the "history mhttpd" on the machine that holds the history files.

mlogger could be made to run remotely via the mserver, but presently it will refuse to do so, as it has 
some code that requires direct access to midas shared memory. If data has to be written to a remote 
filesystem, the consensus is that it is more efficient to run mserver locally and let the OS handle remote 
filesystem access (NFS, etc).

All other midas programs should be able to run remotely via the mserver.

K.O.
  375   14 May 2007   Carl Metelko   Forum   Splitting data transfer and control onto different networks
Hi,
   thanks for the advice. We do have dual-core Xeons, so we'll try running
most things on the server. Unless it proves to be a problem, we'll run all
MIDAS signals on one network and NFS etc. on the other.

I do have one more query about running systems like Konstantin's.
What we would like to do is have a 'mirror' server serving multiple
online monitoring machines, so that the load on the server is constant no matter
what the demands on the mirror are.

Is there a way to set this up? Or would it be best to have a remote analyzer
making short (1 min) ROOT files shared with the online monitoring? 
  382   07 Jun 2007   Randolf Pohl   Forum   crash when analyzing multiple runs offline
Hello,

I am having a problem with the root-based analyzer. It crashes when I try to 
analyze multiple runs OFFLINE using the "-i run%05d.mid -o result%05d.root -r 
1 2" feature.

I can reproduce the problem with the example experiment which comes with the 
MIDAS distribution.
Running the analyzer ONLINE works fine: one can start and stop runs one after 
the other, and roody shows the histograms being reset and then filled again, and 
so on.

But OFFLINE, the analyzer crashes when trying to analyze the SECOND run in a 
sequence. So
./analyzer -i run%05d.mid -o result%05d.root -r 1 1   works (only run 1)
./analyzer -i run%05d.mid -o result%05d.root -r 1 3   dies on run 2
Output attached (I added printf's to the "init"-modules, but that's irrelevant 
here)


My own analyzer shows the same effect. There I got the impression the segfault 
happens on the first attempt to Fill/Reset/SetName etc. a histogram in the 2nd 
run. But with the midas example it looks like the analyzer finishes filling 
histos even for run 2, but then dies in eor.

Can you reproduce the problem?

I run MIDAS on an Intel Quadcore, 64 bit SuSE Linux 10.2.
pohl@lamb2:~/midas/examples/root> gcc --version
gcc (GCC) 4.1.2 20061115 (prerelease) (SUSE Linux)

(maybe 4.1.2 "PRERELEASE" is the problem? See message ID 344)

I am using midas rev. 3674 (April 19, 2007), but I have the impression that there 
has not since been a change relevant to this problem. Please correct me if I 
am wrong; then I would try it with Rev HEAD.
(My version already includes the fix for the x86_64 segfault problem of message 
ID 337.)


Best regards,

Randolf
Attachment 1: crash.out
pohl@lamb:~/midas/examples/root> ./analyzer -e exa_root -i run%05d.mid -o /tmp/pohl/test%05d.root -r 1 3

analyzer_init

Root server listening on port 9090...
adc_calib_init
adc_summing_init
scaler_init
Running analyzer offline. Stop with "!"
Set run number 1 in ODB
Load ODB from run 1...
analyzer_init

OK
run00001.mid:777  /tmp/pohl/test00001.root:775  events, 0.00s
Set run number 2 in ODB
Load ODB from run 2...
analyzer_init

OK
run00002.mid:7227  /tmp/pohl/test00002.root:7225  events, 0.04s

 *** Break *** segmentation violation
Using host libthread_db library "/lib64/libthread_db.so.1".
Attaching to program: /proc/10558/exe, process 10558
[Thread debugging using libthread_db enabled]
[New Thread 47688414288800 (LWP 10558)]
[New Thread 1082132800 (LWP 10559)]
0x00002b5f52afae1f in waitpid () from /lib64/libc.so.6
Thread 2 (Thread 1082132800 (LWP 10559)):
#0  0x00002b5f5234d0ab in __accept_nocancel () from /lib64/libpthread.so.0
#1  0x00002b5f4e4cc510 in TUnixSystem::AcceptConnection ()
   from /usr/local/root/lib/root/libCore.so.5.14
#2  0x00002b5f4e4c1592 in TServerSocket::Accept () from /usr/local/root/lib/root/libCore.so.5.14
#3  0x000000000041075f in root_socket_server (arg=<value optimized out>) at src/mana.c:5453
#4  0x00002b5f5188428a in TThread::Function () from /usr/local/root/lib/root/libThread.so.5.14
#5  0x00002b5f5234609e in start_thread () from /lib64/libpthread.so.0
#6  0x00002b5f52b294cd in clone () from /lib64/libc.so.6
#7  0x0000000000000000 in ?? ()

Thread 1 (Thread 47688414288800 (LWP 10558)):
#0  0x00002b5f52afae1f in waitpid () from /lib64/libc.so.6
#1  0x00002b5f52aa3491 in do_system () from /lib64/libc.so.6
#2  0x00002b5f52aa3817 in system () from /lib64/libc.so.6
#3  0x00002b5f4e4d0851 in TUnixSystem::StackTrace ()
   from /usr/local/root/lib/root/libCore.so.5.14
#4  0x00002b5f4e4cfa4a in TUnixSystem::DispatchSignals ()
   from /usr/local/root/lib/root/libCore.so.5.14
#5  <signal handler called>
#6  0x00002b5f52ad5ee5 in free () from /lib64/libc.so.6
#7  0x000000000040c89b in CloseRootOutputFile () at src/mana.c:1489
#8  0x0000000000410b45 in eor (run_number=<value optimized out>, error=<value optimized out>)
    at src/mana.c:1981
#9  0x0000000000412d9b in analyze_run (run_number=2, 
    input_file_name=0x7fff5cafd020 "run00002.mid", output_file_name=<value optimized out>)
    at src/mana.c:4471
#10 0x00000000004130b4 in loop_runs_offline () at src/mana.c:4518
#11 0x0000000000413e05 in main (argc=<value optimized out>, argv=<value optimized out>)
    at src/mana.c:5757
#0  0x00002b5f52afae1f in waitpid () from /lib64/libc.so.6
[midas.c:1592:] cm_disconnect_experiment not called at end of program
  385   08 Jun 2007   Stefan Ritt   Forum   crash when analyzing multiple runs offline
Unfortunately I don't have time right now to debug the problem, but I could see
roughly what it could be. The analyzer crashes inside CloseRootOutputFile:

#5  <signal handler called>
#6  0x00002b5f52ad5ee5 in free () from /lib64/libc.so.6
#7  0x000000000040c89b in CloseRootOutputFile () at src/mana.c:1489

in the line 

    free(tree_struct.event_tree[i].branch);

If a "free" crashes, it might indicate that the memory beyond the allocated space
got corrupted. The branch gets allocated in book_ttree(), once for each
analyze_request[i]. The branch gets filled in write_event_ttree():

      /* fill tree both online and offline */
      if (!exclude_all)
         et->tree->Fill();

Maybe one should put printf debugging statements in these places to see what's
going on.
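
Something along these lines (a sketch only; the exact variables available in those scopes should
be checked in mana.c):

   /* in book_ttree(), after each tree is booked - this is the line whose output
      appears in the follow-up message below */
   printf("book_ttree: tree_struct.n_tree = %d\n", tree_struct.n_tree);

   /* in write_event_ttree(), next to the et->tree->Fill() call */
   printf("write_event_ttree: filling \"%s\", %lld entries so far\n",
          et->tree->GetName(), (long long) et->tree->GetEntries());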
  386   09 Jun 2007   Randolf Pohl   Forum   crash when analyzing multiple runs offline
Hello Stefan,

tree_struct.n_tree keeps counting up from run to run (in book_ttree). This should 
presumably not be the case, since CloseRootOutputFile() frees the trees at eor().

------------------- output ---------------------------
lamb@lamb2:~/midas/root_3705> ./analyzer -e 
exa_root -i /tmp/midas/examples/root/run%05d.mid -o /tmp/midas/run%05d.root -r 1 2
Root server listening on port 9090...
Running analyzer offline. Stop with "!"
book_ttree: tree_struct.n_tree = 1
book_ttree: tree_struct.n_tree = 2
Set run number 1 in ODB
Load ODB from run 1...OK
/tmp/midas/examples/root/run00001.mid:2722  /tmp/midas/run00001.root:2720  events, 
0.21s
book_ttree: tree_struct.n_tree = 3     <<---- !!!!
book_ttree: tree_struct.n_tree = 4
Set run number 2 in ODB
Load ODB from run 2...OK
/tmp/midas/examples/root/run00002.mid:2347  /tmp/midas/run00002.root:2345  events, 
0.18s

 *** Break *** segmentation violation
----------------- \output ----------------------------

Adding this one line fixes the segfault problem for the root example expt.

----------------- code -------------------------
lamb@lamb2:/data/software/midas/midas_3705/src/src> svn diff mana.c
Index: mana.c
===================================================================
--- mana.c      (revision 3705)
+++ mana.c      (working copy)
@@ -1496,6 +1496,7 @@
    /* delete event tree */
    free(tree_struct.event_tree);
    tree_struct.event_tree = NULL;
+   tree_struct.n_tree = 0;
 
    // go to ROOT root directory
    gROOT->cd();
---------------- \code ---------------------------

Please check if this gives the intended behaviour. I am not very familiar with the 
midas internals.

Unfortunately my own analyzer's segfault problem is not solved by this patch. I 
guess I have to keep searching for a bug on my side.....  :-)


Cheers,

Randolf
  387   10 Jun 2007   Stefan Ritt   Forum   crash when analyzing multiple runs offline
> tree_struct.n_tree keeps counting up from run to run (in book_ttree). This should 
> presumably not be the case, since CloseRootOutputFile() frees the trees at eor().

Yes, this is indeed a bug. I applied your change and committed the new code.
  388   11 Jun 2007   Randolf Pohl   Forum   crash when analyzing multiple runs offline
Hello again,

just for the record, in case somebody else runs into the same problem...

I have hunted down "my" segfault problem to the fact that I book histograms not 
in <module>_init, but in <module>_bor. I have to do so, because only in bor do I 
know which histograms to book, as this information comes from the ODB (booking 
only histograms for CAMAC modules which were set to "read" in the ODB). The core 
dump happens on the first access (->Fill, ->SetName,...) of one of these histos 
in the 2nd run analyzed offline ("./analyzer -r n m").

In mana.c:bor (line 1854) it is stated that "all ROOT objects created by user module 
bor() functions go to the output file", and then a gManaOutputFile->cd() is done.
Consequently, the histograms vanish after the file is closed, hence the 
segfault when trying to access them in the 2nd run. (I keep track of existing 
histograms, only booking the missing histos in bor.)

The problem goes away with "gROOT->cd()" in <module>_bor, before fiddling with 
TFolders and booking the histogram.


I do, however, not really understand the intention of having histos booked in bor() go 
only to the file, whereas histos booked in init() go to memory. Could you please 
comment briefly? Maybe I missed the most important point. And what about online 
mode, should this work?


Thanks a lot in advance,

Randolf
  389   11 Jun 2007   Stefan Ritt   Forum   crash when analyzing multiple runs offline
> I have hunted down "my" segfault problem to the fact that I book histograms not 
> in <module>_init, but in <module>_bor. I have to do so, because only in bor do I 
> know which histograms to book, as this information comes from the ODB (booking 
> only histograms for CAMAC modules which were set to "read" in the ODB). The core 
> dump happens on the first access (->Fill, ->SetName,...) of one of these histos 
> in the 2nd run analyzed offline ("./analyzer -r n m").
> 
> In mana.c:bor (line 1854) it is stated that "all ROOT objects created by user module 
> bor() functions go to the output file", and then a gManaOutputFile->cd() is done.
> Consequently, the histograms vanish after the file is closed, hence the 
> segfault when trying to access them in the 2nd run. (I keep track of existing 
> histograms, only booking the missing histos in bor.)
> 
> The problem goes away with "gROOT->cd()" in <module>_bor, before fiddling with 
> TFolders and booking the histogram.

ROOT has the strange concept of a "current working directory", coming from the fact that
ROOT was written by Fortran and PAW people, who were used to having directories and
subdirectories with a persistent state (not really object-oriented style). So one can
set the "current working directory" to the root (=memory) with gROOT->cd(), or to a
subdirectory which will later be written into a file with gManaOutputFile->cd(). If
you do the former, the histograms are created only in memory, while in the latter
case they are also created in memory, but will later be written into the output file
in the routine CloseRootOutputFile(). So if you do a gROOT->cd() in <module>_bor,
these histograms will not be written to file. So I guess your solution is not a real
solution.
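
(A small standalone ROOT macro - not mana.c code, histogram names invented for the example - that
shows the effect of the two cd() calls:)

#include "TFile.h"
#include "TH1D.h"
#include "TROOT.h"

void directory_demo()
{
   TFile *f = new TFile("demo.root", "RECREATE");

   gROOT->cd();                  // current directory = memory
   TH1D *h_mem = new TH1D("h_mem", "memory only", 100, 0, 1);

   f->cd();                      // current directory = the file
   TH1D *h_file = new TH1D("h_file", "attached to the file", 100, 0, 1);

   f->Write();                   // writes h_file, but not h_mem
   f->Close();                   // h_file is deleted together with the file

   h_mem->Fill(0.5);             // still fine: h_mem lives in memory
   // h_file would now be a dangling pointer - the same situation that
   // produces the segfault in the second offline run.
}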

> I do, however, not really understand the intention of having histos booked in bor() go 
> only to the file, whereas histos booked in init() go to memory. Could you please 
> comment briefly? Maybe I missed the most important point. And what about online 
> mode, should this work?

The root output file is opened in bor() and closed in eor(). For a histo to go to the
file, it must be booked after opening the file, that is after bor() in mana.c and
therefore after the gManaOutputFile->cd().

I agree with you that the current scheme is not satisfactory. When running online, you
want to keep the histos between runs. When running offline, you delete and
re-create them for each run. It would be better to create all histos, online and
offline, under gROOT, and just copy them to gManaOutputFile before writing them. I have
to admit that this ROOT code was never really used in a production environment for
offline analysis, so there might be some issues here and there. Some people write
ROOT files directly in the logger and then do a ROOT-only analysis (without the midas
analyzer). Unfortunately I'm busy these days and cannot write any code right
now. But if you feel that something should be modified in mana.c, please send it to me
and I can incorporate it into the standard code.
  390   12 Jun 2007   Randolf Pohl   Forum   crash when analyzing multiple runs offline
Hi

> So I guess your solution is not a real solution.

I was not precise enough about what I do. With the code below, the histograms persist in memory, 
but they are also written to every file:

e.g. in module "trig_tdc":

  TDirectory *savedir = gDirectory;  // will restore this afterwards
  gROOT->cd();     // go to ROOT memory (not the output file)

  // make sure we are in the right "analyzer module folder"
  TDC_Folder = (TFolder *) gROOT->FindObjectAny("trig_tdc");
  gHistoFolderStack->Add((TObject *) TDC_Folder);

  ...(loop over all TDCs, figure out which histos exist, and which need to be booked)

  open_subfolder("raw4208");
  hrTDC = h1_book(....);   // create histo in memory, but it shows up in the file, too.
  close_subfolder(); //raw4208

  // restore gHistoFolderStack (we added a folder when entering routine)
  gHistoFolderStack->Remove(gHistoFolderStack->Last());

  // restore current directory
  savedir->cd();

When deleting histos I do:

     gManaHistosFolder->RecursiveRemove(*pHisto);
    (*pHisto)->Delete();
    (*pHisto) = NULL;  // for my book-keeping of existing histos.

You don't have to clear the histos explicitly between runs; gManaHistosFolder does this 
magic for you.

> But if you feel like something should be modified in mana.c, please send it to me
> and I can incorporate it into the standard code.

No, the code is fine. I just wanted to explain my problem and a solution to it, because 
I thought that somebody might run into the same problem, too. 

Ciao,

Randolf
  395   12 Jul 2007   Konstantin Olchanski   Forum   Midas on a x86_64 - incompatible with x86_32
> We run 64-bit MIDAS on RHEL4 with 64-bit ROOT and everything generally works,
> except for compatibility problems with 32-bit MIDAS.
> 
> The big problem is that 64-bit and 32-bit ODB turned out to be incompatible ...

I have now identified 3 data structures that change size when compiled with "-m64":

EVENT_REQUEST: stores a pointer to a function. The pointer size is 4 bytes with -m32 and 8 bytes with -m64.
This structure is part of an array inside BUFFER_HEADER, resulting in a sizable size mismatch between 32-bit
and 64-bit shared memory data buffers.

The fix is simple: the function pointer is not used anywhere. Replacing it with a "DWORD unused_filler"
makes -m32 and -m64 data buffers compatible. (But it breaks compatibility with previously compiled -m64 midas.)

CHN_SETTINGS and CHN_STATISTICS: apparently -m32 and -m64 GCC have different packing rules, and in -m64
mode 4 bytes of padding are added to these data structures. This size mismatch appears to be benign,
but will result in "size mismatch" complaints from ODB.

The fix is simple: adding "__attribute__ ((__packed__))" to the definition of the data structure makes
the -m64 layout identical to -m32.

The "svn diff" of changes involved is attached below.

The biggest problem here is that making 32-bit ODB and 64-bit ODB compatible requires breaking one or
the other (My proposed changes break the 64-bit version. Alternatively, one could add explicit padding
to these data structures and break the 32-bit ODB).

I think it is important to make 32-bit and 64-bit code compatible: at TRIUMF we have to use a mixed
environment, because our latest host computers all run 64-bit Linux while all our VME processors and all
older machines can only run 32-bit code; this incompatibility causes us weekly headaches.

Any thoughts?

K.O.

(this output of svn diff is doctored for clarity)

ladd00:midas$ svn diff
Index: include/midas.h
===================================================================
--- include/midas.h     (revision 3744)
+++ include/midas.h     (working copy)
-   void (*dispatch) (HNDLE, HNDLE, EVENT_HEADER *, void *);
+   INT unused; // was void (*dispatch) (HNDLE, HNDLE, EVENT_HEADER *, void *);
 } EVENT_REQUEST;
 
--- include/msystem.h   (revision 3744)
+++ include/msystem.h   (working copy)

+#define PACKED __attribute__ ((__packed__))  <--- this goes into midas.h inside the #ifdef "we use GCC"
 
-typedef struct {
+typedef struct PACKED { ... CHN_SETTINGS
 
-typedef struct {
+typedef struct PACKED { ... CHN_STATISTICS
  396   13 Jul 2007   Stefan Ritt   Forum   Midas on a x86_64 - incompatible with x86_32
> The biggest problem here is that making 32-bit ODB and 64-bit ODB compatible requires breaking one or
> the other (My proposed changes break the 64-bit version. Alternatively, one could add explicit padding
> to these data structures and break the 32-bit ODB).
> 
> I think it is important to make 32-bit and 64-bit code compatible: at TRIUMF we have to use a mixed
> environment, because our latest host computers all run 64-bit Linux while all our VME processors and all
> older machines can only run 32-bit code; this incompatibility causes us weekly headaches.
> 
> Any thoughts?

I agree to make 32-bit and 64-bit compatible. In the long run everything will be 64-bit, so I would suggest
breaking the 32-bit ODB and adding some padding where needed, probably with some conditional compiling.
This keeps the native 64-bit packing, which will probably be somewhat optimized for 64-bit
architectures and therefore might be a bit faster in the long run, when most systems are 64-bit. After this
has been implemented and well tested, I would go with an official announcement of the 32-bit break in the ODB
and release a new version, so people can update from a TAR file if necessary. Existing ODBs can be converted
to the new format by exporting them in XML form and importing them again after the upgrade.
  399   12 Aug 2007   Konstantin Olchanski   Forum   Midas on a x86_64 - incompatible with x86_32
> I agree to make 32-bit and 64-bit compatible. In the long run everything will be 64-bit, so I would suggest
> breaking the 32-bit ODB and adding some padding where needed, probably with some conditional compiling.

I now have the patches to implement this. Changes turned out to be minimal:

1) midas.h: remove unused field "dispatch" from EVENT_REQUEST and bump DATABASE_VERSION from 2 to 3
2) msystem.h: add 32-bit padding to CHN_STATISTICS and CHN_SETTINGS

(Pedantic note: the C/C++ languages permit compilers to arbitrarily pad data members inside structures, and one is
not supposed to rely on the specific layout of "struct"s; it could change from day to day depending on
compiler vendor, version, 32/64 bit, optimization level, etc. This is quite silly, but I guess it was the only way
"they" could agree on a standard.)

In practice, compilers are well behaved and one can follow simple rules to stay out of trouble:
1) if all data members are of the same size -> no padding
2) do not use "double" (64-bit) or "short" (16-bit), and make all char[] arrays divisible by 4 -> the size of
everything is 32-bit, see rule 1
3) if you have to use "short", they have to come in pairs to keep everything else aligned to 32-bit
4) if you have to use "double" (or uint64_t), keep them aligned to 64-bit, i.e. struct { int a,b,c; double x; } is
*bad* (4-byte padding may be added between c and x), while struct { int a,b,c,d; double x; } is good (see the
small check after this list).
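
A small standalone check of rule 4 (not midas code):

#include <stdio.h>

struct bad  { int a, b, c;    double x; };  /* -m64: 24 bytes (4 hidden padding bytes), -m32: 20 bytes */
struct good { int a, b, c, d; double x; };  /* 24 bytes in both cases, no hidden padding               */

int main(void)
{
   printf("sizeof(struct bad)  = %d\n", (int) sizeof(struct bad));
   printf("sizeof(struct good) = %d\n", (int) sizeof(struct good));
   return 0;
}

Compiling this with and without -m32 shows exactly the kind of size mismatch discussed above.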

Below is the output of "svn diff include/midas.h include/msystem.h". These changes have been tested on SL4 32-bit and
64-bit, SL5 32/64, F7 32/64 and SL4/ICC (Intel compiler) 32-bit and 64-bit.

The testing was done by adding checks on the sizes of all structs kept in ODB, e.g.
   assert(sizeof(CHN_SETTINGS        ) ==    640); // ODB v3 with padding
   assert(sizeof(CHN_STATISTICS      ) ==     32); // ODB v3 with padding
   ... etc ...

K.O.

ladd03:midas$ svn diff include/midas.h include/msystem.h
Index: include/midas.h
===================================================================
--- include/midas.h     (revision 3798)
+++ include/midas.h     (working copy)
@@ -38,7 +38,7 @@
  *  @{  */

 /* has to be changed whenever binary ODB format changes */
-#define DATABASE_VERSION 2
+#define DATABASE_VERSION 3

 /* MIDAS version number which will be incremented for every release */
 #define MIDAS_VERSION "2.0.0"
@@ -810,8 +810,6 @@
    short int event_id;           /**< event ID                        */
    short int trigger_mask;       /**< trigger mask                    */
    INT sampling_type;            /**< GET_ALL, GET_SOME, GET_FARM     */
-                                 /**< dispatch function */
-   void (*dispatch) (HNDLE, HNDLE, EVENT_HEADER *, void *);
 } EVENT_REQUEST;

 typedef struct {
Index: include/msystem.h
===================================================================
--- include/msystem.h   (revision 3798)
+++ include/msystem.h   (working copy)
@@ -454,6 +454,7 @@
    INT event_id;
    INT trigger_mask;
    DWORD event_limit;
+   INT pad; // FIXME 64-bit "double" should be 64-bit aligned
    double byte_limit;
    double tape_capacity;
    char subdir_format[32];
@@ -465,6 +466,7 @@
    double bytes_written;
    double bytes_written_total;
    INT files_written;
+   INT pad; // FIXME pad data structure to be 64-bit aligned
 } CHN_STATISTICS;

 typedef struct {
ladd03:midas$
  400   20 Aug 2007   Konstantin Olchanski   Forum   Midas on a x86_64 - incompatible with x86_32
> > I agree to make 32-bit and 64-bit compatible. In the long run everything will be 64-bit, so I would suggest
> > breaking the 32-bit ODB and adding some padding where needed, probably with some conditional compiling.
> 
> I now have the patches to implement this. Changes turned out to be minimal:
> 
> 1) midas.h: remove unused field "dispatch" from EVENT_REQUEST and bump DATABASE_VERSION from 2 to 3
> 2) msystem.h: add 32-bit padding to CHN_STATISTICS and CHN_SETTINGS

The padding of CHN_STATISTICS and CHN_SETTINGS is not working right - somehow mhttpd and mlogger keep recreating the
data in ODB and erasing the padding fields. I am looking into this.

K.O.