ID | Date | Author | Topic | Subject
424 | 05 Feb 2008 | qinzeng peng | Forum | rpc timeout, related to event_size and watch dog? need help
Dear all,

I'm trying to write simulation code for MIDAS. What I did is just modify
frontend.c(pp) from the experiment examples and change some parameters in midas.h.

Because my simulation requires about 4.5 MB per event, I increased
MAX_EVENT_SIZE and max_event_size accordingly.

in midas.h :

#define MAX_EVENT_SIZE 0xa00000 //0x400000 /**< maximum event size, now 10 MB (default was 4 MB) */
#define BANKLIST_MAX 640 //64 /**< max # of banks in event */
#define DEFAULT_RPC_TIMEOUT 60000 //10000
#define WATCHDOG_INTERVAL 5000 //1000
#define DEFAULT_WATCHDOG_TIMEOUT 60000 /**< Watchdog */


in frontend.cpp :

BOOL frontend_call_loop = TRUE;
INT max_event_size = 5 * 1024 * 1024;
INT max_event_size_frag = 2* max_event_size;
INT event_buffer_size = 2 * max_event_size;

EQUIPMENT equipment[] = {

{"WFD_SIMU", /* equipment name */
{1, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
#ifdef USE_INT
EQ_INTERRUPT, /* equipment type */
#else
EQ_POLLED, /* equipment type */
#endif
LAM_SOURCE(0, 0xFFFFFF), /* event source crate 0, all stations */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING, // | /* read only when running */
// RO_ODB, /* and update ODB */
5000, /* poll for 5000 ms */
0, /* stop run after this event limit */
0, /* number of sub events */
0, /* don't log history */
"", "", "",},
read_simu_event, /* readout routine */
},
......
}
INT frontend_loop()
{
/* if frontend_call_loop is true, this routine gets called when
the frontend is idle or once between every event */

ss_sleep(100);

return SUCCESS;
}


Compilation is OK, and running mlogger, odbedit and the frontend is OK.
start the run -> no problem (but there is a long wait in the frontend when
starting the run. Before the run begins, the frontend terminal pops up messages
frequently, say every 10 seconds. Once the run starts, the frontend terminal
hangs for a couple of minutes before popping up the next bunch of messages.)

stop the run -> Problem -> rpc timeout

message from odbedit:

[qzpeng@phy2-dhcp140 simu]$ odbedit -s 10000000
12:54:27 [WFD Simu,INFO] Program WFD Simu on host phy2-dhcp140 started
12:54:37 [Logger,INFO] Program Logger on host phy2-dhcp140 started
[local:simu:S]/>start
Run number [1]: 7
Are the above parameters correct? ([y]/n/q):
Starting run #7
Run #7 started
[local:simu:R]/>stop
[midas.c:9231:rpc_client_call,ERROR] rpc timeout, routine = "rc_transition",
host = "phy2-dhcp140.bu.edu"
Error: Unknown error 504 from client 'WFD Simu' on host phy2-dhcp140.bu.edu
[local:simu:R]/>


running messages from the frontend:

[qzpeng@phy2-dhcp140 simu]$ ./frontend
Frontend name : WFD Simu
Event buffer size : 10485760
System max event size : 10485760
User max event size : 5242880
User max frag. size : 10485760
# of events per buffer : 2

Connect to experiment...
OK
Init hardware...
......
423 | 05 Feb 2008 | Denis Bilenko | Info | pymidas 0.6.0 released - python bindings for Midas
Hi!

I have released pymidas - Python binding to Midas.
It includes support for Online Database, Buffer, event
construction and parsing. 

We have used it for a couple of years now here at CMD (http://cmd.inp.nsk.su).
One of the principal DAQ applications here (the slow control frontend) is
written in Python using pymidas.

http://cmd.inp.nsk.su/~bilenko/projects/pymidas/pymidas.html
422 | 05 Feb 2008 | Stefan Ritt | Info | Implementation of relative paths in mhttpd
A major change was made to mhttpd, changing all internal URLs to relative paths.
This allows proxy access to mhttpd via an Apache server, for example, which might
be needed to securely access an experiment from outside the lab through a
firewall. The following settings can be placed into the Apache configuration,
assuming the experiment runs on machine "online1.your.domain" and Apache runs on
a publicly available machine "www.your.domain":

Redirect permanent /online1 http://www.your.domain/online1
ProxyPass /online1/ http://online1.your.domain/

<Location "/online1">
  AuthType Basic
  AuthName ...
  AuthUserFile ...
  Require user ...
</Location>

If the URL http://www.your.domain/online1 is accessed, it gets redirected
(after optional authentication) to http://online1.your.domain. If you click on
the mhttpd history page, for example, mhttpd would normally redirect this to

http://online1.your.domain/HS/

but this is not correct since you want to go through the proxy www.your.domain.
The new relative redirection inside mhttpd now redirects the history page
correctly to

http://www.your.domain/online1/HS/

I had to change many places inside mhttpd to make this work, and I'm not 100%
sure that I covered all occurrences. So if you upgrade to mhttpd revision 4115
and observe errors when accessing some pages, please report them to me.

- Stefan
421 | 05 Feb 2008 | Stefan Ritt | Forum | analyzer crashes at high rates
> I'm using midas to read data from a waveform digitizer at event rates of
> 10-30 kHz. To accomplish this the digitizer is read via block transfers and the
> raw data put into a single MIDAS event.  Thus a MIDAS event can contain up to
> 250 physical events and at maximum 350 kBytes.  In the analyzer modules I had
> been analyzing the first physics event contained in a MIDAS event with no
> problem.  Recently I tried to analyze all the physical events.  At low rates,
> 100 Hz-1 kHz, this was no problem, 1-5 physical events in a MIDAS event.  At
> higher rates, 10-20 kHz, where there are about 40 physical events per MIDAS
> event, the analyzer keeps up for a few seconds and then seg faults with
> "'shared object read from target memory' has disappeared; keeping its symbols".
> Any suggestions as to why the analyzer is crashing would be very helpful.

I personally have never seen this error message. The analyzer is designed such that
it produces "back pressure" if the data rate is higher than the analysis rate and
you have "request all events" on. The only things I can imagine are the following
two issues:

- At higher rates, where you have more than 40 physical events per MIDAS event,
there is some bug in your analysis code which gets exploited only in that case,
maybe some temporary array which is only 35 entries long or something like that
(see the sketch after this list).

- The back pressure mentioned above will slow down the frontend. If your computer
busy logic is not working correctly, you might get more triggers than you can
acquire. Maybe then the data gets screwed up and the analyzer chokes on it.
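
To make the first point concrete, here is a hypothetical fragment of the kind of
analyzer bug meant above; the array size, the variable names and the decode call
are made up for illustration:

/* scratch array sized for the low-rate case (1-5 subevents per MIDAS event) */
#define MAX_SUBEVENTS 35
static double energy[MAX_SUBEVENTS];

for (int i = 0; i < n_subevents; i++)       /* ~40 subevents at 10-20 kHz    */
   energy[i] = decode_subevent(pdata, i);   /* i >= 35 writes past the array */
                                            /* and eventually segfaults      */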

Finding the exact reason is not simple. For sure you have to run the analyzer inside
the debugger, to see exactly where the segfault happens. You then maybe have to
produce some dummy data in the frontend (like always sending the same event) to
disentangle some possible trigger problems from other problems.

Best regards,

  Stefan
420 | 04 Feb 2008 | Robert Pattie | Forum | analyzer crashes at high rates
I'm using midas to read data from a waveform digitizer at event rates of
10-30 kHz. To accomplish this the digitizer is read via block transfers and the
raw data put into a single MIDAS event.  Thus a MIDAS event can contain up to
250 physical events and at maximum 350 kBytes.  In the analyzer modules I had
been analyzing the first physics event contained in a MIDAS event with no
problem.  Recently I tried to analyze all the physical events.  At low rates,
100 Hz-1 kHz, this was no problem, 1-5 physical events in a MIDAS event.  At
higher rates, 10-20 kHz, where there are about 40 physical events per MIDAS
event, the analyzer keeps up for a few seconds and then seg faults with
"'shared object read from target memory' has disappeared; keeping its symbols".
Any suggestions as to why the analyzer is crashing would be very helpful.

Thanks,

Robert     
419 | 07 Jan 2008 | Stefan Ritt | Info | Roll-back for history system added
The midas history system always had the problem that the database can get
corrupted if the disk where the history records (*.hst & *.idx) are stored gets
full. This can happen if a history event can only be written partially to the
almost full disk. If some space is freed up later (by deleting other files), the
writing continues at the old position, leaving the partial event in the
database. In that case the whole history data of the current day cannot be read
because it is corrupted.

To solve the problem, a roll-back system has been implemented in the
hs_write_event() function. If an event cannot be written fully, the history file
is restored to the old state, so the partial event is removed from the end of
the file via truncation. This way only the data which could not be written to
the disk is missing in the history file, but the other data from that day is
still valid and readable. The change has been committed in revision 4107.
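
The idea can be pictured with a minimal sketch; the real implementation lives in
hs_write_event(), and the function and variable names below are illustrative only:

#include <stdio.h>
#include <unistd.h>   /* ftruncate(), fileno() */

/* write one history event; on a partial write (e.g. disk full), roll the
   file back to its previous state so no corrupt event remains */
int write_event_with_rollback(FILE *f, const void *pevent, size_t size)
{
   long start = ftell(f);                 /* position before the event  */
   if (fwrite(pevent, 1, size, f) != size) {
      fflush(f);
      ftruncate(fileno(f), start);        /* truncate the partial event */
      fseek(f, start, SEEK_SET);          /* continue cleanly from here */
      return -1;                          /* caller may retry later     */
   }
   return 0;
}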
418 | 27 Nov 2007 | Stefan Ritt | Info | ODB links to array elements implemented
In revision 4090 I implemented ODB links to individual array elements. Now you
can have for example:

Key name                        Type    #Val  Size  Last Opn Mode Value
---------------------------------------------------------------------------
array                           INT     10    4     2m   0   RWD
                                        [0]             0
                                        [1]             0
                                        [2]             123
                                        [3]             0
                                        [4]             0
                                        [5]             0
                                        [6]             0
                                        [7]             0
                                        [8]             0
                                        [9]             0
element2 -> /array[2]           INT     1     4     3m   0   RWD  123

In this case, the link "element2" points to the third element of "array", but is
treated like a single value. These links are very useful, for example, for the
"Edit on start" parameters, which can now point to individual array elements.
The same is true for the "Links BOR" when the logger writes to a MySQL database.

This modification required major modifications in the ODB. I have carefully
tested the example experiment from the distribution to verify that everything is
fine, but I'm not 100% sure that I covered all possible situations. So if you
update to revision 4090+ and you observe some strange behavior related to links
in the ODB, please report.

The following two new functions are related to this change:

  db_get_link()
  db_get_link_data()

They are counterparts of db_get_key() and db_get_data(), respectively, but
without following links in the ODB. These functions are probably not of much use
outside odbedit and mhttpd, which are supposed to display links explicitly. Most
user applications want to follow links without even knowing that these are links.
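
As an illustration, here is a sketch of creating and reading such a link from C.
The paths match the example above, error checking is omitted, and the use of
db_create_link() as the C counterpart of odbedit's link creation is my assumption:

HNDLE hDB, hKey;
int   value;
INT   size = sizeof(value);

cm_get_experiment_database(&hDB, NULL);

/* create "element2" pointing at the third element of "array" */
db_create_link(hDB, 0, "element2", "/array[2]");

/* a normal read follows the link, i.e. it returns /array[2] (here 123);
   db_get_link()/db_get_link_data() would instead act on the link itself */
db_find_key(hDB, 0, "/element2", &hKey);
db_get_data(hDB, hKey, &value, &size, TID_INT);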
417 | 21 Nov 2007 | Konstantin Olchanski | Forum | ODBv3, second try - Midas on a x86_64 - incompatible with x86_32
These changes to make 32-bit and 64-bit ODB binary compatible with each other are now commited to midas svn, revision 4080.

Starting with this revision, ODB version changes from 2 to 3, breaking binary compatibility with previous releases.

Before upgrading to this revision, save your ODB as an XML file, *and* try to reload it, to catch any potential problems with parsing of the XML file.

Part of this commit are checks for the sizes of important midas data structures
stored in ODB shared memory: if the compiled size does not match the expected
value, binary compatibility is broken and the program will abort, to avoid further
corruption of the ODB shared memory. This feature is only enabled on Linux. It is
expected to trigger only on compiler malfunctions (wrong generated data sizes) and
on accidental or intentional changes to important data structures in midas, to
warn the user that they broke ODB binary compatibility.
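
The check itself amounts to something like the following sketch; the EXPECTED_*
constants are placeholders, not the real values compiled into midas:

#include <assert.h>
#include "midas.h"
#include "msystem.h"

static void check_odb_struct_sizes(void)
{
   /* if a compiler quirk or a header edit changes these sizes, the ODB
      shared memory layout no longer matches and continuing would corrupt
      it, so abort instead */
   assert(sizeof(EVENT_REQUEST)  == EXPECTED_EVENT_REQUEST_SIZE);
   assert(sizeof(CHN_SETTINGS)   == EXPECTED_CHN_SETTINGS_SIZE);
   assert(sizeof(CHN_STATISTICS) == EXPECTED_CHN_STATISTICS_SIZE);
}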

K.O.

> > > > I agree to make 32-bit and 64-bit compatible. In the long run, everything will be 64-bit, so I would suggest
> > > > in breaking the 32-bit ODB, add some padding there where needed, probably with some conditional compiling.
> > > 1) midas.h: remove unused field "dispatch" from EVENT_REQUEST and bump DATABASE_VERSION from 2 to 3
> > > 2) msystem.h: add 32-bit padding to CHN_STATISTICS and CHN_SETTINGS
> 
> I am now trying a different solution to fixing the issue of CHN_STATISTICS and CHN_SETTINGS changing size.
> 
> 1) midas.h: (same as before) remove unused field "dispatch" from EVENT_REQUEST and bump DATABASE_VERSION from 2 to 3
> 2) msystem.h: in CHN_STATISTICS and CHN_SETTINGS change type of "event_limit" and "files_written" from int to "double".
> 
> Below are the latest ODBv3 meta patches:
> 
> ladd03:midas$ svn diff
> Index: include/midas.h
> ===================================================================
> --- include/midas.h     (revision 3844)
> +++ include/midas.h     (working copy)
>  /* has to be changed whenever binary ODB format changes */
> -#define DATABASE_VERSION 2
> +#define DATABASE_VERSION 3
> .........
>     short int trigger_mask;       /**< trigger mask                    */
>     INT sampling_type;            /**< GET_ALL, GET_SOME, GET_FARM     */
> -                                 /**< dispatch function */
> -   void (*dispatch) (HNDLE, HNDLE, EVENT_HEADER *, void *);
>  } EVENT_REQUEST;
> 
> Index: include/msystem.h
> ===================================================================
> --- include/msystem.h   (revision 3845)
> +++ include/msystem.h   (working copy)
> -"Event limit = DWORD : 0",\
> +"Event limit = DOUBLE : 0",\
> ..................
> -"Files written = INT : 0",\
> +"Files written = DOUBLE : 0",\
> ..................
> -   DWORD event_limit;
> +   double event_limit;
> ..................
> -   INT files_written;
> +   double files_written;
> 
> K.O.
416 | 20 Nov 2007 | Konstantin Olchanski | Info | mhdump: a standalone MIDAS history dump utility
> > I hope people find this program useful. If you have any feedback (patches, bug
> > reports, requests for improvements), please post them as replies to this forum
> > message.
> 
> I wouldn't mind putting this into the midas distribution. Put it under utils/, add
> an entry to the Makefile, and fix that warning:
> 
> 
> mhdump.cxx: In function `int readHstFile(FILE*)':
> mhdump.cxx:161: warning: comparison between signed and unsigned integer expressions

Done and done.

The program mhdump, a standalone decoder for midas history files, is now in midas svn.

K.O.
415 | 17 Oct 2007 | John M O'Donnell | Forum | Adding MIDAS .root-files
The following program handles regular directories in a file, or folders (ugh).
Most histograms are added bin by bin.

For scaler events it is convenient to see the counts as a function of time (à la
scaler history plots in mhttpd).  If the histogram looks like a scaler plot versus
time, then new bins are added onto the end (or into the middle!) of the histogram.

All different versions of cuts are kept.

TTrees are not explicitly supported, so they probably don't do the right thing...

John.

> Dear MIDAS users,
> 
> I want to add several .root-files produced by the MIDAS analyzer, in a fast 
> and convenient way. ROOT's hadd fails because it does not know how to treat 
> TFolders. I guess this problem is not unique to me, so I hope that somebody of 
> you might already have found a solution.
> 
> Why don't I just run "analyzer -r 1 10000"?
> We have taken lots of runs under (rapidly) varying conditions, so it would be 
> lots of "-r". And the analysis is quite involved, so rerunning all data takes 
> about one hour on a fast PC, making this quite painful.
> Therefore, I would like to rerun all data only once, and then add the result 
> files depending on different criteria.
> 
> Of course, I tried to write a script that does the adding. But somehow it is 
> incredibly slow. And I am not the Master of C++, either.
> 
> Is there any deeper reason for MIDAS using TFolders, not TDirectorys? ROOT's 
> hadd can treat TDirectory. Can I simply patch "my" MIDAS? Is there general 
> interest in a change like this? (Does anyone have experience with the speed of 
> hadd?)
> 
> Looking forward to comments from the Forum.
> 
> Cheers,
> 
> Randolf
Attachment 1: histoAdd.cxx
#include <iostream>
#include <vector>
#include <iterator>
#include <cstring>
using namespace std;

#include "TROOT.h"
#include "TFile.h"
#include "TString.h"
#include "TDirectory.h"
#include "TObject.h"
#include "TClass.h"
#include "TKey.h"
#include "TH1.h"
#include "TAxis.h"
#include "TMath.h"
#include "TCutG.h"
#include "TFolder.h"

bool verbose (false);
TFolder *histosFolder (0);

//==============================================================================

void addObject (TObject *o, const TString &prefix, TFolder *opFolder=0);

/** called for each file to do the addition.
  * loops over each object in the file, and
  * uses addObject to dispatch the actual addition.
  */

void add (TFile *file, const TString &prefix) {

//------------------------------------------------------------------------------

  TString dirName (file->GetName());
  if (verbose) cout << " scanning TFile" << endl;
  TString newPrefix (prefix + dirName + "/");

  TIter i (file->GetListOfKeys());
  while (TKey *k = static_cast<TKey *>( i())) {

    TObject *o (file->Get( k->GetName()));
    addObject( o, newPrefix);
  }

  return;
}

//==============================================================================

/** Most histograms are added bin by bin, but if simpleAdd == false,
  * the xaxis values are assumed different, in which case we look up
  * appropriate bin numbers, and use a new extended xaxis if needed.
  *
  * Use simpleAdd=false to accumulate scaler rate histograms.
  */
void add (const TH1 *newh, TH1 *&hsum, TFolder *opFolder) {

//------------------------------------------------------------------------------

  const bool simpleAdd (newh->GetXaxis()->GetTimeDisplay() ? false : true);

  if (!hsum) {
    hsum = (TH1 *)newh->Clone();
    TString title = "histoAdd: ";
    title += hsum->GetTitle();
    if (opFolder) hsum->SetDirectory( 0);
  }
  else if (simpleAdd) hsum->Add( newh);
  else { // extend axis - for 1D histos with equal sized bins

    size_t nBinsSum = hsum->GetNbinsX();
    size_t nBinsNew = newh->GetNbinsX();
    vector<Double_t>bin_contents;
    vector<Double_t>histo_edges;
    Int_t holder_bins;

    /* foundBin is either the overflow bin (if the 2 histograms don't overlap)
     * or it is the bin number in histoA that has the same low edge as the 
     * lowest bin edge in histoB. The only time that histograms can overlap 
     * is when the older scaler.cxx is used to create the histograms with 
     * fixed bin sizes. 
     */
    Int_t foundBin = hsum->FindBin( newh->GetBinLowEdge(1) );

    histo_edges.resize(foundBin);
    bin_contents.resize(foundBin);
    
    for( int i = 1; i <= foundBin; i++ ) {
      histo_edges[i-1] = hsum->GetBinLowEdge(i);
      bin_contents[i-1] = hsum->GetBinContent(i);
    }

    if( (size_t)foundBin < nBinsSum )  {   /* cast avoids signed/unsigned compare */
      //the histos overlap or we have already made holder bins
      holder_bins = 0;
    }
    else        {
      //create a "place holder" histo
      Int_t width = 10;
      holder_bins = (int)((newh->GetXaxis()->GetXmin() 
                    - hsum->GetXaxis()->GetXmax())/width);

      if( holder_bins < width ) holder_bins = width;

      TH1F *bin_holder = new TH1F("bin_holder", "bin_holder", holder_bins, 
                           hsum->GetXaxis()->GetXmax(), 
                           newh->GetXaxis()->GetXmin() );

      histo_edges.resize( foundBin + holder_bins );
      bin_contents.resize( foundBin + holder_bins );

      for( int i = 0; i < holder_bins; i++ )      {
        histo_edges[foundBin+i] = bin_holder->GetBinLowEdge(i+2);
        bin_contents[foundBin+i] = 0;
      }
      delete bin_holder;
    } //end else
  
    histo_edges.resize(  foundBin + holder_bins + nBinsNew+1 ); 
    bin_contents.resize( foundBin + holder_bins + nBinsNew+1 );

    for( int i = 0; i <= nBinsNew; i++ )   {
      histo_edges[i+foundBin+holder_bins] = newh->GetBinLowEdge(i+1);
      bin_contents[i+foundBin+holder_bins] = newh->GetBinContent(i+1);
    }

    hsum->SetBins( histo_edges.size()-1, &histo_edges[0] );

    for ( size_t i=1; i<histo_edges.size(); ++i) {
      hsum->SetBinContent( i, bin_contents[i-1]);
    }

    if (opFolder) {
      //opFolder->Remove( hsum);
      //opFolder->Add( exth);
      hsum->SetDirectory( 0);
    }
  }

  if (verbose) {
    if (simpleAdd) cout << " adding counts";
    else cout << " tagging on bins";
    cout << endl;
  }

  return;
}

//==============================================================================

/** Most cuts are written out just once, but if a cut is different from
  * the most recently written version of a cut with the same name, then
  * another copy of the cut is written out.  Thus if a cut changes during
  * a series of runs, all versions of the cut will be present in the
  * summed file.
  */
void add (const TCutG *o, TCutG *&oOut, TFolder *opFolder) {

//------------------------------------------------------------------------------

  const char *name (o->GetName());
  bool write (false);

  if (!oOut) write = true;
  else {

    Int_t n (o->GetN());
    if (n != oOut->GetN()) write = true;

    else {
      double x1, x2, y1, y2;
      for (Int_t i=0; i<n; ++i) {

        o   ->GetPoint( i, x1, y1);
        oOut->GetPoint( i, x2, y2);

        if ((x1 != x2) || (y1 != y2)) write = true;
      }
    }
  }

  if (write) {

    if (verbose) {
      if (oOut) cout << " changed";
      cout << " TCutG" << endl;
    }

    if (!opFolder) {

      oOut = const_cast<TCutG *>( o);
      o->Write( name, TObject::kSingleKey);

    } else {

      TCutG *clone = static_cast<TCutG *>( o->Clone());
      if (oOut) opFolder->Add( clone);
      oOut = clone;
    }
  } else if (verbose) cout << endl;

  return;
}

//==============================================================================

/** for most objects, we keep just the first version.
  */
void add (const TObject *o, TObject *&oSum, TFolder *opFolder) {

//------------------------------------------------------------------------------

  const char *name (o->GetName());

  if (!oSum) {
    if (verbose) cout << " saving TObject" << endl;
    if (!opFolder) o->Write( name, TObject::kSingleKey);
  } else {
    if (verbose) cout << endl;
  }

  return;
}

//==============================================================================

/** create the new directory and then start adding its contents
  */
void add (TDirectory *dir, TDirectory *&sumDir,
          TFolder *opFolder, const TString &prefix) {

//------------------------------------------------------------------------------

  TDirectory *currentDir (gDirectory);
  TString dirName (dir->GetName());
  if (verbose) cout << " scanning TDirectory" << endl;
  TString newPrefix (prefix + dirName + "/");

  if (!sumDir) sumDir = gDirectory->mkdir( dirName);
  sumDir->cd();

  TIter i (dir->GetListOfKeys());
  while (TKey *k = static_cast<TKey *>( i())) {

    TObject *o (dir->Get( k->GetName()));
    addObject( o, newPrefix, opFolder);
  }

  currentDir->cd();

  return;
}

//==============================================================================

/** create a new folder and then start adding its contents
  */
void add (const TFolder *folder, TFolder *&sumFolder,
          TFolder *parentFolder, const TString &prefix) {

//------------------------------------------------------------------------------

  if (verbose) cout << " scanning TFolder" << endl;
  const char *name (folder->GetName());
  TString newPrefix (prefix + name + "/");

  if (!sumFolder) sumFolder = new TFolder (name, name);
  if (!histosFolder) histosFolder = sumFolder;

  TIter i (folder->GetListOfFolders());
  while (TObject *o = i()) addObject( o, newPrefix, sumFolder);

  return;
}

//==============================================================================

int main (int argc, char **argv) {

//------------------------------------------------------------------------------

  if (argc < 3) {
    cerr << argv[0] << ": out_root_file in_root_file1 in_root_file2 ..." << endl;
    return 1;
  }

  TROOT root ("histoadd", "histoadd");
  root.SetBatch();

  TString opFileName (argv[1]);
  TFile *opFile (TFile::Open( opFileName, "RECREATE"));

  if (!opFile) {
    cerr << argv[0] << ": unable to open file: " << argv[1] << endl;
    return 1;
  }
  --argc;
  ++argv;
... 95 more lines ...
414 | 17 Oct 2007 | Stefan Ritt | Forum | Multi-core CPUs
> I have this beautiful Intel Quadcore with fast disks, but MIDAS obviously
> makes use of only one CPU at a time. Has anybody of you already done some work
> on making MIDAS parallel? Event-based data analysis should be the best
> candidate for this.

There are ring buffer routines rb_xxx for distributed event analysis, but this is
currently only implemented in the front-end framework. These routines are pretty
simple, and their integration into the analyzer should not be very difficult.
Unfortunately I don't have time for that right now. We do our analysis such that we
analyze four different runs in parallel on a quadcore machine.
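
For reference, a minimal sketch of the rb_xxx producer/consumer pattern as used
in the frontend framework; buffer sizes, return-code handling and the readout
and analysis calls are placeholders:

int rbh;
rb_create(10 * MAX_EVENT_SIZE, MAX_EVENT_SIZE, &rbh);

/* producer (readout) thread */
void *wp;
if (rb_get_wp(rbh, &wp, 100) == DB_SUCCESS) {     /* wait up to 100 ms     */
   int size = read_one_event(wp);                 /* hypothetical readout  */
   rb_increment_wp(rbh, size);
}

/* consumer (analysis) thread */
void *rp;
if (rb_get_rp(rbh, &rp, 100) == DB_SUCCESS) {     /* wait up to 100 ms     */
   analyze_event((EVENT_HEADER *) rp);            /* hypothetical analysis */
   rb_increment_rp(rbh, ((EVENT_HEADER *) rp)->data_size + sizeof(EVENT_HEADER));
}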

- Stefan
413 | 17 Oct 2007 | Randolf Pohl | Forum | Multi-core CPUs
Dear Forum,

I have this beautiful Intel Quadcore with fast disks, but MIDAS obviously
makes use of only one CPU at a time. Has anybody of you already done some work
on making MIDAS parallel? Event-based data analysis should be the best
candidate for this.

Has anybody done this with PVM? There is some PVM-related stuff in the MIDAS 
sources, but I got the impression this works only with HBOOK, not with ROOT. 
Or am I wrong?
But then PVM is probably also not the most efficient thing on ONE machine
with multiple CPUs, right? And finally, with PVM we're back to
adding .root-files efficiently (see my previous post).

Any thoughts?

Cheers,

Randolf
412 | 17 Oct 2007 | Randolf Pohl | Forum | Adding MIDAS .root-files
Dear MIDAS users,

I want to add several .root-files produced by the MIDAS analyzer, in a fast 
and convenient way. ROOT's hadd fails because it does not know how to treat 
TFolders. I guess this problem is not unique to me, so I hope that somebody of 
you might already have found a solution.

Why don't I just run "analyzer -r 1 10000"?
We have taken lots of runs under (rapidly) varying conditions, so it would be 
lots of "-r". And the analysis is quite involved, so rerunning all data takes 
about one hour on a fast PC, making this quite painful.
Therefore, I would like to rerun all data only once, and then add the result 
files depending on different criteria.

Of course, I tried to write a script that does the adding. But somehow it is 
incredibly slow. And I am not the Master of C++, either.

Is there any deeper reason for MIDAS using TFolders, not TDirectorys? ROOT's 
hadd can treat TDirectory. Can I simply patch "my" MIDAS? Is there general 
interest in a change like this? (Does anyone have experience with the speed of 
hadd?)

Looking forward to comments from the Forum.

Cheers,

Randolf
411 | 11 Oct 2007 | Stefan Ritt | Bug Report | _syscall0 not available on gcc 4.1.1
> Dear Stefan,
> 
> I am writing on behalf of the LiBeRACE collaboration
> at Berkeley/Livermore.
> 
> We are trying to use midas (2.0.0) for our acquisition system.
> However, we have had some difficulties compiling it on Linux Fedora
> Core 6 with gcc 4.1.1.
> I tried to trace back the problem and found that _syscall0 in
> system.c is actually an obsolete call (since gcc 4.x, apparently).
> Playing with assembly language being beyond my competence, I would
> like to know if you ever came across this situation recently and
> if you have any suggestion(s).

The '_syscall0' function call was replaced by 'syscall' in SVN revision 3583. I
would recommend that you switch to the current SVN version (see
http://ladd00.triumf.ca/~daqweb/doc/midas/html/quickstart.html on how to obtain
the SVN version). If the problem still persists, please let us know.
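
For illustration, the replacement boils down to calling syscall(2) directly
instead of expanding the old _syscall0 macro; the gettid example below is the
typical case, and whether this is the exact spot patched in system.c is an
assumption:

#include <sys/syscall.h>
#include <unistd.h>

/* old style, no longer provided by glibc/gcc 4.x headers:
 *    _syscall0(pid_t, gettid)
 * new style: invoke the system call directly */
pid_t tid = (pid_t) syscall(SYS_gettid);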

- Stefan
410 | 11 Oct 2007 | Stefan Ritt | Bug Report | _syscall0 not available on gcc 4.1.1
Dear Stefan,

I am writing on behalf of the LiBeRACE collaboration
at Berkeley/Livermore.

We are trying to use midas (2.0.0) for our acquisition system.
However, we have had some difficulties compiling it on Linux Fedora
Core 6 with gcc 4.1.1.
I tried to trace back the problem and found that _syscall0 in
system.c is actually an obsolete call (since gcc 4.x, apparently).
Playing with assembly language being beyond my competence, I would
like to know if you ever came across this situation recently and
if you have any suggestion(s).

With my best regards
Julien GIBELIN


------------------------------------------------------
GIBELIN Julien

Lawrence Berkeley National Laboratory
Nuclear Science Division
One Cyclotron Rd.
MS 88R0192
BERKELEY, CA 94720-8101

Tel: +1 (510) 495-2695
Fax: +1 (510) 486-7983
------------------------------------------------------
409 | 08 Oct 2007 | Stefan Ritt | Bug Report | Error in data format - ending blocks on 32-bit boundary x86_64
> Hi,
>     I found that midas banks can be given an extra 32 bits of zeros when
> trying to keep to a 32-bit boundary on my x86_64.
> 
> This can be fixed by changing (in midas.h)
> #define ALIGN8(x)  (((x)+7) & ~7)
> to
> #define ALIGN8(x)  (((x)+3) & ~3)
> 
> Are there any bad consequences of doing this?

Yes. ALIGN8 means 'align to 8-byte boundary' (64-bit), and if you change that, you
break the code at various locations. Furthermore, 8-byte aligned access is faster
on x86_64 than 4-byte aligned access, so you will get a performance penalty. Of
course, if you have very many small banks, the zero padding can cause some
overhead, but in that case you could combine some data into a single bank.
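
To see what the macro actually does, here is the existing definition with a
worked example:

#define ALIGN8(x)  (((x)+7) & ~7)   /* round x up to the next multiple of 8 */

/* ALIGN8(12) = (12+7) & ~7 = 19 & ~7 = 16
 * ALIGN8(16) = (16+7) & ~7 = 23 & ~7 = 16   (already aligned: unchanged)
 * the proposed (((x)+3) & ~3) would round up to multiples of 4 instead,
 * silently dropping the 8-byte guarantee the rest of the code relies on */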
408 | 08 Oct 2007 | Carl Metelko | Bug Report | Error in data format - ending blocks on 32-bit boundary x86_64
Hi,
    I found that midas banks can be given an extra 32 bits of zeros when
trying to keep to a 32-bit boundary on my x86_64.

This can be fixed by changing (in midas.h)
#define ALIGN8(x)  (((x)+7) & ~7)
to
#define ALIGN8(x)  (((x)+3) & ~3)

Are there any bad consequences of doing this?
407 | 02 Oct 2007 | Konstantin Olchanski | Info | ROODY, ROOTANA updates
The ROODY online histogram viewer and the ROOTANA midas analyzer toolkit have been updated to work 
with ROOT version 5.16 and tested on Linux (SL4.4) and MacOS (10.4.10/PPC).

This update includes the library called "TNetDirectory" for access to remote ROOT objects. This library is 
still under development, but is complete enough for use with ROODY. To try it, please specify -P9091 in 
rootana and -Plocalhost:9091 in ROODY.

K.O.
406 | 06 Sep 2007 | Stefan Ritt | Info | Introduction of MIDAS_MAX_EVENT_SIZE
We had the problem that different experiments used different MAX_EVENT_SIZE
values (the MEG experiment actually uses 10 MB!). If an experiment changes the
value in midas.h and accidentally commits it, other experiments are affected.
Therefore I modified midas.h and the Makefile to accept a new environment
variable MIDAS_MAX_EVENT_SIZE. If this variable is set, the Makefile passes its
value to midas.h, where it supersedes the default value, which is currently 4 MB.
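
The override mechanism can be sketched as follows; the exact Makefile plumbing
in midas may differ, this only shows the header-level idea:

/* midas.h, sketch: the Makefile adds -DMIDAS_MAX_EVENT_SIZE=<value>
   to the compiler flags when the environment variable is set */
#ifdef MIDAS_MAX_EVENT_SIZE
#define MAX_EVENT_SIZE MIDAS_MAX_EVENT_SIZE
#else
#define MAX_EVENT_SIZE 0x400000      /* default: 4 MB */
#endif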

PAA: Can you please add this to the documentation at the right spot? Thanks.
405 | 03 Sep 2007 | Stefan Ritt | Bug Report | how to handle end of run?
> I am having problems with handling the end-of-run situation in my midas
> frontend. I have a device that continuously sends data (over USB) and I read
> this data in my "read_event" function.
> 
> Everything is good until the end-of-run, at which time this happens:
> 0) mfe.c calls my read_event() to read the data (loop until the end-of-run
> transition)
> 1) mfe.c calls my end_of_run()
> 2) here, I tell the device "please stop sending data"
> 3) all seems good, but wait!!!
> 4) there is all this data generated between step 0 and step 2 still sitting
> inside the device and it has nowhere to go: the run is ended, the output file is
> closed, my read_event() will never be called ever again (well, until the next run).
> 
> It seems to me mfe.c needs to have one more function, something like
> "pre_end_of_run()" that works like this:
> 0) mfe.c calls my read_event() to read the data (loop until the end-of-run
> transition)
> 1) mfe.c calls pre_end_of_run(), here I tell the device to stop sending data
> 2) mfe.c calls read_event() for the very last time, to give me the opportunity
> to read and send away any data I still may have.
> 3) mfe.c calls the end_of_run(). The run is truly finished.
> 
> Any thoughts?

You can achieve the desired functionality without changing mfe.c:

0) mfe.c calls read_event
1) mfe.c calls end_of_run. Your end_of_run tells the device to stop and flushes
the remaining data. At this point you actually have to re-implement a part of the
mfe.c functionality, but basically you need bm_compose_event() and
bm_send_event(), so it is just a few lines of code. If you want the final event
number to be correct in your equipment, you also need to update eq->events_sent
accordingly.
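
A minimal sketch of such an end_of_run(); the device call, the bank name and the
readout helper are placeholders, and the details of how events are composed
should be checked against mfe.c itself:

INT end_of_run(INT run_number, char *error)
{
   static char buffer[sizeof(EVENT_HEADER) + MAX_EVENT_SIZE];
   EVENT_HEADER *pevent = (EVENT_HEADER *) buffer;
   WORD *pdata;

   stop_device();                            /* hypothetical: stop data flow */

   /* compose and fill one final event, as mfe.c would */
   bm_compose_event(pevent, 1, 0, 0, equipment[0].serial_number++);
   bk_init(pevent + 1);
   bk_create(pevent + 1, "LAST", TID_WORD, (void **) &pdata);
   pdata += read_remaining_data(pdata);      /* hypothetical device readout  */
   bk_close(pevent + 1, pdata);
   pevent->data_size = bk_size(pevent + 1);

   bm_send_event(equipment[0].buffer_handle, pevent,
                 pevent->data_size + sizeof(EVENT_HEADER), SYNC);
   equipment[0].events_sent++;

   return SUCCESS;
}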

Given that 99% of experiments do not need this functionality, I propose that we
keep mfe.c as it is, and that you add the few lines of code to the user part of
your specific frontend.

Stefan