Entry  07 Jun 2007, Randolf Pohl, Forum, crash when analyzing multiple runs offline crash.out
Hello,

I am having a problem with the root-based analyzer. It crashes when I try to 
analyze multiple runs OFFLINE using the "-i run%05d.mid -o result%05d.root -r 
1 2" feature.

I can reproduce the problem with the example experiment which comes with the 
MIDAS distribution:
Running the analyzer ONLINE works fine: one can start and stop runs one after 
the other, roody shows the histograms being reset and then filled again, and 
so on.

But OFFLINE, the analyzer crashes when trying to analyze the SECOND run in a 
sequence. So
./analyzer -i run%05d.mid -o result%05d.root -r 1 1   works (only run 1)
./analyzer -i run%05d.mid -o result%05d.root -r 1 3   dies on run 2
Output attached (I added printf's to the "init" modules, but that's irrelevant 
here).


My own analyzer shows the same effect. There I got the impression the segfault 
happens on the first attempt to Fill/Reset/SetName/etc. a histogram in the 2nd 
run. But with the midas example it looks like the analyzer finishes filling 
histos even for run 2, but then dies in eor().

Can you reproduce the problem?

I run MIDAS on an Intel Quadcore, 64-bit SuSE Linux 10.2.
pohl@lamb2:~/midas/examples/root> gcc --version
gcc (GCC) 4.1.2 20061115 (prerelease) (SUSE Linux)

(maybe 4.1.2 "PRERELEASE" is the problem? See message ID 344)

I am using midas rev. 3674 (April 19, 2007), but I got the impression that no 
change relevant to this problem has been made since. Please correct me if I am 
wrong; then I would try it with Rev HEAD.
(My version already includes the fix to the x86_64 segfault problem of message 
ID 337.)


Best regards,

Randolf
    Reply  09 Jun 2007, Randolf Pohl, Forum, crash when analyzing multiple runs offline 
Hello Stefan,

tree_struct.n_tree keeps counting up from run to run (in book_ttree). This should 
presumably not be the case, since CloseRootOutputFile() frees the trees at eor().

------------------- output ---------------------------
lamb@lamb2:~/midas/root_3705> ./analyzer -e exa_root -i /tmp/midas/examples/root/run%05d.mid -o /tmp/midas/run%05d.root -r 1 2
Root server listening on port 9090...
Running analyzer offline. Stop with "!"
book_ttree: tree_struct.n_tree = 1
book_ttree: tree_struct.n_tree = 2
Set run number 1 in ODB
Load ODB from run 1...OK
/tmp/midas/examples/root/run00001.mid:2722  /tmp/midas/run00001.root:2720  events, 
0.21s
book_ttree: tree_struct.n_tree = 3     <<---- !!!!
book_ttree: tree_struct.n_tree = 4
Set run number 2 in ODB
Load ODB from run 2...OK
/tmp/midas/examples/root/run00002.mid:2347  /tmp/midas/run00002.root:2345  events, 
0.18s

 *** Break *** segmentation violation
----------------- \output ----------------------------

Adding this one line fixes the segfault problem for the root example expt.

----------------- code -------------------------
lamb@lamb2:/data/software/midas/midas_3705/src/src> svn diff mana.c
Index: mana.c
===================================================================
--- mana.c      (revision 3705)
+++ mana.c      (working copy)
@@ -1496,6 +1496,7 @@
    /* delete event tree */
    free(tree_struct.event_tree);
    tree_struct.event_tree = NULL;
+   tree_struct.n_tree = 0;
 
    // go to ROOT root directory
    gROOT->cd();
---------------- \code ---------------------------

Please check if this gives the intended behaviour. I am not very familiar with the 
midas internals.

Unfortunately my own analyzer's segfault problem is not solved by this patch. I 
guess I have to keep searching for a bug on my side.....  :-)


Cheers,

Randolf
    Reply  11 Jun 2007, Randolf Pohl, Forum, crash when analyzing multiple runs offline 
Hello again,

just for the record, in case somebody else runs into the same problem...

I have hunted down "my" segfault problem to the fact that I book histograms not 
in <module>_init, but in <module>_bor. I have to do so, because only in bor do I 
know which histograms to book, as this information comes from the ODB (booking 
only histograms for CAMAC modules which were set to "read" in the ODB). The core 
dump happens on the first access (->Fill, ->SetName,...) of one of these histos 
in the 2nd run analyzed offline ("./analyzer -r n m").

mana.c:bor (line 1854) states that "all ROOT objects created by user module 
bor() functions go to the output file", and then calls gManaOutputFile->cd();
Consequently, the histograms vanish when the file is closed, hence the 
segfault when trying to access them in the 2nd run. (I keep track of existing 
histograms, only booking the missing histos in bor.)

The problem goes away with "gROOT->cd()" in <module>_bor, before fiddling with 
TFolders and booking the histogram.
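
In code, the essence of the workaround is something like this (a minimal
sketch with hypothetical names; hrTDC is a global TH1F*, and I use plain ROOT
booking here instead of the h1_book() wrappers):

INT trig_tdc_bor(INT run_number)
{
   TDirectory *savedir = gDirectory;   // remember where we came from

   gROOT->cd();                        // book in memory, NOT in the output file

   // book only the histograms that do not exist yet (list comes from the ODB)
   if (hrTDC == NULL)
      hrTDC = new TH1F("hrTDC", "raw TDC", 4096, 0., 4096.);

   savedir->cd();                      // restore the original directory
   return SUCCESS;
}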


I do not, however, really understand the intention behind histos booked in bor() 
going only to the file, whereas histos booked in init() go to memory. Could you 
please comment briefly? Maybe I missed the most important point. And what about 
online mode, should this work there?


Thanks a lot in advance,

Randolf
    Reply  12 Jun 2007, Randolf Pohl, Forum, crash when analyzing multiple runs offline 
Hi

> So I guess your solution is not a real solution.

I was not precise enough about what I do. This way the histograms persist in 
memory, but they are also written to every file:

e.g. in module "trig_tdc":

  TDirectory *savedir = gDirectory;  // will restore this afterwards
  gROOT->cd();     // go to ROOT's root directory (i.e. memory, not the file)

  // make sure we are in the right "analyzer module folder"
  TDC_Folder = (TFolder *) gROOT->FindObjectAny("trig_tdc");
  gHistoFolderStack->Add((TObject *) TDC_Folder);

  ...(loop over all TDCs, figure out which histos exist, and which need to be booked)

  open_subfolder("raw4208");
  hrTDC = h1_book(....);   // create histo in memory, but it shows up in the file, too.
  close_subfolder(); //raw4208

  // restore gHistoFolderStack (we added a folder when entering routine)
  gHistoFolderStack->Remove(gHistoFolderStack->Last());

  // restore current directory
  savedir->cd();

When deleting histos I do:

    gManaHistosFolder->RecursiveRemove(*pHisto);
    (*pHisto)->Delete();
    (*pHisto) = NULL;  // for my book-keeping of existing histos.

You don't have to clear the histos explicitly between runs. gManaHistosFolder does this 
magic for you.

> But if you feel like something should be modified in mana.c, please send it to me
> and I can incorporate it into the standard code.

No, the code is fine. I just wanted to explain my problem and a solution to it, because 
somebody else might run into the same problem, too. 

Ciao,

Randolf
Entry  17 Oct 2007, Randolf Pohl, Forum, Adding MIDAS .root-files 
Dear MIDAS users,

I want to add several .root files produced by the MIDAS analyzer, in a fast 
and convenient way. ROOT's hadd fails because it does not know how to treat 
TFolders. I guess this problem is not unique to me, so I hope that some of 
you might already have found a solution.

Why don't I just run "analyzer -r 1 10000"?
We have taken lots of runs under (rapidly) varying conditions, so it would be 
lots of "-r". And the analysis is quite involved, so rerunning all the data takes 
about one hour on a fast PC, which makes this quite painful.
Therefore, I would like to rerun all the data only once, and then add the result 
files according to different criteria.

Of course, I tried to write a script that does the adding. But somehow it is 
incredibly slow. And I am not the Master Of C++, either.

Is there any deeper reason for MIDAS using TFolders, not TDirectorys? ROOT's 
hadd can treat TDirectory. Can I simply patch "my" MIDAS? Is there general 
interest in a change like this? (Does anyone have experience with the speed of 
hadd?)
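
Just to make the discussion concrete, this is the kind of adding macro I have
in mind (an untested sketch; it assumes each file holds one top-level TFolder,
whose name -- "histos" here -- is a guess, and it only treats TH1-derived
objects):

----------------- code -------------------------
void add_folder(TFolder *dst, TFolder *src)
{
   // walk the source folder, adding histos onto their counterparts in dst
   TIter next(src->GetListOfFolders());
   while (TObject *obj = next()) {
      TObject *mine = dst->FindObject(obj->GetName());
      if (!mine) continue;                       // no counterpart: skip
      if (obj->InheritsFrom("TFolder"))
         add_folder((TFolder *)mine, (TFolder *)obj);
      else if (obj->InheritsFrom("TH1"))
         ((TH1 *)mine)->Add((TH1 *)obj);
   }
}

void hadd_folders(const char *out, const char *in1, const char *in2)
{
   TFile *f1 = TFile::Open(in1);
   TFile *f2 = TFile::Open(in2);
   TFolder *sum = (TFolder *)f1->Get("histos");  // folder name is a guess
   add_folder(sum, (TFolder *)f2->Get("histos"));
   TFile fout(out, "RECREATE");
   sum->Write();
   fout.Close();
}
----------------- \code ---------------------------

(FindObject() does a linear search, so this is roughly O(n^2) in the number of
objects -- probably good enough for histograms, but it may be part of why my
script is slow.)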

Looking forward to comments from the Forum.

Cheers,

Randolf
Entry  17 Oct 2007, Randolf Pohl, Forum, Multi-core CPUs 
Dear Forum,

I have this beautiful Intel Quadcore with fast disks, but MIDAS obviously 
only makes use of one CPU at a time. Has anybody of you already done some work 
on making MIDAS parallel? Event-based data analysis should be the best 
candidate for this.

Has anybody done this with PVM? There is some PVM-related stuff in the MIDAS 
sources, but I got the impression this works only with HBOOK, not with ROOT. 
Or am I wrong?
But then PVM is probably also not the most efficient thing on ONE machine 
with multiple CPUs, right? And finally, with PVM we're back to 
adding .root files efficiently (see my previous post).

Any thoughts?

Cheers,

Randolf
Entry  07 Mar 2008, Randolf Pohl, Bug Report, array overflows and other bugs out.0.make
Hi,

I have just compiled MIDAS svn 4132 on a fresh SuSE 10.3 x86_64 system and gcc 
found a bunch of bugs, I guess.

> uname -a
Linux pc 2.6.22.17-0.1-default #1 SMP 2008/02/10 20:01:04 UTC x86_64 x86_64 
x86_64 GNU/Linux

> gcc --version
gcc (GCC) 4.2.1 (SUSE Linux)

I see loads of warnings during compile, most of which I know from earlier 
compiles:
* warning: dereferencing type-punned pointer will break strict-aliasing rules
* warning: pointer targets in passing argument 3 of 'getsockname' differ in
           signedness

But then there is a new one (in fact lots of this one), and brief
inspection suggests this is a true bug with the possibility of truly
nasty consequences.

(1)=========================
src/midas.c:7398: warning: array subscript is above array bounds
Inspection of midas.c:

   if (i == MAX_DEFRAG_EVENTS) {
      /* no buffer available -> no first fragment received */
7398: free(defrag_buffer[i].pevent);
      memset(&defrag_buffer[i].event_id, 0, sizeof(EVENT_DEFRAG_BUFFER));
      cm_msg(MERROR, "bm_defragement_event",
             "Received fragment without first fragment (ID %d) Ser#:%d",
             pevent->event_id & 0x0FFF, pevent->serial_number);
      return;
   }

And midas.c line 7297:
EVENT_DEFRAG_BUFFER defrag_buffer[MAX_DEFRAG_EVENTS];

So if i == MAX_DEFRAG_EVENTS, we call free(defrag_buffer[i].pevent), i.e. we 
access one element past the end of the array. I guess this is an off-by-one bug.
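
If I understand the intent correctly, the fix is simply to drop the
out-of-bounds free()/memset() -- there is no slot to release when no first
fragment was ever received. A sketch (my guess, untested):

   if (i == MAX_DEFRAG_EVENTS) {
      /* no buffer available -> no first fragment received,
         hence nothing to free or clear here */
      cm_msg(MERROR, "bm_defragement_event",
             "Received fragment without first fragment (ID %d) Ser#:%d",
             pevent->event_id & 0x0FFF, pevent->serial_number);
      return;
   }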

(2)==========================
src/midas.c:2958: warning: array subscript is above array bounds

   for (i = 0; i < 13; i++)
2958  if (trans_name[i].transition == transition)
         break;

Holy cow, hard-coded "13" in the code! Should be a #define, shouldn't it?

Now look at midas.c lines 94ff:
struct {
   int transition;
   char name[32];
} trans_name[] = {
   {TR_START, "START"},
   {TR_STOP, "STOP"},
   {TR_PAUSE, "PAUSE"},
   {TR_RESUME, "RESUME"},
   {TR_DEFERRED, "DEFERRED"},
   {0, ""},
};

There is no trans_name[12].

The trans_name[12] access shows up in lines 2894 and 2790, too.
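
Since the array is already terminated by a {0, ""} entry, a sentinel-based
loop would avoid the hard-coded bound altogether (sketch, untested):

   for (i = 0; trans_name[i].transition != 0; i++)
      if (trans_name[i].transition == transition)
         break;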

(3)=============================
mfe.c:
src/mfe.c:412: warning: array subscript is above array bounds
src/mfe.c:311: warning: array subscript is above array bounds
src/mfe.c:340: warning: array subscript is above array bounds

412: device_drv->dd(CMD_GET_DEMAND, device_drv->dd_info, i, 
          &device_drv->mt_buffer->channel[i].array[CMD_GET_DEMAND]);


   for (cmd = CMD_GET_FIRST; cmd <= CMD_GET_LAST; cmd++) {
 (..)
311:  device_drv->mt_buffer->channel[current_channel].array[cmd] = value;

   for (cmd = CMD_GET_FIRST; cmd <= CMD_GET_LAST; cmd++) {
 (..)
340:  device_drv->mt_buffer->channel[i].array[cmd] = value;


CMD_GET_DEMAND is in include/midas.h:
#define CMD_GET_DEMAND               CMD_GET_DIRECT  // = 20

I haven't even tried to understand mfe.c, nor did I read it. 
But I suspect the thing should always be something like
....array[cmd - CMD_GET_FIRST]
so that array[] is indexed from [0], not from an arbitrary number that
depends on the number of commands you insert before line 698 in
midas.h. But could the author of this please check it very carefully?
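
Just to make explicit what I mean (a guess to be verified, not a tested
patch):

   /* index array[] densely from 0 instead of from CMD_GET_FIRST */
   device_drv->mt_buffer->channel[i].array[cmd - CMD_GET_FIRST] = value;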


(4)=========================
src/lazylogger.c:1957: warning: array subscript is below array bounds

if ((channel < 0) && (lazyinfo[channel].hKey != 0))

That is lazyinfo[something below zero].


(5)=============================
More warnings an expert might want to have a look at:

* warning: deprecated conversion from string constant to 'char*'

* src/fal.c:106: warning: non-local variable '<anonymous struct> out_info'
                 uses anonymous type
* src/fal.c:3064: warning: non-local variable '<anonymous struct> eb' uses
                  anonymous type

I attach the full output of make.
Could someone knowledgeable please have a look at these warnings and fix them?

They make me a bit nervous when thinking about data integrity, and
there are now so many that they actually start to hide serious stuff
like the ones I presented.

Oh, I got rid of the "dereferencing type-punned pointer" thing by adding
"-fno-strict-aliasing" as a compiler flag. This was suggested on the Web, and 
it seemed to work during data taking (the data look reasonable :-). Is that a 
possible fix/workaround?
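
Concretely, I appended it to the compiler flags in the Makefile, i.e.
something like this (the exact variable name in the MIDAS Makefile may
differ):

   CFLAGS += -fno-strict-aliasing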


Cheers,

Randolf
Entry  21 Oct 2008, Randolf Pohl, Forum, Mixed CAMAC/VME frontend, SIS3100 
Dear MIDAS-addicts,

I would like to hear your opinion on this:
Until now we've used CAMAC with Hytec 1331 controllers. We're using Yale FADCs 
whose readout takes ages in CAMAC (2048 samples take 2 milliseconds to be 
read). We've got 20+ FADC channels (we usually read only 2-3).

Now we've had the brilliant idea to replace the Yale FADCs with some VME 
digitizer, and we now plan to buy a Struck SIS 1100/3100 PCI-VME controller,
plus four CAEN 1720 8-channel, 12-bit, 250 MHz WFDs.

(1) Can anybody comment on this choice? Good experiences/problems?

We are still using the CAMAC stuff for all other modules (TDCs, ADCs, 
scalers). So my plan is to have ONE frontend that reads both the CAMAC modules 
and the VME modules.

(2) Is it possible to build and run a dual-controller frontend for both CAMAC 
and VME? Does anybody have experience with that? Or is it a stupid idea?

I'd appreciate any hints.

[Edit: We're using Linux]

Thanks a lot,

Randolf
Entry  01 Dec 2008, Randolf Pohl, Bug Report, gcc warning in melog.c for midas 4401 
Hi all,

I have just compiled midas 4401 using SuSE 11.0.
gcc is some odd SuSE version:
gcc version 4.3.1 20080507 (prerelease) [gcc-4_3-branch revision 135036] (SUSE
Linux) 

Anyway, gcc stumbled over melog.c. I don't see the reason myself, but my
experience is that gcc is usually right when complaining about "array subscript
is above array bounds". So, just in case somebody knowledgeable wants to have a
look at this....

Cheers,

Randolf

The gcc output:

[...]
cc -g -O3 -Wall -Wuninitialized -Iinclude -Idrivers -I../mxml -Llinux/lib
-DINCLUDE_FTPLIB   -D_LARGEFILE64_SOURCE -DHAVE_MYSQL -I/usr/include/mysql
-DHAVE_ROOT -pthread -m64 -I/usr/local/root/root_v5.20.00/include/root
-DHAVE_ZLIB -DOS_LINUX -fPIC -Wno-unused-function -o linux/bin/melog
utils/melog.c linux/lib/libmidas.a -lutil -lpthread -lz
utils/melog.c: In function 'submit_elog':
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
utils/melog.c:224: warning: array subscript is above array bounds
cc -g -O3 -Wall -Wuninitialized -Iinclude -Idrivers -I../mxml -Llinux/lib
-DINCLUDE_FTPLIB   -D_LARGEFILE64_SOURCE -DHAVE_MYSQL -I/usr/include/mysql
-DHAVE_ROOT -pthread -m64 -I/usr/local/root/root_v5.20.00/include/root
-DHAVE_ZLIB -DOS_LINUX -fPIC -Wno-unused-function -o linux/bin/mlxspeaker
utils/mlxspeaker.c linux/lib/libmidas.a -lutil -lpthread -lz
Entry  27 Sep 2012, Randolf Pohl, Bug Fix, [PATCH] mana.c compile fix, gz files diff.mana
Hi,

I had to apply the attached patch to convince SuSE Linux 12.2 to compile mana.c.
The gcc version is "(SUSE Linux) 4.6.2".

The problem is that gz{write,close,...} expect a 1st argument of type gzFile (see
zlib.h), whereas out_file is FILE*. In fact, out_file is cast to FILE* even
when we actually work on a gzFile (HAVE_ZLIB).

Could you please confirm that the patch is correct, and possibly apply it to trunk?

I haven't checked if mana works as advertised now.

Cheers,


Randolf
    Reply  01 Feb 2013, Randolf Pohl, Forum, analyzer cannot connect to the statistics database 
The simplest thing is probably to delete all files .[A-Z]*.SHM in the odb directory (the
one you specified in /etc/exptab).
This wipes the ODB, shared memory and all the other obscure stuff, giving you a clean,
fresh start.

Of course it wipes all the valuable stuff, too. That's why it's handy to sometimes open
odbedit and "save odb_<yyyymmdd>.odb". You can reload that file after such a fatal 
"rm .[A-Z]*.SHM".
    Reply  01 Apr 2013, Randolf Pohl, Info, Review of github and bitbucket 
And my 2ct:

Go for git!

I've been using git since 2007 or so, after cvs and svn. Git has some killer features which I can't do without any more:

* No central repo. Have all the history with you on the train.
* Branching and merging, with stable branches and feature branches.
  Happy hacking while my students do analysis on a stable version.
  Or multiple development branches for several features.
  And merging really works, including fixing up merge conflicts.
* "git bisect" for finding which commit introduced a (reproducible) bug.
* "gitk --all"

I use git for everything: software, TeX, even (OpenOffice) Word documents.

Go for git. :-)

Randolf
    Reply  02 Apr 2013, Randolf Pohl, Info, Review of github and bitbucket 
Hi Konstantin,

> > * No central repo. Have all the history with you on the train.
> > * Branching and merging, with stable branches and feature branches.
> >   Happy hacking while my students do analysis on a stable version.
> >   Or multiple development branches for several features.
> 
> This is the part that worries me the most. Without a "central" "authoritative" repository,
> in just a few quick days, everybody will have their own incompatible version of midas.

No! This is probably one of the biggest misunderstandings of the git workflow.

You can of course _define_ one central repo: This is the one that you and Stefan decide to be "the source" (as
Linus does for the kernel). It's like the central svn repo: Only Stefan and you can push to it, and everybody
else will pull from it. Why should I pull MIDAS from some obscure source when your "public" repo is available?

Look at the Linux Kernel: Linus' version is authoritative, even though everybody and his best friend has his
own kernel repo.

So, the main workflow does not change a lot: You collect patches, commit them, and "push" them to the central
repo. All users "pull" from this central repo. This is very much what svn offers.

> 
> I guess I am okey with your private midas diverging from mainstream, but when *I* end up
> with 10 different incompatible versions just in *my* repository, can that be good?

See above: _You_ define what the central repo is.

But: I _bet_ you will very soon have 10 versions in your personal repo, because _you choose_ to do so. It's
just SO much easier. The non-linear history with many branches is a _feature_. I can't live without it any more:


Looking at my MIDAS analyzer:

I have a "public" repo in /pub/git/lamb.git. This is where I publish my analyzer versions. All my collaborators
pull from this.

Then I have my personal repo in ~/src/lamb. 
This is where I develop. When I think something is ready for the public, I merge this branch into the public repo. 

Whenever I start to work on a new feature, I create a branch in my _local_ repo (~/src/lamb).  I can fiddle and
play, not affecting anybody else, because it never sees the public repo.
OK, collaborator A finds a bug. I switch to my local copy of the public version, fix the bug, and push the fix
to the public repo. Then I go back to my (local) feature branch, merge the bug fix, and continue hacking.
Only when the feature is ready do I push it to the public repo.

Things get more interesting as you work on several features simultaneously. You have e.g. 3 topic branches:
(a) is nearly ready, and you want a bunch of people to test it.
    push branch "feature (a)" to the public repo and tell the people which branch to pull.
(b) is WIP, you hack on it without affecting (a).
(c) is bug fixes which may or may not affect (a) or (b).
And so on.

You will soon discover the beauty of several parallel branches.

Plus, git merges are SO simple that you never think about "how to merge".
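
In commands, the workflow I just described looks roughly like this ("public"
standing for whatever you name the central repo's remote):

   git checkout -b feature-a    # local topic branch, invisible to others
   # ... hack, commit, hack, commit ...
   git checkout master          # collaborator A reports a bug
   # ... fix the bug, commit ...
   git push public master       # publish the fix to the central repo
   git checkout feature-a
   git merge master             # pull the bug fix into the feature branch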

> 
> >   And merging really works, including fixing up merge conflicts.
> 
> But somebody still has to do it. With a central repository, the problem takes care of
> itself - each developer has to do their own merging - with svn, you cannot commit
> to the head without merging the head into your code first. But with git, I can just throw
> my changes int some branch out there hoping that somebody else would do the merging.
> But guess what, there aint anybody home but us chickens. We do not have a mad finn here
> to enforce discipline and keep us in shape...

See above: You will have the exact same workflow in git, if you like.




> As an example, look at the HADOOP/HDFS code development, they have at least 3 "mainstream"
> branches going, neither has all the features combined together and each branch has bugs with
> the fixes in a different branch. What a way to run a railroad.

I haven't looked at this. All I can say is: Branches are one of the best features.

> 
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> > * "gitk --all"
> >
> > Go for git. :-)
> 
> Absolutely. For me, as soon as I can wrap my head around this business of "who does all the merging".

Easy: YOU do it.

Keep going as in svn: Collect patches, and send them out.

And then, try "git checkout -b my_first_branch", hack, hack, hack,
"git merge master".

Best,

Randolf


> 
> K.O.
    Reply  03 Apr 2013, Randolf Pohl, Info, Review of github and bitbucket 
> > * "git bisect" for finding which commit introduced a (reproducible) bug.
> 
> I did not know this command, so I read about it. This IS WONDERFUL! I had once (actually with MSCB) the case that a bug was introduced i the last 100 
> revisions, but I did not know in which. So I checked out -1, -2, -3 revisions, then thought a bit, then tried -99, -98, then had the bright idea to try -50, then 
> slowly converged. Later I realised that I should have done a binary search, like -50, if ok try -25, if bad try -37, and so on to iteratively find the offending 
> commit. Finding that there is a command it git which does this automatically is great news.

Even more so considering the nonlinear history (due to branching) in a regular git repo.
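
For reference, the whole procedure boils down to (revision names made up):

   git bisect start
   git bisect bad               # the current HEAD is broken
   git bisect good v1.9.1       # the last revision known to work
   # git checks out the midpoint; build, test, then tell it the verdict:
   git bisect good              # or: git bisect bad
   # ... repeat until git names the offending commit
   git bisect reset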
Entry  11 Feb 2014, Randolf Pohl, Forum, Huge events (>10MB) every second or so 
I'm looking into using MIDAS for an experiment that creates one large event
(20MB or more) every second.

Q1: It looks like I should use EQ_FRAGMENTED. Has this feature been in use
recently? Is it known to work/not work?

More specifically, the computer should initiate 1 second of data taking, start to
suck the data out of the electronics (which may take a while), change some
experimental parameters, and start over. 

Q2: What's the best way to do this? EQ_PERIODIC? 
I cannot guarantee that the time required to read the hardware has an upper bound.
In a standalone program I would simply use a big loop and let the machine execute
it as fast as it can: 1.1s, 1.5s, 1.1s, 1.3s, 2.5s, ..... depending on the HW
deadtimes.
Will this work with EQ_PERIODIC?

(Sorry for these maybe stupid questions, but I have so far only used MIDAS for
externally generated events with <32kB event size.)


Thanks a lot,

Randolf
    Reply  01 Mar 2014, Randolf Pohl, Forum, Huge events (>10MB) every second or so big_event.tgz
Works, and here is how I did it. The attached example is based on the standard MIDAS
example in "src/midas/examples/experiment". 

These are my somewhat unsorted notes; I haven't really tweaked the numbers. But it WORKS.

(1) mlogger writes "last.xml" (hard-coded!), which takes an awful amount of time
    because it writes the complete ODB containing the 10MB bank!
    Just comment out
       // odb_save("last.xml");
    in mlogger.cxx, function
    INT tr_start(INT run_number, char *error)
    (line ~3870 in mlogger rev. 5377, .cxx file included)

(2) frontend.c:
     * the most important declarations are

/* BIG_DATA_BYTES is the data in 1 bank
   BIG_EVENT_SIZE is the event size. It's a bit larger than the bank size
                  because MIDAS needs to add some header bytes, I think
 */

#define BIG_DATA_BYTES  (10*1024*1024)   // 10 MB
#define BIG_EVENT_SIZE  (BIG_DATA_BYTES + 100)

/* maximum event size produced by this frontend */
INT max_event_size = BIG_EVENT_SIZE;

/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * BIG_EVENT_SIZE;

/* buffer size to hold 10 events */
INT event_buffer_size = 10 * BIG_EVENT_SIZE;


     * bk_init() can handle events of at most 32 kB! Use bk_init32() instead
       (see the readout sketch after this list).

     * complete frontend.c is attached

(3) in an xterm do
    # . setup.sh
    # odbedit -s 41943040
          (the first invocation of odbedit must create a large enough odb,
           otherwise you'll get "odb full" errors)
(4) odbedit> load big.odb      
    (attached). Essentials are:

    /Experiment/MAX_EVENT_SIZE = 20971520
    /Experiment/Buffer sizes/SYSTEM = 41943040   <- at least 2 events!

    To avoid excessive latencies when starting/stopping a run, do
    /Logger/ODB Dump = no 
    /Logger/Channels/0/Settings/ODB Dump = no 
     
    and create an Equipment Tree to make the mlogger happy

(5) a few more xterms, always ". setup.sh":
    # mlogger_patched (see (1))
    # ./frontend  (attached)

(6) in your odbedit (4) say "start". You should fill your disk rather quickly.
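
For completeness, a stripped-down sketch of what such a big-bank readout
routine looks like (placeholder data and hypothetical names; the attached
frontend.c has the real thing):

----------------- code -------------------------
INT read_big_event(char *pevent, INT off)
{
   DWORD *pdata;

   bk_init32(pevent);           /* 32-bit bank size: banks may exceed 32 kB */
   bk_create(pevent, "BIG0", TID_DWORD, (void **)&pdata);

   /* fill the bank; here just zeroed placeholder data */
   memset(pdata, 0, BIG_DATA_BYTES);
   pdata += BIG_DATA_BYTES / sizeof(DWORD);

   bk_close(pevent, pdata);
   return bk_size(pevent);
}
----------------- \code ---------------------------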
    Reply  21 Jun 2004, Piotr Zolnierczuk, , min(a,b) in mana.c and mlogger.c 
> > When I compile current cvs-head midas, I get errors about undefined function
> > min(). I do not think min() is in the list of standard C functions, so
> > something else should be used instead, like a MIN(a,b) macro. To make life
> > more interesting, in a few places, there is also a variable called "min".
> > Here is the error:
> > 
> > src/mana.c: In function `INT write_event_ascii(FILE*, EVENT_HEADER*, 
> >    ANALYZE_REQUEST*)':
> > src/mana.c:2571: `min' undeclared (first use this function)
> > src/mana.c:2571: (Each undeclared identifier is reported only once for each 
> >    function it appears in.)
> > make: *** [linux/lib/rmana.o] Error 1
> 
> This is really a miracle to me. The min/max macros are defined both in midas.h
> and msystem.h and worked the last ten years or so. However, I agree that macros
> should follow the standard and use capital letters, so I changed that.

The problem is that /usr/include/c++/3.*/bits/stl_algobase.h contains 
#undef min
#undef max

and in C++ with STL one should really use something like this:
    std::min<INT>(a,b)


Cheers
  Piotr
Entry  21 Jun 2004, Piotr Zolnierczuk, , FAQ: anonymous cvs access? 
Is the midas CVS server set up so that I can pull the newest 
version off the CVS server?

What would be my CVSROOT?
pserver:anoncvs@midas.psi.ch:/cvs/midas *this did not work* :)

Piotr

 
Entry  30 Jun 2004, Piotr Zolnierczuk, , mvme167 problems 
Hi,
 I am really puzzled: I am running the very same midas frontend (miniexp +
camacnul), as far as the sources are concerned (Dec 12, 2003 snapshot), on two
different machines (on the same trusted private network):

1) one is an ancient Pentium/100 MHz laptop with RedHat Linux 7.3, and
2) the other is an even more ancient MVME167 25MHz running VxWorks 5.4.2.

The front end on my Linux PC works just fine, whereas on the MVME167
I get intermittent crashes (most often at the end of the run).
[Correction: the crashes happen, I think, when the frontend wants 
to update the ODB]

The crashes happen in the db_set_record routines.

Any ideas what might be wrong? 
Except that MVME167 is a piece of ...#@!% 

Piotr
    Reply  30 Jun 2004, Piotr Zolnierczuk, , mvme167 problems 
A followup: I traced the problem back to version 1.9.2.

Version 1.9.1 does not have this problem, but 1.9.2 does.
For now I'll stick with 1.9.1.

Piotr
 