  382   07 Jun 2007   Randolf Pohl   Forum   crash when analyzing multiple runs offline
Hello,

I am having a problem with the root-based analyzer. It crashes when I try to 
analyze multiple runs OFFLINE using the "-i run%05d.mid -o result%05d.root -r 
1 2" feature.

I can reproduce the problem with the example experiment which comes with the 
MIDAS distribution.
Running the analyzer ONLINE works fine: one can start and stop runs one after 
the other, and roody shows the histograms being reset and then filled again, 
and so on.

But OFFLINE, the analyzer crashes when trying to analyze the SECOND run in a 
sequence. So
./analyzer -i run%05d.mid -o result%05d.root -r 1 1   works (only run 1)
./analyzer -i run%05d.mid -o result%05d.root -r 1 3   dies on run 2
Output is attached (I added printf's to the "init" modules, but that's irrelevant 
here).


My own analyzer shows the same effect. There I got the impression that the segfault 
happens on the first attempt to Fill/Reset/SetName (etc.) a histogram in the 2nd 
run. With the midas example, however, it looks like the analyzer finishes filling 
histos even for run 2, but then dies in eor.

Can you reproduce the problem?

I run MIDAS on an Intel quad-core machine, under 64-bit SuSE Linux 10.2.
pohl@lamb2:~/midas/examples/root> gcc --version
gcc (GCC) 4.1.2 20061115 (prerelease) (SUSE Linux)

(maybe 4.1.2 "PRERELEASE" is the problem? See message ID 344)

I am using midas rev. 3674 (April 19, 2007), but I got the impression that there 
has not been a change relevant to this problem since then. Please correct me if I 
am wrong; then I would try it with Rev HEAD.
(My version already includes the fix for the x86_64 segfault problem of message 
ID 337.)


Best regards,

Randolf
  381   07 Jun 2007   Konstantin Olchanski   Suggestion   RFC - ACLs for midas rpc, mserver, mhttpd access
Running MIDAS at CERN is proving more challenging than I expected. The network environment is not 
as benign as I am used to (i.e. at TRIUMF) and our machines are constantly being probed by something/
somebody.

This already caused failures in the mserver (fixed in midas svn) and I would like to resolve this problem 
once and for all. The age of "nice networks" is over.

The case of the mserver and of the midas rpc servers (every midas application listens for midas rpc 
requests, i.e. run transitions) is simple. The list of machines running midas applications is known ahead 
of time, so we can put them all into a list of permitted machines and deny rpc connections to anybody 
else. I propose we keep this list of permitted mserver clients in "/experiment/security/mserver hosts".

(The already existing "/experiment/security/allowed hosts" mechanism is insufficient: it does not 
prevent the mserver from accepting connections from hostile machines, and talking to them, for 
example giving them the list of available experiments. There is a fair amount of code involved and I do 
not presume to certify any of it as hack-proof or even as crash-proof.)

For mhttpd http:// access control, I thought of using tcp_wrappers, but C-API documentation does not 
exist (I looked), the example code in tcpd.c is way too complicated, editing the ACL in /etc/hosts.allow 
unnecessarily requires root privileges, and none of it would work on Windows.

So I am favouring a home-made hostname or ip-address filter, similar to /etc/hosts.allow, with the ACL 
stored, for example, in "/experiment/security/mhttpd hosts".
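
To be concrete, the core of such a filter could be as simple as the sketch below (nothing in it exists in 
midas yet; the array layout is just a placeholder for whatever we end up reading from the ODB):

#include <string.h>

/* return 1 if "host" (hostname or dotted ip address) matches one of the
   n_allowed entries loaded from the ODB ACL, 0 otherwise */
int host_allowed(const char *host, char allowed[][256], int n_allowed)
{
   int i;
   for (i = 0; i < n_allowed; i++)
      if (strcmp(host, allowed[i]) == 0)
         return 1;
   return 0;   /* default deny */
}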

Any thoughts?

K.O.
  380   22 May 2007   Stefan Ritt   Bug Report   analyzer_init called by odb_load
> Thanks for the quick reply, Stefan.
> 
> Please don't change anything in the code unless you find it really important. I guess 
> changing the analyzer_init prototype will break a lot of code out there?
> 
> In fact, I think I do understand this behavior now.
> And even without your suggested fix there is a simple workaround: I add a static 
> variable to my analyzer_init.cxx file, and do something similar to your bFirst fix.
> 
> In conclusion, commit your fix if it does not harm others. Postpone this commit to a 
> future new version of midas which breaks a lot of things anyway...
> 
> A last question, for me to understand: Why not call db_open_record in 
> ana_begin_of_run then?

I fully agree with you that db_open_record would better go into ana_begin_of_run
(and that analyzer_init should not be called from odb_load), and I fully agree with
you that changing the code would break many experiments. ;-)

So I guess we leave it as it is right now as you suggested.
  379   22 May 2007   Randolf Pohl   Bug Report   analyzer_init called by odb_load
Thanks for the quick reply, Stefan.

Please don't change anything in the code unless you find it really important. I guess 
changing the analyzer_init prototype will break a lot of code out there?

In fact, I think I do understand this behavior now.
And even without your suggested fix there is a simple workaround: I add a static 
variable to my analyzer_init.cxx file, and do something similar to your bFirst fix.
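
A minimal sketch of that workaround (the names here are placeholders, not my actual code):

#include "midas.h"

INT analyzer_init()
{
   static BOOL first_call = TRUE;

   if (first_call) {
      /* one-time work: mallocs, booking of global objects, ... */
      first_call = FALSE;
   }

   /* anything that is safe (or needed) on every call can stay here */
   return SUCCESS;
}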

In conclusion, commit your fix if it does not harm others. Postpone this commit to a 
future new version of midas which breaks a lot of things anyway...

A last question, for me to understand: Why not call db_open_record in 
ana_begin_of_run then?

Cheers,

Randolf
  378   22 May 2007   Stefan Ritt   Bug Report   analyzer_init called by odb_load
The reason to call analyzer_init in odb_load is the following:

Assume you run the analyzer offline, analyzing many files in series. Then assume
that you have /Experiment/Run Parameters, which is actively used by the analyzer
(like beam settings etc.). In this case you do a db_open_record() to map
/Experiment/Run Parameters to the exp_param C structure. For this mapping to work,
the ODB structure and the C structure have to be exactly the same. Now assume that
you changed your run parameters over time, e.g. you added a comment later. Now
you want to analyze several runs, some before and some after the modification.
Both sets have a different structure in /Experiment/Run Parameters, which is a
problem, since the compiled analyzer can only have a single C structure. My "poor"
solution was to call analyzer_init after each loading of the ODB from the *.mid
file. The db_create_record() call matches the C structure to the ODB structure by
modifying the ODB structure if necessary. So if you added one parameter later, this
(modified) structure gets loaded by odb_load, but then it gets adjusted in
analyzer_init().
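
For reference, the mapping in question looks roughly like this in the example analyzer (EXP_PARAM and
EXP_PARAM_STR come from the experiment's experim.h; treat this as a sketch, not the exact code):

#include "midas.h"
#include "experim.h"

EXP_PARAM exp_param;
EXP_PARAM_STR(exp_param_str);

INT analyzer_init()
{
   HNDLE hDB, hKey;

   cm_get_experiment_database(&hDB, NULL);

   /* force the ODB record to match the compiled C structure ... */
   db_create_record(hDB, 0, "/Experiment/Run Parameters", strcomb(exp_param_str));
   db_find_key(hDB, 0, "/Experiment/Run Parameters", &hKey);

   /* ... then map the ODB record onto the C structure */
   if (db_open_record(hDB, hKey, &exp_param, sizeof(exp_param),
                      MODE_READ, NULL, NULL) != DB_SUCCESS) {
      cm_msg(MERROR, "analyzer_init",
             "Cannot open \"/Experiment/Run Parameters\" in ODB");
      return 0;
   }

   return SUCCESS;
}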

I understand now that this case might not happen so often, and that you are more
bothered by the fact that analyzer_init gets called several times. There must
however be a hook in offline analysis through which the user code can correct the
ODB structure. So I propose to add a flag to analyzer_init, such as

INT analyzer_init(BOOL bFirst)
{
}

If bFirst equals TRUE, the function was called from mana_init(); if FALSE, it was
called from odb_load(). Then you can put in code like

INT analyzer_init(BOOL bFirst)
{
   if (bFirst) {
      p = malloc()
      ...
   }
}

If you agree, I will modify the code and commit the change.

- Stefan
  377   22 May 2007   Randolf Pohl   Bug Report   analyzer_init called by odb_load
Hi,

I wonder why mana.c:odb_load() calls analyzer_init(). This way analyzer_init 
is called TWICE or more times:
first from mana.c:mana_init(), for each invocation of the analyzer, and 
second from mana.c:odb_load(), for each run to be analyzed

Isn't this a bug? It can mess up several things (like mallocs) if you don't 
take the necessary precautions. Other module_init functions are correctly 
called only once, before all runs are analyzed.

I have the feeling that odb_load should NOT call analyzer_init. Or am I wrong 
(probably, but please explain it to me)? Do I have to live with it and make sure 
that my beautiful global initialization in analyzer_init is only done once?
:-)

Cheers,

Randolf

And here is the annotated log using the ROOT example experiment 
(several modules changed/added to print their respective names)

:~/midas/examples/root> ./analyzer -e exa_root -i run%05d.mid -r 1 3
 
analyzer_init        <-- ok

Root server listening on port 9090...
adc_calib_init       <-- ok
adc_summing_init     <-- ok
scaler_init          <-- ok
Running analyzer offline. Stop with "!"
Set run number 1 in ODB
Load ODB from run 1...
analyzer_init        <-- not ok, or is it?

OK
run00001.mid:777  events, 0.00s
Set run number 2 in ODB
Load ODB from run 2...
analyzer_init        <-- not ok, or is it?

OK
run00002.mid:7227  events, 0.03s
Set run number 3 in ODB
Load ODB from run 3...
analyzer_init        <-- not ok, or is it?

OK
run00003.mid:13866  events, 0.06s
adc_calib_exit
adc_summing_exit
scaler_exit

analyzer_exit
  376   21 May 2007   Konstantin Olchanski   Info   mhttpd changes to use /History/Tags data
I am slowly committing the changes to the history code. This installment adds
code to mhttpd to use the /History/Tags data (to be) generated by the mlogger.

In a nutshell, the logger fills /History/Tags to "remember" what events,
variables and tags exist in the history files.

This replaces the old code that attempts to guess the contents of history files
by looking at the /Equipment tree.

To ease the transition to the new system, I am leaving all the old code alive
and active in the absence of "/History/Tags" entries.

As soon as one starts using the new mlogger (to be committed), the new tags-based
mhttpd code will activate itself.

K.O.
  375   14 May 2007   Carl Metelko   Forum   Splitting data transfer and control onto different networks
Hi,
   thanks for the advice. We do have dual-core Xeons, so we'll try running
most things on the server. Unless it proves to be a problem, we'll run all
MIDAS signals on one network and NFS etc. on the other.

I do have one more query about running systems like Konstantin's.
What we would like to do is have a 'mirror' server serving multiple
online monitoring machines, so that the load on the server is constant no matter
the demands on the mirror.

Is there a way to set this up? Or would it be best to have a remote analyser
making short (1 min) root files shared with the online monitoring? 
  374   10 May 2007   Konstantin Olchanski   Info   RHEL5/SL5 success!
FWIW, I am running latest 32-bit MIDAS on an AM2 dual core AMD machine under 64-bit SL5. Everything 
seems to work correctly. K.O.

P.S. For the record, the compiler produces two sets of warnings:
- warning: pointer targets in passing argument 3 of '...' differ in signedness
- warning: dereferencing type-punned pointer will break strict-aliasing rules
(I do not understand the meaning of the second warning. type-punned pointer, huh?)
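For the record, the kind of cast that triggers the second warning looks like this (not from the midas 
sources, just an illustration):

#include <stdio.h>

int main(void)
{
   float f = 1.0f;
   unsigned int *ip = (unsigned int *) &f;   /* accessing a float through an int pointer "puns" the type */
   printf("bit pattern = 0x%08x\n", *ip);    /* gcc warns: will break strict-aliasing rules */
   return 0;
}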
K.O.
  373   10 May 2007   Konstantin Olchanski   Bug Fix   Fix error reporting from cm_transition()
For some time now, error reporting from cm_transition() was broken.

A typical symptom: when starting a run from mhttpd and a transition error occurred, the run did not 
start (good), but the user was presented with the message "Success" in big letters (confusing the user).

Part of the problem was caused by user-written frontends that return an empty error string. Code in 
cm_transition() now detects this and shows the numeric value of the error status returned by the frontend.
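
For frontend writers: a transition handler that fails should fill the error string and return a non-SUCCESS 
status. A minimal sketch (the hardware check is a placeholder):

#include <stdio.h>
#include "midas.h"

INT begin_of_run(INT run_number, char *error)
{
   int crate_ok = 1;   /* placeholder for a real hardware check */

   if (!crate_ok) {
      sprintf(error, "Cannot initialize VME crate for run %d", run_number);
      return FE_ERR_HW;   /* non-SUCCESS status plus a non-empty error string */
   }

   return SUCCESS;
}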

This is fixed in revision 3681.

The error string "Success" is now returned only when cm_transition() was successful, and other error 
reporting inside this function was cleaned up.

K.O.
  372   10 May 2007   Konstantin Olchanski   Bug Fix   mhttpd: fix broken boolean arrays in "edit on start"
For some time now, boolean arrays did not work correctly in "/experiment/edit on start". This is now fixed 
in rev 3680. K.O.
  371   09 May 2007   Konstantin Olchanski   Forum   Splitting data transfer and control onto different networks
> I'm setting up a system with two networks with the intension of having
> control info (odb, alarm) on the 192.168.0.x
> and the frontend readout on 192.168.1.x

We have some experience with this at TRIUMF: we run the TWIST experiment with the main 
data-generating frontends on a private network. This is a supported configuration and it works fine.

We ran into one problem after adding code to the frontends to stop the run upon detecting 
data errors: stopping runs requires sending RPC transactions to every midas client, so we had to 
add static network routes for routing packets between midas nodes on the private network and midas 
nodes on the normal network.

> I'm also trying to separate processes onto different machines, is there
> any way to not have mserver,mhttpd and (mlogger,mevt) all run on the same machine?

mserver runs on the machine with the ODB shared memory by definition (think of it as an "nfs server").

mhttpd typically runs on the machine with the ODB shared memory and until recently it had no code for 
connecting to the mserver. I recently fixed some of it, and now you can run mhttpd in "history mode" 
through the mserver. This is useful for offloading the generation of history plots to another cpu or 
another machine. In our case, we run the "history mhttpd" on the machine that holds the history files.

mlogger could be made to run remotely via the mserver, but presently it will refuse to do so, as it has 
some code that requires direct access to midas shared memory. If data has to be written to a remote 
filesystem, the consensus is that it is more efficient to run mserver locally and let the OS handle remote 
filesystem access (NFS, etc).

All other midas programs should be able to run remotely via the mserver.

K.O.
  370   09 May 2007   Stefan Ritt   Forum   Splitting data transfer and control onto different networks
Hi Carl,

so far I have not experienced any problems running ODB & alarm traffic on the same link as
the readout, since the data usually goes frontend->backend and most other messages go
backend->frontend. So before you do something complicated, try the easy way first and
check whether you have problems at all. So far I don't know of anybody who has separated
the network interfaces, so I have no description for how to do that.

You can however separate processes. The easiest way is to buy a multi-core machine. If,
however, you want to use separate computers, note that receiving events over the
network is not very optimized. So you should run the mserver connected to the frontend,
the event builder and mlogger on the same machine. mhttpd can easily live on
another machine, but there is not much CPU consumption from it anyway (unless you
plot long history trends). Running mserver, the event builder and mlogger on the
same machine (dual Xeon mainboard) easily gave me 50 MB/sec (actually disk
limited), and neither CPU was near 100%. If you put any receiving process (like
the event builder, mlogger or the analyzer) on a separate machine, you might see
a bottleneck on the event receiving side of maybe 10 MB/sec or so (never really
tried recently).

Best regards,

  Stefan

> Hi,
>    I'm setting up a system with two networks with the intension of having
> control info (odb, alarm) on the 192.168.0.x
> and the frontend readout on 192.168.1.x
> 
> Is there any easy way of doing this?
> I'm also trying to separate processes onto different machines, is there
> any way to not have mserver,mhttpd and (mlogger,mevt) all run on the same
> machine?
> Thanks,
>        Carl Metelko
  369   09 May 2007   Carl Metelko   Forum   Splitting data transfer and control onto different networks
Hi,
   I'm setting up a system with two networks with the intention of having
control info (odb, alarm) on the 192.168.0.x
and the frontend readout on 192.168.1.x

Is there any easy way of doing this?
I'm also trying to separate processes onto different machines, is there
any way to not have mserver,mhttpd and (mlogger,mevt) all run on the same
machine?
Thanks,
       Carl Metelko
  368   10 Apr 2007   Dan Gastler   Forum   Interrupt code for VME?
Hello, 
   Is there any example code for using midas for interrupt-driven data
collection over VME? I am using a Struck SIS3100 PCI/VME setup to connect to my
VME crate.  Thanks,
  -Dan
  367   09 Apr 2007   Konstantin Olchanski   Info   move history, elog and alarm functions into separate files
As approved by Stefan, I moved the history (hs_xxx), alarm (al_xxx) and elog (el_xxx) functions out of 
midas.c into separate files. Committed as revision 3665. This change should be transparent to all users. 
K.O.
  366   03 Apr 2007   Stefan Ritt   Bug Fix   SIGABRT of "mlogger" and possible fix

Exaos Lee wrote:
Version: svn 3658
Code: mlogger.c
Problem: After executing "mlogger", a "SIGABRT" appears.
Compiler: GCC 4.1.2, under Ubuntu Linux 7.04 AMD64
Possible fix:
Change the code in "mlogger.c" from
   /* append argument "-b" for batch mode without graphics */
   rargv[rargc] = (char *) malloc(3);
   rargv[rargc++] = "-b";

   TApplication theApp("mlogger", &rargc, rargv);

   /* free argument memory */
   free(rargv[0]);
   free(rargv[1]);
   free(rargv);
to
   /* append argument "-b" for batch mode without graphics */
   rargv[rargc] = (char *) malloc(3);
   rargv[rargc++] = "-b";

   TApplication theApp("mlogger", &rargc, rargv);

   /* free argument memory */
   free(rargv[0]);
   /*free(rargv[1]);*/
   free(rargv);

I think it might be the problem of 'rargv[rargc++]="-b"'.


Actually the line
rargv[rargc] = (char *) malloc(3);

also needs to be removed, since rargv[1] ends up pointing to "-b", which is static memory and does not need any allocation. I committed the change.
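
With both lines gone, the affected piece of mlogger.c should end up looking roughly like this (a sketch, 
not checked against the committed revision):

   /* append argument "-b" for batch mode without graphics */
   rargv[rargc++] = "-b";   /* points to a string literal, no malloc needed */

   TApplication theApp("mlogger", &rargc, rargv);

   /* free argument memory */
   free(rargv[0]);
   free(rargv);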
  365   03 Apr 2007   Stefan Ritt   Info   Switch to Visual C++ 2005 under Windows
I had to switch to Visual C++ 2005 under Windows. This required the upgrade of
all project files under \midas\nt\ and fixing a few warnings, since the new
compiler is more picky. 

Note that in order to use most C RTL functions, you have to define two
preprocessor statements:

#define _CRT_SECURE_NO_DEPRECATE
#define _CRT_NONSTDC_NO_DEPRECATE 

either at the beginning of a file (before you include stdio.h), or via the
project property page under C/C++ / Preprocessor / Preprocessor Definitions,
where you also have the WIN32 and the _CONSOLE definitions. I adapted all
project files in the distribution, but for all local projects this has to be
done additionally.
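
For example, at the top of a source file this looks like:

#define _CRT_SECURE_NO_DEPRECATE
#define _CRT_NONSTDC_NO_DEPRECATE

#include <stdio.h>   /* the defines must come before the first system include */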
  364   02 Apr 2007   Exaos Lee   Bug Fix   SIGABRT of "mlogger" and possible fix
Version: svn 3658
Code: mlogger.c
Problem: After executing "mlogger", a "SIGABRT" appears.
Compiler: GCC 4.1.2, under Ubuntu Linux 7.04 AMD64
Possible fix:
Change the code in "mlogger.c" from
   /* append argument "-b" for batch mode without graphics */
   rargv[rargc] = (char *) malloc(3);
   rargv[rargc++] = "-b";

   TApplication theApp("mlogger", &rargc, rargv);

   /* free argument memory */
   free(rargv[0]);
   free(rargv[1]);
   free(rargv);
to
   /* append argument "-b" for batch mode without graphics */
   rargv[rargc] = (char *) malloc(3);
   rargv[rargc++] = "-b";

   TApplication theApp("mlogger", &rargc, rargv);

   /* free argument memory */
   free(rargv[0]);
   /*free(rargv[1]);*/
   free(rargv);

I think it might be the problem of 'rargv[rargc++]="-b"'. You may try the following test program:
#include <stdio.h>
#include <stdlib.h>   /* malloc, free */
#include <string.h>   /* strcpy */

int main(int argc, char** argv)
{
        char* pp;
        pp = (char *)malloc(sizeof(char)*3);
        /* pp = "-b"; */
        strcpy(pp,"-b");
        printf("PP=%s\n",pp);
        free(pp);

        return 0;
}
If using "pp=\"-b\"", a SIGABRT appears.
  363   16 Mar 2007   Konstantin Olchanski   Info   RFC - history system improvements
> Let's improve the midas history system...

After implementing 2 prototypes, one aspect of the new design is starting to firm up enough to write it down (I do so in a mock FAQ format).

Q. I ran an experiment at triumf, returned home and now I have a bunch of midas history files (*.hst) on my laptop. How do I export these history 
data to some useful format?
A. Run "mhdump *.hst | import_to_sql.perl" or "mh2ttree -o history.root *.hst" (export to mysql or ROOT TTree respectively). (TBW: 
import_to_sql.perl and mh2ttree)

Q. I have all these midas history files (*.hst), how do I look at them with mhttpd?
A. Follow these steps:
1) setup a blank experiment (no frontends, no analyzer, no mlogger), make sure you can run odbedit and mhttpd.
2) put (symlink) the history files into the history (data) directory
3) run "mhdump -t *.hst > tags.cmd"
4) run "odbedit -c @tags.cmd"
5) start mhttpd, go to the "history" page, setup history plots
6) look at history plots as usual

As always, all the cool stuff is happening behind the scenes:

- in step (3) and (4) we create ODB entries for all events and tags in the history files:
/history/tags/2 = "Trigger"   <--- declare event 2 "Trigger" (was equipment "Trigger" while we were taking data)
/history/tags/2:Rate = 1       <--- declare tag "Rate" as an array of one element
/history/tags/2:Scalers = 10 <--- declare tag "Scalers" as an array of 10 elements
... and so forth for each event and tag that ever existed in the history files.

When running a live experiment, the /history/tags entries are created by the mlogger.

- in step (5), the history plot setup page reads the names of history events and tags from /history/tags. The existing code for extracting the 
names of events and tags from the /equipment tree goes away. The variables part of history plots is saved the same way as now, i.e. 
"Trigger:Rate" and "Trigger:Scalers[3]" - existing plot definitions continue working as before.

- in step (6), to plot the variable named "Trigger:Scalers[3]", the mhttpd code again reads /history/tags to find out that "Trigger" corresponds to 
event id 2 and "Scalers" is a valid array (of size 10). This is enough to call hs_read() with the correct arguments to read the existing .hst files - the 
existing code will even regenerate the .idx and .def history files.
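
To illustrate the lookup in step (6), resolving an event name to its event id could look something like the sketch below (written against the tag 
layout shown above, not the actual mhttpd code):

#include <stdlib.h>
#include <string.h>
#include "midas.h"

/* sketch: scan /History/Tags for the key whose string value equals event_name
   (e.g. "Trigger"); the key name itself ("2") is the event id */
int find_history_event_id(HNDLE hDB, const char *event_name)
{
   HNDLE hTags, hKey;
   KEY key;
   char value[256];
   INT i, size;

   if (db_find_key(hDB, 0, "/History/Tags", &hTags) != DB_SUCCESS)
      return -1;

   for (i = 0; db_enum_key(hDB, hTags, i, &hKey) == DB_SUCCESS; i++) {
      db_get_key(hDB, hKey, &key);
      if (key.type != TID_STRING || strchr(key.name, ':') != NULL)
         continue;   /* skip the "2:Rate"-style tag entries */
      size = sizeof(value);
      if (db_get_data(hDB, hKey, value, &size, TID_STRING) == DB_SUCCESS &&
          strcmp(value, event_name) == 0)
         return atoi(key.name);
   }
   return -1;   /* not found */
}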

How do existing experiments migrate to the new code? It is all automatic, no user actions needed. For writing history files, there are no changes. 
For reading history files, the "new mhttpd" expects to find /history/tags, which will be created automatically by the "new mlogger".

I am presently cleaning up the implementation of this idea in mhttpd and in the mlogger (only those 2 files are affected - 2 functions in mhttpd.c 
and 1 function in mlogger.c), and after some testing it will be ready for committing to midas svn.

The next step would be changes in mlogger.c for recording the history for each variable separately (each variable gets its own event id). I have 
this implemented, but the interaction with mhttpd is still in flux and I may want to run the new code at CERN for a few months before I deem it 
stable enough for general use.

K.O.