ID | Date | Author | Topic | Subject |
375 | 14 May 2007 | Carl Metelko | Forum | Splitting data transfer and control onto different networks | Hi,
thanks for the advice. We do have dual core Xeons so we'll try running
most things on the server. Unless it proves to be a problem we'll run all
MIDAS signals on one network and NFS etc on the other.
I do have one more query about running systems like Konstantin's.
What we would like to do is have a 'mirror' server serving multiple
online monitoring machines, so that the load on the server is constant no matter
the demands on the mirror.
Is there a way to set this up? Or would it be best to have a remote analyser
making short (1min) root files shared with the online monitoring? |
374 | 10 May 2007 | Konstantin Olchanski | Info | RHEL5/SL5 success! | FWIW, I am running latest 32-bit MIDAS on an AM2 dual core AMD machine under 64-bit SL5. Everything
seems to work correctly. K.O.
P.S. For the record, the compiler produces two sets of warnings:
- warning: pointer targets in passing argument 3 of '...' differ in signedness
- warning: dereferencing type-punned pointer will break strict-aliasing rules
(I do not understand the meaning of the second warning. type-punned pointer, huh?)
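For what it's worth, the second warning typically refers to code of roughly the following shape - reading an object through a pointer of an unrelated type, which the optimizer is allowed to assume never happens. This fragment is purely illustrative and not taken from the MIDAS sources:
float f = 1.0f;
int *ip = (int *) &f;   /* pointer "punned" to an unrelated type */
int bits = *ip;         /* with -O2 strict aliasing, gcc warns this may break */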
K.O. |
373 | 10 May 2007 | Konstantin Olchanski | Bug Fix | Fix error reporting from cm_transition() | For some time now, error reporting from cm_transition() was broken.
The typical symptom: when starting a run from mhttpd and a transition error occurred, the run did not
start (good), but the user was presented with the message "Success" in big letters (confusing the user).
Part of the problem was caused by user-written frontends that return an empty error string. Code in
cm_transition() now detects this and shows the numeric value of the error status returned by the frontend.
This is fixed in revision 3681.
The error string "Success" is now returned only when cm_transition() was successful, and other error
reporting inside this function was cleaned up.
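As an illustration of the frontend side (not the actual fix - my_hardware_init() is a made-up placeholder), a frontend transition handler should fill the error string before returning a non-success status, roughly like this:
INT begin_of_run(INT run_number, char *error)
{
   if (my_hardware_init() != SUCCESS) {
      /* give cm_transition() something meaningful to report */
      strcpy(error, "frontend: hardware initialization failed");
      return FE_ERR_HW;
   }
   return SUCCESS;
}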
K.O. |
372 | 10 May 2007 | Konstantin Olchanski | Bug Fix | mhttpd: fix broken boolean arrays in "edit on start" | For some time now, boolean arrays did not work correctly in "/experiment/edit on start". This is now fixed
in rev 3680. K.O. |
371 | 09 May 2007 | Konstantin Olchanski | Forum | Splitting data transfer and control onto different networks | > I'm setting up a system with two networks with the intention of having
> control info (odb, alarm) on the 192.168.0.x
> and the frontend readout on 192.168.1.x
We have some experience with this at TRIUMF - in the TWIST experiment we run the main data-generating
frontends on a private network - it is a supported configuration and it works fine.
We ran into one problem after adding some code to the frontends for stopping the run upon detecting
some data errors - stopping runs requires sending RPC transactions to every midas client, so we had to
add static network routes for routing packets between midas nodes on the private network and midas
nodes on the normal network.
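For illustration, with Carl's numbering such a route on a single-homed private-network node might look like (addresses and gateway are made up):
route add -net 192.168.0.0 netmask 255.255.255.0 gw 192.168.1.1
so that RPC packets addressed to midas clients on the other network are sent via a dual-homed midas host.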
> I'm also trying to separate processes onto different machines, is there
> any way to not have mserver,mhttpd and (mlogger,mevt) all run on the same machine?
mserver runs on the machine with the ODB shared memory by definition (think of it as "nfs server").
mhttpd typically runs on the machine with the ODB shared memory and until recently it had no code for
connecting to the mserver. I recently fixed some of it, and now you can run mhttpd in "history mode"
through the mserver. This is useful for offloading the generation of history plots to another cpu or
another machine. In our case, we run the "history mhttpd" on the machine that holds the history files.
mlogger could be made to run remotely via the mserver, but presently it will refuse to do so, as it has
some code that requires direct access to midas shared memory. If data has to be written to a remote
filesystem, the consensus is that it is more efficient to run mserver locally and let the OS handle remote
filesystem access (NFS, etc).
All other midas programs should be able to run remotely via the mserver.
K.O. |
370 | 09 May 2007 | Stefan Ritt | Forum | Splitting data transfer and control onto different networks | Hi Carl,
so far I have not experienced any problems running ODB & alarm traffic on the same link as
the readout, since the data usually goes frontend->backend and all other messages go
backend->frontend. So before you do something complicated, try the easy way first and
check whether you have problems at all. So far I don't know of anybody who has
separated the network interfaces, so I have no description for that.
You can however separate processes. The easiest way is to buy a multi-core machine. If
you want to use separate computers, note that receiving events over the
network is not very optimized. So you should run the mserver connected to the frontend,
the event builder and mlogger on the same machine. mhttpd can easily live on
another machine, but it does not consume much CPU (unless you
plot long history trends). Running mserver, the event builder and mlogger on the
same machine (dual Xeon mainboard) easily gave me 50 MB/sec (actually disk
limited), and neither CPU was near 100%. If you put any receiving process (like
the event builder, mlogger or the analyzer) on a separate machine, you might see
a bottleneck on the event receiving side of maybe 10 MB/sec or so (never really
tried recently).
Best regards,
Stefan
> Hi,
> I'm setting up a system with two networks with the intention of having
> control info (odb, alarm) on the 192.168.0.x
> and the frontend readout on 192.168.1.x
>
> Is there any easy way of doing this?
> I'm also trying to separate processes onto different machines, is there
> any way to not have mserver,mhttpd and (mlogger,mevt) all run on the same
> machine?
> Thanks,
> Carl Metelko |
369 | 09 May 2007 | Carl Metelko | Forum | Splitting data transfer and control onto different networks | Hi,
I'm setting up a system with two networks with the intention of having
control info (odb, alarm) on the 192.168.0.x
and the frontend readout on 192.168.1.x
Is there any easy way of doing this?
I'm also trying to separate processes onto different machines, is there
any way to not have mserver,mhttpd and (mlogger,mevt) all run on the same
machine?
Thanks,
Carl Metelko |
368 | 10 Apr 2007 | Dan Gastler | Forum | Interrupt code for VME? | Hello,
Is there any example code for using midas for interrupt driven data
collection over VME? I am using a Struck SIS3100 PCI/VME setup to connect to my
VME crate. Thanks,
-Dan |
367 | 09 Apr 2007 | Konstantin Olchanski | Info | move history, elog and alarm functions into separate files | As approved by Stefan, I moved the history (hs_xxx), alarm (al_xxx) and elog (el_xxx) functions out of
midas.c into separate files. Committed as revision 3665. This change should be transparent to all users.
K.O. |
366 | 03 Apr 2007 | Stefan Ritt | Bug Fix | SIGABRT of "mlogger" and possible fix |
Exaos Lee wrote: | Version: svn 3658
Code: mlogger.c
Problem: After execution of "mlogger", a SIGABRT appears.
Compiler: GCC 4.1.2, under Ubuntu Linux 7.04 AMD64
Possible fix:
Change the code in "mlogger.c" from
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
free(rargv[1]);
free(rargv);
to
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
/*free(rargv[1]);*/
free(rargv);
I think it might be a problem with 'rargv[rargc++]="-b"'. |
Actually the line
rargv[rargc] = (char *) malloc(3);
also needs to be removed, since rargv[1] points to "-b", which is static memory and does not need any allocation. I committed the change.
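For clarity, the code after both changes would then look roughly like this (only rargv[0] and the array itself are freed, since only those were allocated):
/* append argument "-b" for batch mode without graphics */
rargv[rargc++] = "-b";     /* points to a static string, no allocation needed */
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory - only rargv[0] was allocated */
free(rargv[0]);
free(rargv); |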
365 | 03 Apr 2007 | Stefan Ritt | Info | Switch to Visual C++ 2005 under Windows | I had to switch to Visual C++ 2005 under Windows. This required the upgrade of
all project files under \midas\nt\ and fixing a few warnings, since the new
compiler is more picky.
Note that in order to use most C RTL functions, you have to define two
preprocessor statements:
#define _CRT_SECURE_NO_DEPRECATE
#define _CRT_NONSTDC_NO_DEPRECATE
either at the beginning of a file (before you include stdio.h), or via the
project property page under C/C++ / Preprocessor / Preprocessor Definitions,
where you also have the WIN32 and the _CONSOLE definitions. I adapted all
project files in the distribution, but for all local projects this has to be
done additionally.
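For example, at the top of a source file this looks like:
#define _CRT_SECURE_NO_DEPRECATE
#define _CRT_NONSTDC_NO_DEPRECATE
#include <stdio.h>   /* the defines must come before the first system include */ |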
364 | 02 Apr 2007 | Exaos Lee | Bug Fix | SIGABRT of "mlogger" and possible fix | Version: svn 3658
Code: mlogger.c
Problem: After execution of "mlogger", a SIGABRT appears.
Compiler: GCC 4.1.2, under Ubuntu Linux 7.04 AMD64
Possible fix:
Change the code in "mlogger.c" from
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
free(rargv[1]);
free(rargv);
to
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
/*free(rargv[1]);*/
free(rargv);
I think it might be a problem with 'rargv[rargc++]="-b"'. You may try the following test program:
#include <stdio.h>
#include <stdlib.h>   /* malloc(), free() */
#include <string.h>   /* strcpy() */
int main(int argc, char** argv)
{
char* pp;
pp = (char *)malloc(sizeof(char)*3);
/* pp = "-b"; */
strcpy(pp,"-b");
printf("PP=%s\n",pp);
free(pp);
return 0;
}
If the commented-out assignment pp = "-b" is used instead of strcpy(), free(pp) raises SIGABRT, since pp then no longer points to malloc'd memory. |
363 | 16 Mar 2007 | Konstantin Olchanski | Info | RFC- history system improvements | > Let's improve the midas history system...
After implementing 2 prototypes, one aspect of the new design is starting to firm up enough to write it down (I do so in a mock FAQ format).
Q. I ran an experiment at triumf, returned home and now I have a bunch of midas history files (*.hst) on my laptop. How do I export these history
data to some useful format?
A. Run "mhdump *.hst | import_to_sql.perl" or "mh2ttree -o history.root *.hst" (export to mysql or ROOT TTree respectively). (TBW:
import_to_sql.perl and mh2ttree)
Q. I have all these midas history files (*.hst), how do I look at them with mhttpd?
A. Follow these steps:
1) setup a blank experiment (no frontends, no analyzer, no mlogger), make sure you can run odbedit and mhttpd.
2) put (symlink) the history files into the history (data) directory
3) run "mhdump -t *.hst > tags.cmd"
4) run "odbedit -c @tags.cmd"
5) start mhttpd, go to the "history" page, setup history plots
6) look at history plots as usual
As always, all the cool stuff is happening behind the scenes:
- in step (3) and (4) we create ODB entries for all events and tags in the history files:
/history/tags/2 = "Trigger" <--- declare event 2 "Trigger" (was equipment "Trigger" while we were taking data)
/history/tags/2:Rate = 1 <--- declare tag "Rate" as an array of one element
/history/tags/2:Scalers = 10 <--- declare tag "Scalers" as an array of 10 elements
... and so forth for each event and tag that ever existed in the history files.
When running a live experiment, the /history/tags entries are created by the mlogger.
- in step (5), the history plot setup page reads the names of history events and tags from /history/tags. The existing code for extracting the
names of events and tags from the /equipment tree goes away. The variables part of history plots is saved the same way as now, i.e.
"Trigger:Rate" and "Trigger:Scalers[3]" - existing plot definitions continue working as before.
- in step (6), to plot the variable named "Trigger:Scalers[3]", the mhttpd code again reads /history/tags to find out that "Trigger" corresponds to
event id 2 and "Scalers" is a valid array (of size 10). This is enough to call hs_read() with the correct arguments to read the existing .hst files - the
existing code will even regenerate the .idx and .def history files.
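Purely as an illustration of this naming convention (not code from mhttpd), such a plot variable can be split into its parts like this:
char event[64], tag[64];
int index = 0;
sscanf("Trigger:Scalers[3]", "%63[^:]:%63[^[][%d]", event, tag, &index);
/* event = "Trigger", tag = "Scalers", index = 3; "Trigger:Rate" would leave index at 0 */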
How do existing experiments migrate to the new code? It is all automatic, no user actions needed. For writing history files, there are no changes.
For reading history files, the "new mhttpd" expects to find /history/tags, which will be created automatically by the "new mlogger".
I am presently cleaning up the implementation of this idea in mhttpd and in the mlogger (only those 2 files are affected - 2 functions in mhttpd.c
and 1 function in mlogger.c) and after some testing it will be ready for committing to midas svn.
The next step would be changes in mlogger.c for recording the history for each variable separately (each variable gets its own event id). I have
this implemented, but interaction with mhttpd is still in flux and I may want to run the new code at CERN for a few months before I deem it stable
enough for general use.
K.O. |
362 | 15 Mar 2007 | Stefan Ritt | Info | mhdump: a standalone MIDAS history dump utility | > I hope people find this program useful. If you have any feedback (patches, bug
> reports, requests for improvements), please post them as replies to this forum
> message.
I wouldn't mind putting this into the midas distribution. Put it under utils/, add
an entry to the Makefile, and fix that warning:
mhdump.cxx: In function `int readHstFile(FILE*)':
mhdump.cxx:161: warning: comparison between signed and unsigned integer expressions
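The usual cause, shown on a made-up fragment rather than the actual mhdump.cxx code:
char buf[1024];
for (int i = 0; i < sizeof(buf); i++)   /* int vs size_t: gcc warns here */
   buf[i] = 0;
/* fix: use an unsigned index (e.g. size_t i) or cast the sizeof() */ |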
361 | 15 Mar 2007 | Konstantin Olchanski | Info | mhdump: a standalone MIDAS history dump utility | While working on improvements to the MIDAS history system, I understood the data
format of the MIDAS .hst files and wrote a standalone program to extract data
from them, called mhdump.
mhdump is intended to be easier to use, compared to mhist. By default it reads
and decodes all the data in the given .hst files, with options to limit the
decoding to specified events and tags, and an option to omit the event and tag
names from the output.
mhdump is completely standalone and does not require MIDAS header files and
libraries.
The mhdump source code and a description of the .hst file format are here:
http://daq-plone.triumf.ca/SR/MIDAS/utils/mhdump/
I hope people find this program useful. If you have any feedback (patches, bug
reports, requests for improvements), please post them as replies to this forum
message.
K.O. |
360 | 06 Mar 2007 | Konstantin Olchanski | Info | committed mhttpd fixes & improvements | I committed the mhttpd fixes and improvements to the history code accumulated while running the ALPHA
experiment at CERN:
- fix crashes and infinite loops while generating history plots (also seen in TWIST)
- permit more than 10 variables per history plot
- let users set their own colours for variables on history plot
- (finally) add gui elements for setting minimum and maximum values on a plot
- implement special "history" mode. In this mode, the master mhttpd does all the work, except for
the generation of history plots, which is done in a separate mhttpd running in history mode, possibly on a
different computer (via ODB variable "/history/url").
I also have improvements to the mhttpd elog code (better formatting of email) and to the "export history
plot as CSV" function, which I will not be committing: for elog, we switched to the standalone elogd; and
CSV export is still very broken, even with my fixes.
The committed fixes have been in use at CERN since last Summer, but I could have introduced errors
during the merge & commit. I am now using this new code, so any new errors should surface and get
squashed quickly.
K.O. |
359 | 03 Mar 2007 | Stefan Ritt | Forum | event builder scalability | > It seems that there's no problem running MIDAS with event builder assembling
> data from ~10 front-ends. How about ~100? One possible solution is to have a
> multi-tiered architecture.
>
> The reason I am asking is that we are in the process of designing an Ethernet
> based DAQ system with front-ends running on embedded computers (Linux/ARM
> CPU/Xilinx FPGA) and MIDAS is one of my options as a DAQ framework.
> I am open for advice/suggestions.
The event builder is a standalone application, not part of the "midas core". It
receives data from N producers and, as a dedicated process, combines the fragments
into events based on their serial number. If it becomes a bottleneck, it
can simply be redesigned and optimized. I currently have good experience with
multi-threaded applications running on multi-core CPUs. Implementing your
multi-tiered architecture as a multi-threaded event builder, where each of ten
threads receives data from ten front-ends, combines them and passes them to a
"collector thread", would make sense to me. Between the threads you can pass data
at many GB/sec, as compared to an Ethernet-based architecture. I recently
implemented the rb_xxx functions inside midas.c, which let you pass data between
threads on a zero-copy basis.
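As a minimal sketch (not production code) of passing events from a readout thread to a collector thread with the rb_xxx API - copy_event_to() and process_event() are made-up helpers, the buffer sizes are illustrative, and the exact prototypes and return codes should be checked in midas.h:
#include "midas.h"
extern int copy_event_to(void *wp);   /* hypothetical: fill wp, return byte count */
extern int process_event(void *rp);   /* hypothetical: consume rp, return byte count */
static int rbh;                       /* ring buffer handle */
void setup(void)
{
   rb_create(10 * 1024 * 1024, 1024 * 1024, &rbh);  /* 10 MB buffer, 1 MB max event */
}
void readout_thread(void)             /* producer: get write pointer, fill, commit */
{
   void *wp;
   if (rb_get_wp(rbh, &wp, 100) == DB_SUCCESS)      /* 100 ms timeout */
      rb_increment_wp(rbh, copy_event_to(wp));
}
void collector_thread(void)           /* consumer: get read pointer, process, release */
{
   void *rp;
   if (rb_get_rp(rbh, &rp, 100) == DB_SUCCESS)
      rb_increment_rp(rbh, process_event(rp));
}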
Inside the core functions of midas there are no limitations whatsoever. All
counters etc. are 32-bit, so you can run 2^32 data consumers etc. You will first
hit the OS process limit. What I'm more concerned about is your network bandwidth. If
you run 100 front-ends each with more than 1 MB/sec, you would hit the 1 GBit limit
of your network card. If you add more network interfaces, you will hit the disk
I/O limit, which is around 100-200 MB/sec even on larger RAID1 disk arrays (unless
you do data compression during event building).
Another limit I see is the run transition. On each start/stop of a run, the
process which wants to start/stop the run has to contact all producers via a TCP
connection. Opening 100 TCP connections will take maybe 10-30 seconds, which is not
very convenient. A multi-threaded approach will help, but this is not (yet)
implemented; maybe you would have to do it yourself.
Another approach would be to put the event building "in front of midas". All
your front-ends run a specific protocol outside of midas. They send their data to
a collecting process which acts as a single front-end to midas. So in the midas
framework you see only a single front-end, which gets its data not from hardware,
but from 100 other nodes. This way you can optimize the protocol between your
front-end nodes and the collector process for your application. Run transitions
can be done through multicast UDP messages for example, which will even work with
1000 front-ends. But you have to implement that yourself.
I would start with the first approach: take the out-of-the-box midas and see how
far you get. If you have access to a normal linux cluster, you can simply run ten
dummy front-ends on each of ten nodes, thus simulating 100 front-ends, and see how
far you get. If the event builder is the bottleneck, do an optimization or
redesign. If the run transitions become your bottleneck, switch to method two. Either
way you can utilize the downstream part of midas, like the logger, the
history system, etc., so you would still gain a lot compared to a design from scratch.
Best regards,
Stefan |
358 | 03 Mar 2007 | Piotr Zolnierczuk | Forum | event builder scalability | Hi all,
thank you for all responses.
It seems that there's no problem running MIDAS with event builder assembling
data from ~10 front-ends. How about ~100? One possible solution is to have a
multi-tiered architecture.
The reason I am asking is that we are in the process of designing an Ethernet
based DAQ system with front-ends running on embedded computers (Linux/ARM
CPU/Xilinx FPGA) and MIDAS is one of my options as a DAQ framework.
I am open for advice/suggestions.
Thanks again
Piotr |
357 | 02 Mar 2007 | Kevin Lynch | Forum | event builder scalability | > Hi there:
> I have a question if there's anybody out there running MIDAS with event builder
> that assembles events from more than just a few front ends (say on the order of
> 0x10 or more)?
> Any experiences with scalability?
>
> Cheers
> Piotr
Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate. If I'm remembering
correctly, the event builder handles about 30-40MB/s. You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details. Volodya solved a significant number of throughput-related
bottlenecks in the year leading up to our 2006 run. |
356 | 27 Feb 2007 | Stefan Ritt | Forum | event builder scalability | > Our bottle neck is (a) compactPCI backplane reading data from waveform digitizers
> to the frontend CPUs and (b) CPU power on the frontend CPUs to analyzer the waveforms.
I forgot to mention that our front-ends at MEG are 2.8 GHz dual Xeon machines with Hyperthreading.
This gives 4 "virtual" CPU cores, which are really necessary for waveform calibration and
analysis. It makes use of the new multi-threading feature in the midas front-end. I
actually run 7 threads (one VME readout, 4 calibration threads, one encoding thread and the
main thread sending data to the backend). This speeds up data taking by a factor of four
compared to a single thread. So if one plans for waveform analysis in the frontend to
reduce the data, I would recommend a box with dual quad cores. |