ID | Date | Author | Topic | Subject |
369
|
09 May 2007 |
Carl Metelko | Forum | Splitting data transfer and control onto different networks | Hi,
I'm setting up a system with two networks, with the intention of having
control info (odb, alarm) on the 192.168.0.x
and the frontend readout on 192.168.1.x
Is there any easy way of doing this?
I'm also trying to separate processes onto different machines. Is there
any way to avoid having mserver, mhttpd and (mlogger, mevt) all run on the same
machine?
Thanks,
Carl Metelko |
368
|
10 Apr 2007 |
Dan Gastler | Forum | Interrupt code for VME? | Hello,
Is there any example code for using midas for interrupt-driven data
collection over VME? I am using a Struck SIS3100 PCI/VME setup to connect to my
VME crate. Thanks,
-Dan |
367
|
09 Apr 2007 |
Konstantin Olchanski | Info | move history, elog and alarm functions into separate files | As approved by Stefan, I moved the history (hs_xxx), alarm (al_xxx) and elog (el_xxx) functions out of
midas.c into separate files. Committed as revision 3665. This change should be transparent to all users.
K.O. |
366
|
03 Apr 2007 |
Stefan Ritt | Bug Fix | SIGABRT of "mlogger" and possible fix |
Exaos Lee wrote: | Version: svn 3658
Code: mlogger.c
Problem: After execution of "mlogger", a SIGABRT appears.
Compiler: GCC 4.1.2, under Ubuntu Linux 7.04 AMD64
Possible fix:
Change the code in "mlogger.c" from
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
free(rargv[1]);
free(rargv);
to
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
/*free(rargv[1]);*/
free(rargv);
I think the problem might be with 'rargv[rargc++] = "-b"'. |
Actually, the line
rargv[rargc] = (char *) malloc(3);
also needs to be removed, since rargv[1] points to "-b", which is static memory and does not need any allocation. I committed the change.
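For reference, a sketch of the block as it should look after both changes (reconstructed from the discussion above, not copied from the repository):
/* append argument "-b" for batch mode without graphics;
   "-b" is a string literal, so no malloc() is needed */
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory (only rargv[0] and the rargv array itself were allocated) */
free(rargv[0]);
free(rargv);
|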
365
|
03 Apr 2007 |
Stefan Ritt | Info | Switch to Visual C++ 2005 under Windows | I had to switch to Visual C++ 2005 under Windows. This required the upgrade of
all project files under \midas\nt\ and fixing a few warnings, since the new
compiler is more picky.
Note that in order to use most C RTL functions, you have to define two
preprocessor statements:
#define _CRT_SECURE_NO_DEPRECATE
#define _CRT_NONSTDC_NO_DEPRECATE
either at the beginning of a file (before you include stdio.h), or via the
project property page under C/C++ / Preprocessor / Preprocessor Definitions,
where you also have the WIN32 and the _CONSOLE definitions. I adapted all
project files in the distribution, but local projects have to add these
definitions themselves.
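If you define them in code, they must come before the first C run-time header. A minimal sketch (the two definitions are taken from the text above; the rest is only an illustration):
/* must be defined before any C RTL header is included */
#define _CRT_SECURE_NO_DEPRECATE
#define _CRT_NONSTDC_NO_DEPRECATE

#include <stdio.h>
#include <string.h>

int main(void)
{
   char buf[32];
   strcpy(buf, "hello");   /* no deprecation warning thanks to the defines above */
   printf("%s\n", buf);
   return 0;
}
|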
364
|
02 Apr 2007 |
Exaos Lee | Bug Fix | SIGABRT of "mlogger" and possible fix | Version: svn 3658
Code: mlogger.c
Problem: After execution of "mlogger", a SIGABRT appears.
Compiler: GCC 4.1.2, under Ubuntu Linux 7.04 AMD64
Possible fix:
Change the code in "mlogger.c" from
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
free(rargv[1]);
free(rargv);
to
/* append argument "-b" for batch mode without graphics */
rargv[rargc] = (char *) malloc(3);
rargv[rargc++] = "-b";
TApplication theApp("mlogger", &rargc, rargv);
/* free argument memory */
free(rargv[0]);
/*free(rargv[1]);*/
free(rargv);
I think the problem might be with 'rargv[rargc++] = "-b"'. You may try the following test program:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char** argv)
{
   char* pp;
   pp = (char *)malloc(sizeof(char)*3);
   /* pp = "-b"; */   /* assigning the string literal instead makes free(pp) abort */
   strcpy(pp, "-b");
   printf("PP=%s\n", pp);
   free(pp);
   return 0;
}
If you use 'pp = "-b"' instead of strcpy(), free() is called on a string literal and a SIGABRT appears. |
363
|
16 Mar 2007 |
Konstantin Olchanski | Info | RFC- history system improvements | > Let's improve the midas history system...
After implementing 2 prototypes, one aspect of the new design is starting to firm up enough to write it down (I do so in a mock FAQ format).
Q. I ran an experiment at triumf, returned home and now I have a bunch of midas history files (*.hst) on my laptop. How do I export these history
data to some useful format?
A. Run "mhdump *.hst | import_to_sql.perl" or "mh2ttree -o history.root *.hst" (export to mysql or ROOT TTree respectively). (TBW:
import_to_sql.perl and mh2ttree)
Q. I have all these midas history files (*.hst), how do I look at them with mhttpd?
A. Follow these steps:
1) setup a blank experiment (no frontends, no analyzer, no mlogger), make sure you can run odbedit and mhttpd.
2) put (symlink) the history files into the history (data) directory
3) run "mhdump -t *.hst > tags.cmd"
4) run "odbedit -c @tags.cmd"
5) start mhttpd, go to the "history" page, setup history plots
6) look at history plots as usual
As always, all the cool stuff is happening behind the scenes:
- in steps (3) and (4) we create ODB entries for all events and tags in the history files:
/history/tags/2 = "Trigger" <--- declare event 2 "Trigger" (was equipment "Trigger" while we were taking data)
/history/tags/2:Rate = 1 <--- declare tag "Rate" as an array of one element
/history/tags/2:Scalers = 10 <--- declare tag "Scalers" as an array of 10 elements
... and so forth for each event and tag that ever existed in the history files.
When running a live experiment, the /history/tags entries are created by the mlogger.
- in step (5), the history plot setup page reads the names of history events and tags from /history/tags. The existing code for extracting the
names of events and tags from the /equipment tree goes away. The variables part of history plots is saved the same way as now, i.e.
"Trigger:Rate" and "Trigger:Scalers[3]" - existing plot definitions continue working as before.
- in step (6), to plot the variable named "Trigger:Scalers[3]", the mhttpd code again reads /history/tags to find out that "Trigger" corresponds to
event id 2 and "Scalers" is a valid array (of size 10). This is enough to call hs_read() with the correct arguments to read the existing .hst files - the
existing code will even regenerate the .idx and .def history files.
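As a rough illustration of that lookup, resolving a variable such as "Trigger:Scalers" against /history/tags could look something like the sketch below (this is not the actual mhttpd code; the db_xxx calls are the usual ODB API, but treat the exact usage here as an assumption):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "midas.h"

/* find the event id and array size for a given event/tag name pair,
   using the /history/tags layout described above */
int lookup_history_tag(HNDLE hDB, const char *event_name, const char *tag_name,
                       int *event_id, int *array_size)
{
   HNDLE hKeyRoot, hKey;
   KEY key;
   char str[256];
   INT i, size;

   if (db_find_key(hDB, 0, "/History/Tags", &hKeyRoot) != DB_SUCCESS)
      return 0;

   for (i = 0; db_enum_key(hDB, hKeyRoot, i, &hKey) == DB_SUCCESS; i++) {
      db_get_key(hDB, hKey, &key);
      if (strchr(key.name, ':') != NULL)
         continue;                 /* "2:Rate" etc. are tag entries, skip them here */
      size = sizeof(str);
      db_get_data(hDB, hKey, str, &size, TID_STRING);
      if (strcmp(str, event_name) == 0) {
         *event_id = atoi(key.name);                 /* e.g. "2" -> event id 2 */
         sprintf(str, "%s:%s", key.name, tag_name);  /* e.g. "2:Scalers"       */
         size = sizeof(INT);
         if (db_get_value(hDB, hKeyRoot, str, array_size, &size,
                          TID_INT, FALSE) == DB_SUCCESS)
            return 1;              /* array_size becomes 10 for "Scalers", 1 for "Rate" */
      }
   }
   return 0;
}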
How do existing experiments migrate to the new code? It is all automatic, no user actions needed. For writing history files, there are no changes.
For reading history files, the "new mhttpd" expects to find /history/tags, which will be created automatically by the "new mlogger".
I am presently cleaning up the implementation of this idea in mhttpd and in the mlogger (only those 2 files are affected: 2 functions in mhttpd.c
and 1 function in mlogger.c) and after some testing it will be ready for committing to midas svn.
The next step would be changes in mlogger.c for recording the history for each variable separately (each variable gets its own event id). I have
this implemented, but interaction with mhttpd is still in flux and I may want to run the new code at CERN for a few months before I deem it stable
enough for general use.
K.O. |
362
|
15 Mar 2007 |
Stefan Ritt | Info | mhdump: a standalone MIDAS history dump utility | > I hope people find this program useful. If you have any feedback (patches, bug
> reports, requests for improvements), please post them as replies to this forum
> message.
I wouldn't mind putting this into the midas distribution. Put it under utils/, add
an entry to the Makefile, and fix that warning:
mhdump.cxx: In function `int readHstFile(FILE*)':
mhdump.cxx:161: warning: comparison between signed and unsigned integer expressions |
361
|
15 Mar 2007 |
Konstantin Olchanski | Info | mhdump: a standalone MIDAS history dump utility | While working on improvements to the MIDAS history system, I understood the data
format of the MIDAS .hst files and wrote a standalone program to extract data
from them, called mhdump.
mhdump is intended to be easier to use than mhist. By default it reads
and decodes all the data in the given .hst files, with options to limit the
decoding to specified events and tags, and an option to omit the event and tag
names from the output.
mhdump is completely standalone and does not require MIDAS header files and
libraries.
The mhdump source code and a description of the .hst file format are here:
http://daq-plone.triumf.ca/SR/MIDAS/utils/mhdump/
I hope people find this program useful. If you have any feedback (patches, bug
reports, requests for improvements), please post them as replies to this forum
message.
K.O. |
360
|
06 Mar 2007 |
Konstantin Olchanski | Info | committed mhttpd fixes & improvements | I committed the mhttpd fixes and improvements to the history code accumulated while running the ALPHA
experiment at CERN:
- fix crashes and infinite loops while generating history plots (also seen in TWIST)
- permit more than 10 variables per history plot
- let users set their own colours for variables on history plot
- (finally) add GUI elements for setting minimum and maximum values on a plot
- implement special "history" mode. In this mode, the master mhttpd does all the work, except for
generating history plots, which is done in a separate mhttpd running in history mode, possibly on a
different computer (via ODB variable "/history/url").
I also have improvements to the mhttpd elog code (better formatting of email) and to the "export history
plot as CSV" function, which I will not be committing: for elog, we switched to the standalone elogd; and
CSV export is still very broken, even with my fixes.
The committed fixes have been in use at CERN since last summer, but I could have introduced errors
during the merge & commit. I am now using this new code, so any new errors should surface and get
squashed quickly.
K.O. |
359
|
03 Mar 2007 |
Stefan Ritt | Forum | event builder scalability | > It seems that there's no problem running MIDAS with event builder assembling
> data from ~10 front-ends. How about ~100? One possible solution is to have a
> multi-tiered architecture.
>
> The reason I am asking is that we are in the process of designing an Ethernet
> based DAQ system with front-ends running on embedded computers (Linux/ARM
> CPU/Xilinx FPGA) and MIDAS is one of my options as a DAQ framework.
> I am open for advice/suggestions.
The event builder is a standalone application, not part of the "midas core". It is a
dedicated process that receives data from N producers and combines the fragments into
events based on their serial number. If it becomes a bottleneck, it
can simply be redesigned and optimized. I have recently had good experience with
multi-threaded applications running on multi-core CPUs. Implementing your
multi-tiered architecture as a multi-threaded event builder, where each of ten
threads receives data from ten front-ends, combines them and passes them to the
"collector thread" would make sense to me. Between the threads you can pass data
with many GB/sec, compared to an Ethernet-based architecture. I recently
implemented the rb_xxx functions inside midas.c, which let you pass data between
threads on a zero-copy basis.
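A minimal producer/consumer sketch of that zero-copy pattern (the rb_xxx names come from the description above, but the exact signatures and return codes used here are assumptions, as are the read_event_from_hardware() and process_fragment() helpers):
#include "midas.h"

#define RB_SIZE        (8*1024*1024)   /* total ring buffer size        */
#define MAX_EVENT_SIZE (64*1024)       /* largest single event fragment */

static int rb_handle;

/* readout thread: writes event fragments directly into the ring buffer */
void readout_thread(void)
{
   void *wp;
   int size;

   rb_create(RB_SIZE, MAX_EVENT_SIZE, &rb_handle);
   for (;;) {
      /* obtain a write pointer; wait up to 100 ms for free space */
      if (rb_get_wp(rb_handle, &wp, 100) != DB_SUCCESS)
         continue;
      size = read_event_from_hardware(wp);   /* hypothetical readout, fills wp */
      rb_increment_wp(rb_handle, size);      /* hand the fragment over, no memcpy */
   }
}

/* collector thread: consumes fragments and combines them into events */
void collector_thread(void)
{
   void *rp;
   EVENT_HEADER *pevent;

   for (;;) {
      if (rb_get_rp(rb_handle, &rp, 100) != DB_SUCCESS)
         continue;
      pevent = (EVENT_HEADER *) rp;
      process_fragment(pevent);              /* hypothetical event building */
      rb_increment_rp(rb_handle, sizeof(EVENT_HEADER) + pevent->data_size);
   }
}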
Inside the core functions of midas there are no limitations whatsoever. All
counters etc. are 32-bit, so you can run 2^32 data consumers etc. You will first
hit the OS process limit. What I'm more concerned about is your network bandwidth. If
you run 100 front-ends each with more than 1MB/sec, you would hit the 1GBit limit
of your network card. If you put more network interfaces, you will hit the disk
I/O limit which is around 100-200MB/sec even on larger RAID1 disk arrays (unless
you do data compression during event building).
Another limit I see is the run transition. On each start/stop of a run, the
process which wants to start/stop the run has to contact all producers via a TCP
connection. Opening 100 TCP connections will take maybe 10-30 seconds, which is not
very convenient. A multi-threaded approach will help, but this is not (yet)
implemented, maybe you would have to do it yourself.
Another approach would be to put the event building "in front of midas". All
your front-ends run a specific protocol outside of midas. They send their data to
a collecting process which acts as a single front-end to midas. So in the midas
framework you see only a single front-end, which gets its data not from hardware,
but from 100 other nodes. This way you can optimize the protocol between your
front-end nodes and the collector process for your application. Run transitions
can be done through multicast UDP messages for example, which will even work with
1000 front-ends. But you have to implement that yourself.
I would start with the first approach: take the out-of-the-box midas and see how
far you get. If you have access to a normal Linux cluster, you can simply run ten
dummy front-ends on each of ten nodes, thus simulating 100 front-ends, and see how
far you get. If the event builder is the bottleneck, do an optimization or
redesign. If the run transitions become your bottleneck, switch to method two. Either
way you can utilize the downstream part of midas, like the logger, the
history system, etc., so you would still gain a lot compared to a design from scratch.
Best regards,
Stefan |
358
|
03 Mar 2007 |
Piotr Zolnierczuk | Forum | event builder scalability | Hi all,
thank you for all responses.
It seems that there's no problem running MIDAS with event builder assembling
data from ~10 front-ends. How about ~100? One possible solution is to have a
multi-tiered architecture.
The reason I am asking is that we are in the process of designing an Ethernet
based DAQ system with front-ends running on embedded computers (Linux/ARM
CPU/Xilinx FPGA) and MIDAS is one of my options as a DAQ framework.
I am open for advice/suggestions.
Thanks again
Piotr |
357
|
02 Mar 2007 |
Kevin Lynch | Forum | event builder scalability | > Hi there:
> I have a question: is anybody out there running MIDAS with an event builder
> that assembles events from more than just a few front-ends (say, on the order of
> 0x10 or more)?
> Any experiences with scalability?
>
> Cheers
> Piotr
Mulan (which you hopefully remember with great fondness :-) is currently running
around ten frontends, six of which produce data at any rate. If I'm remembering
correctly, the event builder handles about 30-40MB/s. You could probably ping Tim
Gorringe or his current postdoc Volodya Tishenko (tishenko@pa.uky.edu) if you want
more details. Volodya solved a significant number of throughput-related
bottlenecks in the year leading up to our 2006 run. |
356
|
27 Feb 2007 |
Stefan Ritt | Forum | event builder scalability | > Our bottleneck is (a) the CompactPCI backplane reading data from the waveform digitizers
> to the frontend CPUs and (b) CPU power on the frontend CPUs to analyze the waveforms.
I forgot to mention that our front-ends at MEG are 2.8 GHz dual Xeon machines with Hyperthreading.
This gives 4 "virtual" CPU cores, which are really necessary for waveform calibration and
analysis. It makes use of the new multi-threading feature in the midas front-end. I actually
run 7 threads (one VME readout thread, 4 calibration threads, one encoding thread, and the
main thread sending data to the backend). This speeds up data taking by a factor of four
compared to a single thread. So if one plans for waveform analysis in the frontend to
reduce the data, I would recommend a box with dual quad cores. |
355
|
27 Feb 2007 |
John M O'Donnell | Forum | event builder scalability | At Los Alamos, we have 15+1 frontends - the 15 between them read about 2 or 3
TB/hour and reduce it to 1 to 5 GB/hour which is then sent to the mevb on a 17th
computer. The 16th frontend handles deadtime issues and scalers (small data rate).
Frontends are 1 GHz Pentium 3 machines, and the backend is a 2.8 GHz dual-CPU machine with hyperthreading.
Interconnect is 100Mb ethernet from frontends to switch, and 1Gb ethernet from
switch to backend.
Our bottleneck is (a) the CompactPCI backplane reading data from the waveform digitizers
to the frontend CPUs and (b) CPU power on the frontend CPUs to analyze the waveforms.
John |
354
|
27 Feb 2007 |
Stefan Ritt | Forum | event builder scalability | > Hi there:
> I have a question: is anybody out there running MIDAS with an event builder
> that assembles events from more than just a few front-ends (say, on the order of
> 0x10 or more)?
> Any experiences with scalability?
At the MEG experiment at PSI we run with 5 front-ends (later 8), each running at
about 10 MB/sec. This gives an overall rate of 50MB/sec without any problem. The
CPU load on the backend (2.6 GHz dual Xeon) is 30% for the event builder and 26%
for the logger. The DANCE experiment at Los Alamos runs 17 front-ends if I'm not
mistaken (John?). |
353
|
27 Feb 2007 |
Piotr Zolnierczuk | Forum | event builder scalability | Hi there:
I have a question: is anybody out there running MIDAS with an event builder
that assembles events from more than just a few front-ends (say, on the order of
0x10 or more)?
Any experiences with scalability?
Cheers
Piotr |
352
|
26 Feb 2007 |
Stefan Ritt | Info | Fragmented polled events | Fragmented polled events have been implemented in SVN revision 3625.
Fragmentation is a method of breaking down large (>MB) events into smaller
pieces and sending them through the shared memory buffers, reassembling them at the
output. In the past this was only possible for periodic events (such as large
histograms read out once every few seconds), but now this is also possible for
polled events. |
351
|
26 Feb 2007 |
Stefan Ritt | Info | Usage of event channel for improved throughput | Starting from SVN revision 3642, sending events from the front-end has been revised.
For a long time there has been a special TCP socket established between any front-end and the mserver which can be used to bypass the midas RPC layer completely and send events directly. There was a #define USE_EVENT_CHANNEL, but to my knowledge nobody used it.
While optimizing data throughput for the MEG experiment, I revisited this mechanism and finally got it working. Here are some benchmark tests made with the produce program on two dual-CPU machines connected via Gigabit Ethernet:
Using normal RPC socket:
event size   speed [MB/sec]   CPU usage front-end [%]   CPU usage server [%]
=============================================================================
        40              3                        22                      100
      1000             44                        25                      100
    100000            101                        14                       50
Using new event socket:
event size   speed [MB/sec]   CPU usage front-end [%]   CPU usage server [%]
=============================================================================
        40             12                       100                       34
      1000             99                        58                       59
    100000            101                        14                       43
As can be seen, the CPU load on the server drops significantly for smaller events, since the processing time per event is reduced. Where the transfer was limited by the server, the throughput goes up significantly. For large events the bottleneck on the server side is the memcpy of events, so no big improvement is visible. The saved CPU time can, however, be used to analyze more events, for example.
The event socket is now enabled by default in the front-end by setting
rpc_mode = 1
in mfe.c and should be checked carefully in various experiments. There is a small chance that events get stuck in the buffer cache on the server side at the end of the run, in which case they would show up as the first events of the next run. I know that this problem happened in some experiment before, but that must have been unrelated to the rpc_mode. So please check again and report any problem with the new rpc_mode. |
350
|
26 Feb 2007 |
Stefan Ritt | Info | RFC- support for writing to removable hard disk storage | In the MEG experiment, we simply installed 100 TB of RAID disks and don't need to change anything.
But seriously, you are right that such a system might be beneficial. I propose to extend the current logger code to switch disks. In the current tr_start() function in mlogger, the code checks "subdir_format" to create separate subdirectories, e.g. one per week. One could extend this code in the following way:
- Add an array of strings and name it "Path", such as
/dev/sda1/datadir/
/dev/sdb1/datadir/
- On each stop of the run, check whether the current disk has enough space for one more run (see the sketch after this list). Take either the "Byte limit" of that channel or the actual size of the last run and multiply it by two or so. If the disk is "almost full", switch to the next array element in "Path". Append the file name, such as "/dev/sda1/datadir/run1234.mid", and put this into "Current filename" as feedback for the user. Now write to the new disk/file.
- Add a string like "Execute on switch", which gets called after you have switched to the next disk. This shell script can then handle the unmounting of the full disk, notify the user, etc. This is similar to "/Programs/Execute on start run" in the ODB, but it only gets called if you switch the disk.
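A rough sketch of such a check at the end of a run (purely illustrative: the "Path", "Byte limit" and "Execute on switch" names follow the proposal above; everything else, including the helper names, is an assumption and not actual mlogger code):
#include <stdlib.h>
#include <sys/statvfs.h>

#define N_PATH 8

/* hypothetical channel settings mirroring the proposed ODB entries */
static char   path[N_PATH][256];      /* ".../Path" string array             */
static int    current_path = 0;
static double last_run_bytes = 0;     /* size of the run that just ended     */
static char   exec_on_switch[256];    /* ".../Execute on switch" command     */

static double free_bytes(const char *dir)
{
   struct statvfs st;
   if (statvfs(dir, &st) != 0)
      return 0;
   return (double) st.f_bavail * st.f_frsize;
}

/* called after each run stop: switch to the next "Path" entry if the
   current disk cannot hold roughly two more runs of the last run's size */
void check_disk_switch(void)
{
   double needed = 2.0 * last_run_bytes;

   if (free_bytes(path[current_path]) < needed &&
       current_path + 1 < N_PATH && path[current_path + 1][0]) {
      current_path++;
      if (exec_on_switch[0])
         system(exec_on_switch);      /* unmount the full disk, notify user, ... */
   }
}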
|