ID   Date   Author   Topic   Subject
350   26 Feb 2007   Stefan Ritt   Info   RFC- support for writing to removable hard disk storage
In the MEG experiment, we simply installed 100TB of RAID disks and don't need to change anything ;-)

But seriously, you are right that such a system might be beneficial. I propose to extend the current logger code to switch disks. In the current tr_start() function in mlogger, the code checks "subdir_format" to create separate subdirectories, e.g. one per week. One could extend this code in the following way:

- Add an array of strings and name it "Path", such as

/dev/sda1/datadir/
/dev/sdb1/datadir/

- On each run stop, check whether the current disk has enough space for one more run. Take either the "Byte limit" of that channel, or the actual size of the last run multiplied by two or so. If the disk is "almost full", switch to the next array element in "Path". Append the file name, such as "/dev/sda1/datadir/run1234.mid", and put this into "Current filename" as feedback for the user. Now write to the new disk/file.

- Add a string like "Execute on switch", which gets called after you have switched to the next disk. This shell script can then handle the un-mounting of the full disk, notify the user, etc. This is similar to "/Programs/Execute on start run" in the ODB, but it only gets called when the disk is switched.
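A rough sketch of what that check could look like on run stop (only illustrative, not the actual mlogger implementation; the function and variable names are made up):

/* sketch only - not the real mlogger code */
#include <sys/statvfs.h>
#include <stdio.h>

/* free bytes on the filesystem containing 'path' */
static double free_bytes(const char *path)
{
   struct statvfs st;
   if (statvfs(path, &st) != 0)
      return 0;
   return (double) st.f_bavail * st.f_frsize;
}

/* called on each run stop: switch to the next "Path" entry if the disk is almost full */
void check_disk_switch(char paths[][256], int n_paths, int *current,
                       double last_run_bytes, int run_number, char *current_filename)
{
   double needed = 2.0 * last_run_bytes;      /* "size of the last run times two or so" */

   if (free_bytes(paths[*current]) < needed) {
      *current = (*current + 1) % n_paths;    /* next array element in "Path" */
      /* here: run the "Execute on switch" shell script (un-mount, notify the user, ...) */
   }
   sprintf(current_filename, "%srun%05d.mid", paths[*current], run_number);
}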
347   23 Feb 2007   Konstantin Olchanski   Info   RFC- history system improvements
While running the ALPHA experiment at CERN, we stressed and broke the MIDAS history system. We 
generated about 0.5 GB of history data per day, and this killed the performance of the history plot 
system in mhttpd - we had to wait for *minutes* to look at any plots of any variables.

One way to address this problem could be by changing the way ALPHA slow controls data is collected.

Another way to address this problem could be by improving the midas history system by removing 
some of the existing limitations and inefficiencies, enabling it to handle the ever increasing data 
volumes we keep throwing at it.

I feel the second approach (improving midas) is more useful in general and it appears that big 
improvements can be made by small modifications of existing code. No rewrites of midas are required. 
Read on.

Issue 1: in the mlogger, history is recorded with fairly coarse granularity.

For an equipment, if any variable changes, *all* variables for that equipment are written into the history 
file.

Historically, this worked fairly well for experiments with low data rates (a few history changes per 
minute) and with variables equally distributed between different equipments. But even for a modest 
sized experiment like TRIUMF-E614-TWIST, recording many variables when only one has changed has 
been a visible inefficiency. Current experiments wish to record more history data more frequently, but 
even with the latest and greatest hardware, in the case of ALPHA, this inefficiency has become a 
performance killer.

One could solve this problem by refactoring the data (one variable per equipment/one equipment per 
variable). I find this approach inelegant and contrary to the "midas way" (whatever that is).

An alternative would be to change the mlogger to record history with per-variable granularity. When 
one variable changes, only that variable is recorded. Preliminary examination of the existing code 
indicates that history writing in the mlogger is already structured in a way that makes it easy to 
implement, while the history reading code does not seem to need any changes at all.

Issue 2: all history data is recorded into a single file.

Again, this has worked well historically. In fact, until not so long ago, it was the only sane way to record 
history data because operating systems could not efficiently write data into multiple files at the same 
time. Insufficient data buffering and suboptimal storage allocation strategies all lead to bad 
performance. The latest Linux kernels have largely resolved these issues.

The present problem arises when recording large amounts of history data (say 100 variables) and then 
making a history plot of 1 variable. Because data for the one variable of interest is spread across the 
whole file, effectively, the whole file has to be read into memory, data for 1 variable collected and data 
for the other 99 variables skipped.

In this case, a speed up by a factor of 100 could be obtained by recording (say) one variable per history 
file. (Yes, the history code does use "lseek", but the seek granularity of modern disks is very coarse and, 
in my tests, reading the whole file (streaming) is nearly as fast as seeking through it).
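For what it's worth, a quick way to compare the two access patterns on a given machine is a toy benchmark like the one below (the file name is hypothetical; run each mode in a separate invocation and drop the OS page cache in between, otherwise the numbers are meaningless):

// toy benchmark: sparse seek+read vs. streaming the whole file
#include <chrono>
#include <cstdio>
#include <cstring>

int main(int argc, char **argv)
{
   const char *fname = "090226.hst";                 // hypothetical history file
   bool stream = (argc > 1 && strcmp(argv[1], "stream") == 0);

   auto t0 = std::chrono::steady_clock::now();
   FILE *f = fopen(fname, "rb");
   if (!f)
      return 1;

   char buf[65536];
   if (stream) {
      while (fread(buf, 1, sizeof(buf), f) > 0)      // read the whole file sequentially
         ;
   } else {
      for (long pos = 0; fseek(f, pos, SEEK_SET) == 0 && fread(buf, 8, 1, f) == 1; pos += 800)
         ;                                           // read every 100th 8-byte record via seeks
   }
   fclose(f);

   auto t1 = std::chrono::steady_clock::now();
   printf("%s: %.3f s\n", stream ? "stream" : "seek",
          std::chrono::duration<double>(t1 - t0).count());
   return 0;
}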

One has to be very careful when looking at these numbers and running benchmarks. Modern computers 
with fast disks and large RAM perform very well no matter how history data is stored and organized. 
Performance problems surface only under the load when running the production system, when the 
disks are busy recording the main data stream and all RAM is consumed by user applications doing 
data analysis.

The obvious solution to this problem is to record each variable into a separate data file. This will 
require modifications to the history writing code in the mlogger and to the history reading code in 
mhttpd, mhist & co.

An extra challenge in this task is to minimize changes to the existing code and to keep compatibility 
with the existing data files - new code should be able to read existing data files.

I propose to organize data into subdirectories:
history/equipmentNNN/variableVVV/YYMMDD.hst

This scheme does two good things for the history plotting in mhttpd:

1) note that mhttpd always plots one variable at a time, and the variables are addressed by equipment 
(int) and variable name (string) (plus the array index). In the proposed scheme, the code would know 
exactly which history file to open to get the data, no scanning of directories or seeking inside the 
history file.

2) when setting up mhttpd history plots, the code can easily see what equipment and variables exist 
and *ever existed*. The present code only examines the latest history file and cannot see variables that 
have been deleted (or not yet written into the existing file). For example, one cannot see variables that 
existed in the 2005 history but were removed (or renamed) in 2006. (Yes, it can be done by an expert 
using mhist to examine the 2005 history files and odbedit to manually setup the history plots).
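To illustrate, with this layout the mapping from a plot request to a single small file is trivial (a sketch only; the exact directory and file name conventions here are only illustrative):

// sketch: map (equipment id, variable name, date) to one per-variable history file
#include <string>
#include <cstdio>

std::string history_file(const char *history_dir, int equipment_id,
                         const char *variable_name, int year, int month, int day)
{
   char path[512];
   snprintf(path, sizeof(path), "%s/equipment%03d/%s/%02d%02d%02d.hst",
            history_dir, equipment_id, variable_name, year % 100, month, day);
   return path;   // e.g. "history/equipment002/Scalers/070226.hst"
}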

Over the next few weeks, I will proceed with implementing these two improvements: (1) make the mlogger write 
history with per-variable granularity; (2) split the history file into one file per variable. If my initial 
assessment is correct and the changes indeed are small, contained, non-intrusive and compatible with 
existing history files, I will submit them for inclusion into mainline midas.

K.O.
349   26 Feb 2007   Stefan Ritt   Info   RFC- history system improvements
I agree with what you propose. I'm pretty sure you are right that you will get a significant improvement in the readout speed
of the history system. So far there has been no strong demand for improving the history system, since the performance in
the experiments I was involved in was good. In MEG for example, we have ~20MB of history data per day, and all
plots, even going back some months, can be made in a couple of seconds. Have a look for example at

http://midas.psi.ch/megon00/HS/PCS/Pressures.gif?hscale=1843200&hoffset=-5068800

This plot stretches over two weeks and involves ~500 MB of history data, and is prepared in a couple of seconds.
The key question here is how big the disk cache of the OS is. The above plot does not read all 500 MB, but skips
many data points in order to obtain ~1000 data points (one per pixel) for the requested period. To find these
data points, it reads and scans the history index files (yymmdd.idx), which are only a few percent of the
yymmdd.hst data files. The index file contains only the time stamp, the event id and the location of the event in
the *.hst file. Scanning the index file is as efficient as scanning a history file with a single variable. Now
comes the access of the history file. For ~1000 data points, 1000 locations have to be read. This requires
reading in the FAT table for the history file and accessing the sector clusters containing the data. In the worst
case one has to read 1000 clusters. With a cluster size of 2kB this will be 2MB of data, something which can be
read very quickly. On the MEG system I observe that the first history plot takes about 5 seconds, while all
consecutive plots take about 1 second. This indicates that the FAT information is cached by the OS. As you correctly
indicated, this of course depends on how much memory is available for disk caching, how many processes are
running, etc., and that will ultimately determine how fast your history access is.
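To make this concrete, the plot code conceptually does something like the following when it scans the index (a simplified sketch of the logic described above, not the actual hs_read() code; the record layout shown is abbreviated):

// conceptual sketch of scanning yymmdd.idx to pick ~1000 points for a plot
#include <cstdio>

struct IndexRecord {            // simplified: time stamp, event id, offset into yymmdd.hst
   unsigned int   time;
   unsigned short event_id;
   unsigned int   offset;
};

void scan_index(FILE *idx, FILE *hst, unsigned int t_start, unsigned int t_stop,
                unsigned short wanted_id, int n_pixels)
{
   unsigned int dt = (t_stop - t_start) / n_pixels;   // aim for one data point per pixel
   unsigned int next_t = t_start;
   IndexRecord r;

   while (fread(&r, sizeof(r), 1, idx) == 1) {
      if (r.event_id != wanted_id || r.time < next_t)
         continue;                      // skip other events and too-dense points
      fseek(hst, r.offset, SEEK_SET);   // one cluster-sized read per plotted point
      /* ... read the event, extract the variable, add the point to the plot ... */
      next_t += dt;
      if (next_t > t_stop)
         break;
   }
}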

So if you implement your proposed new scheme, please consider the following:

- Scanning a single-variable file is about the same as scanning the current index file. However, you save the
access to the data file. If you plot several variables together, you have to access several "single variable
files", so your access time scales with the number of variables. In the current system, it's likely that
different variables from the same event are located in the same cluster. So you have to read the history file
once for each variable, but after the first variable the sectors of interest are very likely cached by the OS. So
I would estimate that the break-even point is about 2-3 variables. I mean if you read more than three variables,
your proposed method might get slower than the current one. This is of course not the case if there are very many
events in the history file. In that case the index file might be much bigger, since it gets a new entry if *any*
variable in an event changes. If all index files together are bigger than your disk cache, the system will become
slow (and I guess that's what you see). In MEG, the index file is about 1MB per day, so a few weeks fit easily
into the disk cache.

- In order not to get too much data, the history system needs fine tuning. Each slow control system class driver
has an "update threshold", which is used to determine if a variable has "changed". For some noisy channels, it
might be worth setting the threshold to 3 sigma of the noise level (RMS). This can reduce your history data
dramatically. For some equipment, you might even consider defining a minimum update period. This is done via
"/Equipment/<name>/Common/Log history". If that variable is set to 10, the time between two consecutive history
records is at least 10 seconds. For some temperatures, for example, it might make sense to set this even to one
minute or so, depending on how fast your temperatures change (a short odbedit example is at the end of this message).

- If you implement a per-variable history, you probably have to use the per-event hot link in the ODB. Otherwise
you would exceed the maximum number of hot links, MAX_OPEN_RECORDS, which is currently 256. If you then get a hot link
update, you have to check manually which variable(s) have changed in log_history() in mlogger.c.

- Before you actually go and implement the full system, I would write some small test code to "simulate" the new
scheme. Write some dummy files with the full data you expect in the ALPHA experiment and see what the improvement
is under realistic conditions. Only if you see a big improvement is it worth implementing the full code. Test this
on various machines to get a better overview. Maybe it's worth testing different file systems and cluster sizes as
well.

- If there is an improvement, I'm more than happy to replace the current history code in midas. It might however
not be clean to have a heterogeneous history system, where some files are in the old format and some in the new.
It might be better to write a little conversion routine which converts the old format into the new one, even
omitting records where single variables did not change. This conversion could even be put into the standard
mlogger code and executed automatically when the logger starts and finds some old data files.

Even if the speed improvement is not so big, one will certainly win a lot on disk file size (e.g. if only one
variable out of 100 changes). This probably makes it worth implementing anyhow.
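As an illustration of the "Log history" setting mentioned above (the equipment name "Environment" is only an example; the key name of the class driver's update threshold depends on which class driver you use):

odbedit -c 'set "/Equipment/Environment/Common/Log history" 10'

With this, at most one history record per 10 seconds is written for that equipment.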
363   16 Mar 2007   Konstantin Olchanski   Info   RFC- history system improvements
> Let's improve the midas history system...

After implementing 2 prototypes, one aspect of the new design is starting to firm up enough to write it down (I do so in a mock FAQ format).

Q. I ran an experiment at triumf, returned home and now I have a bunch of midas history files (*.hst) on my laptop. How do I export these history 
data to some useful format?
A. Run "mhdump *.hst | import_to_sql.perl" or "mh2ttree -o history.root *.hst" (export to mysql or ROOT TTree respectively). (TBW: 
import_to_sql.perl and mh2ttree)

Q. I have all these midas history files (*.hst), how do I look at them with mhttpd?
A. Follow these steps:
1) setup a blank experiment (no frontends, no analyzer, no mlogger), make sure you can run odbedit and mhttpd.
2) put (symlink) the history files into the history (data) directory
3) run "mhdump -t *.hst > tags.cmd"
4) run "odbedit -c @tags.cmd"
5) start mhttpd, go to the "history" page, setup history plots
6) look at history plots as usual

As always, all the cool stuff is happening behind the scenes:

- in step (3) and (4) we create ODB entries for all events and tags in the history files:
/history/tags/2 = "Trigger"   <--- declare event 2 "Trigger" (was equipment "Trigger" while we were taking data)
/history/tags/2:Rate = 1       <--- declare tag "Rate" as an array of one element
/history/tags/2:Scalers = 10 <--- declare tag "Scalers" as an array of 10 elements
... and so forth for each event and tag that ever existed in the history files.

When running a live experiment, the /history/tags entries are created by the mlogger.

- in step (5), the history plot setup page reads the names of history events and tags from /history/tags. The existing code for extracting the 
names of events and tags from the /equipment tree goes away. The variables part of history plots is saved the same way as now, i.e. 
"Trigger:Rate" and "Trigger:Scalers[3]" - existing plot definitions continue working as before.

- in step (6), to plot the variable named "Trigger:Scalers[3]", the mhttpd code again reads /history/tags to find out that "Trigger" corresponds to 
event id 2 and "Scalers" is a valid array (of size 10). This is enough to call hs_read() with the correct arguments to read the existing .hst files - the 
existing code will even regenerate the .idx and .def history files.

How do existing experiments migrate to the new code? It is all automatic, no user actions needed. For writing history files, there are no changes. 
For reading history files, the "new mhttpd" expects to find /history/tags, which will be created automatically by the "new mlogger".

I am presently cleaning up the implementation of this idea in mhttpd and in the mlogger (only those 2 files are affected - 2 functions in mhttpd.c 
and 1 function in mlogger.c) and after some testing it will be ready for committing to midas svn.

The next step would be changes in mlogger.c for recording the history for each variable separately (each variable gets its own event id). I have 
this implemented, but interaction with mhttpd is still in flux and I may want to run the new code at CERN for a few months before I deem it stable 
enough for general use.

K.O.
381   07 Jun 2007   Konstantin Olchanski   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
Running MIDAS at CERN is proving more challenging than I expected. The network environment is not 
as benign as I am used to (i.e. at TRIUMF) and our machines are constantly being probed by something/
somebody.

This already caused failures in the mserver (fixed in midas svn) and I would like to resolve this problem 
once and for all. The age of "nice networks" is over.

The case of the mserver and of the midas rpc servers (every midas application listens for midas rpc 
requests, i.e. run transitions) is simple. The list of machines running midas applications is known ahead 
of time, so we can put them all into a list of permitted machines and deny rpc connections to anybody 
else. I propose we keep this list of permitted mserver clients in "/experiment/security/mserver hosts".

(The already existing "/experiment/security/allowed hosts" mechanism is insufficient: it does not 
prevent the mserver from accepting connections from hostile machines, and talking to them, for 
example giving them the list of available experiments. There is a fair amount of code involved and I do 
not presume to certify any of it as hack-proof or even as crash-proof.)

For mhttpd http:// access control, I thought of using tcp_wrappers, but C-API documentation does not 
exist (I looked), the example code in tcpd.c is way too complicated, editing the ACL /etc/hosts.allow 
unnecessarily requires root privileges and none of it would work on Windows.

So I am favouring a home-made hostname or ip-address filter, similar to /etc/hosts.allow, with ACL 
stored, for example, in "/experiment/security/mhttpd hosts".

Any thoughts?

K.O.
383   07 Jun 2007   John M O'Donnell   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
I am in favor of tcp_wrappers.

tcp_wrappers is well understood.

It works well in combination with a firewall.

mhttpd hangs when our security folks scan us.  We are not allowed to block them
with a firewall, but we can use tcpwrappers.

Would it make sense to put the same mechanism on mserver?

The man page for libtcpwrappers.a (taken from the tcpwrappers 7.6 tar ball) is
attached, together with the output after running it through nroff -man.

The odb is too fragile for security.  It is not understood well enough by many
experimenters.

As you can see I am in favor of tcp_wrappers.  This is mainly because it is part
of an existing and tested security model. I don't know about the Windows world, but as you can also see,
I vote for using something that is already part of the Windows security model. Here's an example of how well the integrated
security model works:

    if a person is part of an experiment I make sure they can ssh to the
    experiment's computer

    the same rules could provide them with web access

A second point is that when a change to the security model is needed, it is easy to
keep it current. What if somebody restores an old ODB?  What if they set up a
small test with a new ODB?

If mhttpd used tcp_wrappers, then all our machines here at LANL would already be
configured!  No need for
users to do any root access (though those that need it have it anyway).

John.
Attachment 1: hosts_access.3-nroffed
Attachment 2: hosts_access.3
.TH HOSTS_ACCESS 3
.SH NAME
hosts_access, hosts_ctl, request_init, request_set \- access control library
.SH SYNOPSIS
.nf
#include "tcpd.h"

extern int allow_severity;
extern int deny_severity;

struct request_info *request_init(request, key, value, ..., 0)
struct request_info *request;

struct request_info *request_set(request, key, value, ..., 0)
struct request_info *request;

int hosts_access(request)
struct request_info *request;

int hosts_ctl(daemon, client_name, client_addr, client_user)
char *daemon;
char *client_name;
char *client_addr;
char *client_user;
.fi
.SH DESCRIPTION
The routines described in this document are part of the \fIlibwrap.a\fR
library. They implement a rule-based access control language with
optional shell commands that are executed when a rule fires.
.PP
request_init() initializes a structure with information about a client
request. request_set() updates an already initialized request
structure. Both functions take a variable-length list of key-value
pairs and return their first argument.  The argument lists are
terminated with a zero key value. All string-valued arguments are
copied. The expected keys (and corresponding value types) are:
.IP "RQ_FILE (int)"
The file descriptor associated with the request.
.IP "RQ_CLIENT_NAME (char *)"
The client host name.
.IP "RQ_CLIENT_ADDR (char *)"
A printable representation of the client network address.
.IP "RQ_CLIENT_SIN (struct sockaddr_in *)"
An internal representation of the client network address and port.  The
contents of the structure are not copied.
.IP "RQ_SERVER_NAME (char *)"
The hostname associated with the server endpoint address.
.IP "RQ_SERVER_ADDR (char *)"
A printable representation of the server endpoint address.
.IP "RQ_SERVER_SIN (struct sockaddr_in *)"
An internal representation of the server endpoint address and port.
The contents of the structure are not copied.
.IP "RQ_DAEMON (char *)"
The name of the daemon process running on the server host.
.IP "RQ_USER (char *)"
The name of the user on whose behalf the client host makes the request.
.PP
hosts_access() consults the access control tables described in the
\fIhosts_access(5)\fR manual page.  When internal endpoint information
is available, host names and client user names are looked up on demand,
using the request structure as a cache.  hosts_access() returns zero if
access should be denied.
.PP
hosts_ctl() is a wrapper around the request_init() and hosts_access()
routines with a perhaps more convenient interface (though it does not
pass on enough information to support automated client username
lookups).  The client host address, client host name and username
arguments should contain valid data or STRING_UNKNOWN.  hosts_ctl()
returns zero if access should be denied.
.PP
The \fIallow_severity\fR and \fIdeny_severity\fR variables determine
how accepted and rejected requests may be logged. They must be provided
by the caller and may be modified by rules in the access control
tables.
.SH DIAGNOSTICS
Problems are reported via the syslog daemon.
.SH SEE ALSO
hosts_access(5), format of the access control tables.
hosts_options(5), optional extensions to the base language.
.SH FILES
/etc/hosts.allow, /etc/hosts.deny, access control tables.
.SH BUGS
hosts_access() uses the strtok() library function. This may interfere
with other code that relies on strtok().
.SH AUTHOR
.na
.nf
Wietse Venema (wietse@wzv.win.tue.nl)
Department of Mathematics and Computing Science
Eindhoven University of Technology
Den Dolech 2, P.O. Box 513, 
5600 MB Eindhoven, The Netherlands
\" @(#) hosts_access.3 1.8 96/02/11 17:01:26
384   08 Jun 2007   Stefan Ritt   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
First I have a general question: mserver is started through xinetd, and xinetd has
the options "only_from" and "no_access". This is equivalent to the tcp_wrappers
functionality. Why not use this? It's possible without changing anything in midas.
Or am I missing something?
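For reference, such a restriction might look roughly like this in the xinetd configuration (the port, path and allowed addresses below are only examples):

# /etc/xinetd.d/mserver  (sketch)
service mserver
{
        type            = UNLISTED
        port            = 1175
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/local/bin/mserver
        only_from       = 127.0.0.1 .psi.ch
        disable         = no
}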

If that does not work for some reason, here are some thoughts from my side:

- We don't have much of a problem with malicious hackers, but with institute-wide
security checking. Hackers are only interested in mechanisms where they can obtain
control over thousands of machines (like breaking ssh etc.). The few midas machines
are not a good target for them. But even at PSI there are security scans, which try
to connect to various ports and can crash systems, so I agree that something needs
to be done.

- Whatever we do, it should be consistent on linux and windows and should not rely
on external packages, since I don't want to get into dependencies there.

- I see that both having the security information in the ODB and having it in
external files can be advantageous. There is certainly the aspect of restoring old
ODBs, or keeping several experiments (ODBs) on one machine consistent. On the other
hand, storing the data in the ODB might be preferred by people who are familiar with this
concept and want to change things through mhttpd, for example.

- Having said all that, it would make sense to me to write a simple central routine
access_allowed(), which takes the IP address of a remote client wanting to connect,
and returns true or false. This routine should read /etc/hosts.allow and /etc/hosts.deny
and interpret them, but only the section for midas, and maybe only a subset of the
functionality there (we probably don't need NIS netgroup names, external files and
spawn commands there). If the files /etc/hosts.x do not contain anything about midas
or are not present (Windows!), the routine should look in the ODB under
/experiment/security/mserver/hosts.allow and /experiment/security/mserver/hosts.deny
and use that information instead of the files (a rough sketch of the matching part is at the end of this message).

- We probably need different mechanisms for mserver and for mhttpd. The mserver
clients are usually only a few programs like the front-ends, while one may want to
control an experiment over mhttpd from many more machines. So we should establish a
second ACL for mhttpd. The already present "/experiment/security/allowed hosts" for
mhttpd should be converted into "/Experiment/Security/mhttpd/hosts.allow", and the
function access_allowed() should be used to interpret that, so that we only need to
write it once.
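Here is a rough sketch of the matching part of such an access_allowed() routine (only the wildcard matching against pattern lists; reading /etc/hosts.allow, /etc/hosts.deny or the ODB and resolving the client host name are left out; all names are made up):

// sketch: match a client host name against patterns like "*.cern.ch" or "ladd05.triumf.ca"
#include <string.h>

static int match_pattern(const char *pattern, const char *host)
{
   if (strcmp(pattern, "*") == 0)
      return 1;
   if (pattern[0] == '*') {                           /* suffix match, e.g. "*.cern.ch" */
      size_t lp = strlen(pattern) - 1, lh = strlen(host);
      return lh >= lp && strcmp(host + lh - lp, pattern + 1) == 0;
   }
   return strcmp(pattern, host) == 0;                 /* exact match */
}

/* tcp_wrappers-like semantics: allow list checked first, then deny list, default allow */
int access_allowed(const char *host, const char *allow[], int n_allow,
                   const char *deny[], int n_deny)
{
   for (int i = 0; i < n_allow; i++)
      if (match_pattern(allow[i], host))
         return 1;
   for (int i = 0; i < n_deny; i++)
      if (match_pattern(deny[i], host))
         return 0;
   return 1;
}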
454   07 Mar 2008   Konstantin Olchanski   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
The mhttpd host-based access control list as used by ALPHA at CERN is now committed to
SVN (revision 4135).

When accepting a connection from a remote host, the remote IP address is converted to a
hostname using gethostbyaddr(). If the ODB directory "/experiment/security/mhttpd hosts"
exists, access is permitted only if there is an entry for this hostname. "localhost" is
always permitted.

In other words:

1) To enable the mhttpd access control list, create an ODB directory
"/experiment/security/mhttpd hosts".

2) From this moment, only access from "localhost" is permitted.

3) All connections from remote hosts are rejected with an error written into the midas
log file: Rejecting http connection from 'ladd05.triumf.ca'.

4) To permit access from remote hosts, take the hostname from this error message and
create an entry in "mhttpd hosts": odbedit -> cd "/Experiment/Security/mhttpd hosts" ->
create INT ladd05.triumf.ca

The idea behind this is that mhttpd is running behind an SSL proxy (or an SSH tunnel)
and only accepts connections from this proxy and perhaps from selected machines in the
experiment counting room.

P.S. I considered using tcp_wrappers, but this package does not seem to contain any
simple-to-use function "bool areTheyPermitted(const char* remoteHostname);".

P.P.S. The ODB path name is at variance with Stefan's email. I committed this code
before rereading it, please let me know if I should change the ODB paths.

P.P.P.S. I will now proceed with implementing similar code for the mserver/midas rpc.
Again, the use case is very simple: all machines permitted access to the mserver are
known in advance and can be listed in the access list. All unknown machines should be
rejected.

K.O.
457   10 Mar 2008   Stefan Ritt   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
> When accepting connection from a remote host, the remote IP address is converted to a
> hostname using gethostbyaddr(). If ODB directory "/experiment/security/mhttpd hosts",
> exists, access is permitted if there is an entry for the this hostname. "localhost" is
> always permitted.

While your "positive list" will certainly work, it is much more inflexible than a more
general hosts.allow/hosts.deny with wildcards. Assume some experiment decides it wants to
be controlled from all inside CERN. With hosts.allow/deny you could do

host.deny *
host.allow *.cern.ch

to have everything ending with "cern.ch" allowed. Otherwise it would be a nightmare finding
all possible terminals at CERN and adding them manually. If you are considering modifying your
committed code to this scheme, you could have a look at my elog package, where exactly this
is implemented. You could copy/paste it from there.

After you have finished, also talk to Pierre about documenting this in doxygen (or do it yourself).
460   10 Mar 2008   Konstantin Olchanski   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
> While your "positive list" will certainly work, it is much more inflexible than a more
> general hosts.allow/hosts.deny with wildcards. Assume some experiment decides it wants to
> be controlled from all inside CERN. With hosts.allow/deny you could do

I was going to bring this up later, but since mhttpd does not pass security audits, I believe
the only way it should be run in the modern computing environment is behind
a password-protected SSL proxy. In this case, the allow/deny list is very simple: deny all,
allow localhost (assuming httpd runs on the same host as mhttpd).

Speaking about CERN, "deny all; allow *.cern.ch" is the "default" setting, enforced by the CERN firewall. Our problem is with 
random "*.cern.ch" computers poking at our DAQ and crashing the mserver. Plus we do not want our competition to access our 
DAQ system, so "allow *.cern.ch" is a no go.

But since hosts.allow/hosts.deny is a superset of what I want, and since we can reuse existing code from elogd, I guess I have 
no grounds to object to your suggestion.

I will do the mserver/mrpc this way, then retrofit it into mhttpd. (But have to commit mlogger history changes first!!!).

K.O.
461   10 Mar 2008   Stefan Ritt   Suggestion   RFC- ACLs for midas rpc, mserver, mhttpd access
> I was going to bring this up later, but since mhttpd does not pass security audits, I believe
> the only way it should be run in the modern computing environement is behind
> a password-protected SSL proxy.

I recently built native SSL support into elogd.c and found it was very simple. We could do the same for mhttpd.

> Speaking about CERN, "deny all; allow *.cern.ch" is the "default" setting, enforced by the CERN firewall. Our problem is with 
> random "*.cern.ch" computers poking at our DAQ and crashing the mserver. Plus we do not want our competition to access our 
> DAQ system, so "allow *.cern.ch" is a no go.

I understand your point. But I want to tell you that there are other experiments which want domain-based access. For example, at
PSI some experiments want to allow access from the experimental hall, which is the subdomain 129.129.140.* (there is not so much
competition here ;-) but not from other PSI subdomains. So you would need "deny all; allow 129.129.140.*; allow 129.129.228.*", for
example.

> I will do the mserver/mrpc this way, then retrofit it into mhttpd. (But have to commit mlogger history changes first!!!).

Agree.
1888   26 Apr 2020   Yu Chen (SYSU)   Forum   Questions and discussions on the Frontend ODB tree structure.
Dear MIDAS developers and colleagues,

    This is Yu CHEN of the School of Physics, Sun Yat-sen University, China, working in the PandaX-III collaboration, an experiment under development to search for neutrinoless double 
beta decay. We are working on the DAQ and slow control systems and would like to use the Midas framework to communicate with the custom hardware systems, generally via Ethernet 
interfaces. So currently we are focusing on the development of the FRONTEND program of Midas and have some questions and discussions on its ODB structure. Since I'm still not 
experienced in the framework, it would be much appreciated if you could provide some suggestions on the topic.
    The current structure of the frontend ODB tree we have designed, together with our understanding of it, is as follows:
      /Equipment/<frontend_name>/
        ->  /Common/: Basic controls and left unchanged.
        ->  /Variables/: (ODB records with MODE_READ) Monitored values that are AUTOMATICALLY updated from the bank data within each packed event. It is done by the update_odb() 
function in mfe.cxx.
        ->  /Statistics/: (ODB records with MODE_WRITE) Default status values that are AUTOMATICALLY updated by the calling of db_send_changed_records() within the mfe.cxx.
        ->  /Settings/: All the user defined data can be located here. 
            -> /stat/: (ODB records with MODE_WRITE) All the monitored values as well as program internal status. The update operation is done simultaneously when 
db_send_changed_records() is called within the mfe.cxx.
            -> /set/: (ODB records with MODE_READ) All the “Control” data used to configure the custom program and custom hardware.

    For our application, some of our detector equipment outputs a small amount of status and monitored data together with the event data, so we currently choose not to use EQ_SLOW 
and 3-layer drivers for the readout. Our solution is to create two ODB sub-trees in /Settings/, similar to what the device_driver does. However, this could introduce two 
problems:
    1) For /Settings/stat/: To avoid potentially destroying the hot-links of the /Variables/ and /Statistics/ sub-trees, all our status and monitored data are stored separately in 
the /Settings/stat/ sub-tree. Another consideration is that some monitored data are not directly from the raw data, so packaging them into the Bank for later display in /Variables/ 
could somehow lead to a complicated readout() function. However, this solution may be complicated for history logging. I have found that the ANALYZER modules can provide some 
processing for writing to the /Variables/ sub-tree, so I would like to know whether an analyzer should be used in this case.
    2) For /Settings/set/: The “control” data (similar to the “demand” data in EQ_SLOW equipment) are currently put in several /Settings/set/ sub-trees, where each key 
is hot-linked to a pre-defined hardware configuration function. However, some control operations are not related to a single ODB key or are related to several ODB keys (e.g. 
configuring the Ethernet sockets), so the dispatcher function has to be assigned to the whole sub-tree, which I think can slow down the response of the software. What we 
currently do is set up a dedicated “control key”, where different input values mean different operations (e.g. 1 means opening the socket, 2 means sending the UDP 
packets to the target hardware, etc.). This “control key” is also used to implement the buttons shown on the Status/Custom webpage. However, we would like to have your 
suggestions or better solutions on this, considering the stability and fast response of the control.

    We are not sure whether the above understanding of the Midas framework is correct, or whether our problems are just due to the limits of our knowledge of the framework, so we 
would really appreciate your help in making better use of Midas. Thank you so much!
1889   26 Apr 2020   Stefan Ritt   Forum   Questions and discussions on the Frontend ODB tree structure.
Dear Yu Chen,

in my opinion, you can follow two strategies:

1) Follow the EQ_SLOW example from the distribution, and write a device driver for the hardware you want to control. There are dozens of experiments worldwide which use that scheme and it works well for lots of 
different devices. By doing that, you automatically get things like a write cache (only write a value to the actual device if it really has changed) and a dead band (only write measured data to the ODB if it changes by more 
than a certain value, to suppress noise). Furthermore, you get events created automatically and history for all your measured variables. This scheme might look complicated at first glance, and it's quite old-
fashioned (no C++ yet), but it has proven stable over the last ~20 years or so. Maybe the biggest advantage of this system is that each device gets its own readout and control thread, so one slow device cannot 
block other devices. When this was introduced more than 10 years ago, we saw a big improvement in all experiments. (See the skeleton below.)

2) Do everything by yourself. This is the way you describe below. Of course you are free to do whatever you like, and you will find "special" solutions for all your problems. But then you move away from the "standard 
scheme", you lose all the benefits I described under 1), and of course you are on your own. If something does not work, it will be in your code and nobody from the community can help you.

So choose carefully between 1) and 2).
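For completeness, the skeleton of option 1) in a frontend looks roughly like this (modeled on the slow-control example shipped with midas; the dummy nulldev/null drivers and all numbers here are placeholders to be replaced by a driver for your Ethernet hardware, and details may differ slightly between midas versions):

#include "midas.h"
#include "class/hv.h"
#include "device/nulldev.h"
#include "bus/null.h"

/* one dummy device with 16 channels */
DEVICE_DRIVER hv_driver[] = {
   {"Dummy Device", nulldev, 16, null},
   {""}
};

EQUIPMENT equipment[] = {
   {"HV",                       /* equipment name */
    {3, 0,                      /* event ID, trigger mask */
     "SYSTEM",                  /* event buffer */
     EQ_SLOW,                   /* slow-control equipment */
     0,                         /* event source */
     "FIXED",                   /* format */
     TRUE,                      /* enabled */
     RO_RUNNING | RO_TRANSITIONS,
     60000,                     /* read every 60 seconds */
     0,                         /* event limit */
     0,                         /* number of sub-events */
     1,                         /* log history */
     "", "", ""},
    cd_hv_read,                 /* readout routine */
    cd_hv,                      /* class driver main routine */
    hv_driver,                  /* device driver list */
   },
   {""}
};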

Best regards,
Stefan

> Dear MIDAS developers and colleagues,
> 
>     This is Yu CHEN of School of Physics, Sun Yat-sen University, China, working in the PandaX-III collaboration, an experiment under development to search the neutrinoless double 
> beta decay. We are working on the DAQ and slow control systems and would like to use Midas framework to communicate with the custom hardware systems, generally via Ethernet 
> interfaces. So currently we are focusing on the development of the FRONTEND program of Midas and have some questions and discussions on its ODB structure. Since I’m still not 
> experienced in the framework, it would be precious that you can provide some suggestions on the topic.
>     The current structure of the frontend ODB tree we have designed, together with our understanding on them, is as follows:
>       /Equipment/<frontend_name>/
>         ->  /Common/: Basic controls and left unchanged.
>         ->  /Variables/: (ODB records with MODE_READ) Monitored values that are AUTOMATICALLY updated from the bank data within each packed event. It is done by the update_odb() 
> function in mfe.cxx.
>         ->  /Statistics/: (ODB records with MODE_WRITE) Default status values that are AUTOMATICALLY updated by the calling of db_send_changed_records() within the mfe.cxx.
>         ->  /Settings/: All the user defined data can be located here. 
>             -> /stat/: (ODB records with MODE_WRITE) All the monitored values as well as program internal status. The update operation is done simultaneously when 
> db_send_changed_records() is called within the mfe.cxx.
>             -> /set/: (ODB records with MODE_READ) All the “Control” data used to configure the custom program and custom hardware.
> 
>     For our application, some of the our detector equipment outputs small amount of status and monitored data together with the event data, so we currently choose not to use EQ_SLOW 
> and 3-layer drivers for the readout. Our solution is to create two ODB sub-trees in the /Settings/ similar to what the device_driver does. However, this could introduce two 
> troubles:
>     1) For /Settings/stat/: To prevent the potential destroy on the hot-links of /Variables/ and /Statistics/ sub-trees, all our status and monitored data are stored separately in 
> the /Settings/stat/ sub-tree. Another consideration is that some monitored data are not directly from the raw data, so packaging them into the Bank for later showing in /Variables/ 
> could somehow lead to a complicated readout() function. However, this solution may be complicated for history loggings. I have find that the ANALYZER modules could provide some 
> processes for writing to the /Variables/ sub-tree, so I would like to know whether an analyzer should be used in this case.
>     2) For /Settings/set/: The “control” data (similar to the “demand” data in the EQ_SLOW equipment) are currently put in several /Settings/set/ sub-trees where each key in 
> them is hot-linked to a pre-defined hardware configuration function. However, some control operations are not related to a certain ODB key or related to several ODB keys (e.g. 
> configuration the Ethernet sockets), so the dispatcher function should be assigned to the whole sub-tree, which I think can slow the response speed of the software. What we are 
> currently using is to setup a dedicated “control key”, and then the input of different value means different operations (e.g. 1 means socket opening, 2 means sending the UDP 
> packets to the target hardware, et al.). This “control key” is also used to develop the buttons to be shown on the Status/Custom webpage. However, we would like to have your 
> suggestions or better solutions on that, considering the stability and fast response of the control.
> 
>     We are not sure whether the above understanding and troubles on the Midas framework are correct or they are just due to our limits on the knowledge of the framework, so we 
> really appreciate your knowledge and help for a better using on Midas. Thank you so much!
579   18 May 2009   Exaos Lee   Suggestion   Question about using mvmestd.h
The "mvmestd.h" uses the following function to open a VME device:
int mvme_open(MVME_INTERFACE **vme, int idx)
I found that "driver/vme/sis3100/sis3100.c" implements it as:
   /* open VME */
   sprintf(str, "/dev/sis1100_%02dremote", idx);
   (*vme)->handle = open(str, O_RDWR, 0);
   if ((*vme)->handle < 0)
      return MVME_NO_INTERFACE;
   }

The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack "sis3100.c".
Is there a smarter way? :-)
580   18 May 2009   Stefan Ritt   Suggestion   Question about using mvmestd.h

Exaos Lee wrote:
The "mvmestd.h" uses the following function to open a VME device:
int mvme_open(MVME_INTERFACE **vme, int idx)
I found that the "driver/vme/sis3100/sis3100.c" uses the implementation as:
   /* open VME */
   sprintf(str, "/dev/sis1100_%02dremote", idx);
   (*vme)->handle = open(str, O_RDWR, 0);
   if ((*vme)->handle < 0)
      return MVME_NO_INTERFACE;
   }

The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the "sis3100.c".
Shall we have some smart way? Smile


In principle one could pass the device name to the user level. But I would like to keep the same code for Windows and Linux, and Windows does not need a device name. So you can either hack the file (I'm pretty sure it won't change in the next few years) or do what I do and make a symbolic link

/dev/sis1100/xxxx -> /dev/sis1100_00remote

Best regards,

Stefan
581   19 May 2009   Konstantin Olchanski   Suggestion   Question about using mvmestd.h
> The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the 
"sis3100.c".

As in the old joke, "Doctor, it hurts when I do *this*; Doctor answers: then don't do it!"

But I am curious why you want to change the "manufacturer-default" device names. For the vmivme.c and 
gefvme.c drivers that we use at TRIUMF, there is no obvious reason or gain from changing device names.

K.O.
582   20 May 2009   Exaos Lee   Suggestion   Question about using mvmestd.h
> > The problem is: I renamed my SIS1100 devices as /dev/sis1100/xxxxx. So I have to hack the 
> "sis3100.c".
> 
> As in the old joke, "Doctor, it hurts when I do *this*; Doctor answers: then don't do it!"
> 
> But I am curious why you want to change the "manufacturer-default" device names. For the vmivme.c and 
> gefvme.c drivers that we use at TRIUMF, there is no obvious reason or gain from changing device names.
> 
> K.O.

I used the old V2.04 driver for the SIS1100/SIS3100. The old package contains a script which creates devices
as /tmp/sis1100_XXXX. So I created another script and installed it into /etc/init.d/. That script can be
invoked using the standard rc.d tools. In order to keep the /dev directory tidy, it creates the device files
in just one directory, /dev/sis1100/. That's the story.

Now I have found that the new sis1100.ko of version 2.12 can create devices automatically as /dev/sis1100_xxxx.
So my script can be retired, and I no longer need to hack "sis3100.c".
1379   21 Jul 2018   Hiroaki Natori   Forum   Question about distributing event builder function on remote PC
Dear expert,

I'm going to develop the MIDAS DAQ for the COMET experiment.
I'm thinking of distributing the load of event building across different PCs.
I attach a schematic of one example of the design.
Please tell me how I can implement a kind of "sub-EventBuilder".
I'm reading the midas code to understand the scheme of MIDAS, but
it is a lot of code and I want to know which parts to focus on.
Can I do it by writing user functions based on either "mfe.c" or "mevb.c"?
Is a frontend program with multithreaded equipment the way to do it?
Or should I modify the original midas files?


Best regards,
Hiroaki Natori
Attachment 1: MIDAS_tmp.pdf
1380   23 Jul 2018   Konstantin Olchanski   Forum   Question about distributing event builder function on remote PC
> I'm going to develop MIDAS DAQ for COMET experiment.
> I'm thinking to distribute the load of event building to different PCs.
> I attach a schematics of one of the examples of the design.

Your schematic is reminiscent of the T2K/ND280 structure where the MIDAS DAQ
was split into several separate MIDAS instances (separate "experiments": the FGD, the TPC, 
the slow controls, etc).

They were joined together by the "cascade" equipment which provided a path
for the data events to flow from subsidiary midas instances to the main system (the one
with the final mlogger). It also provided a reverse path for run control, where starting
a run in the main experiment also started the run in all the subsidiary experiments.

This cascade frontend was never included in the midas distribution (an oversight),
but I still have the code for it somewhere.

How many "frontend PC" components do you envision? (10, 100, 1000?).

In T2K/ND280, each subsidiary experiment had its own ODB, which made sense
because e.g. the FGD and the TPC were quite different and were managed by different
groups.

But for you it probably makes sense to have one common ODB. This means a MIDAS
structure where ODB is located on the main computer ("event builder PC"),
all others connect to it via the mserver and midas rpc.

But you will need the MIDAS shared event buffers on each "frontend PC" to be local,
which means the bm_xxx() functions have to run locally instead of through the mserver rpc.
This is not how midas works right now, but it could be modified to do this.

On the other hand, you do not have to use midas to write the "frontend pc" code. Today's
C++ provides enough features - threads, locks, mutexes, shared memories, event queues,
etc. - so you can write the whole sub-event builder as one monolithic C++ program
and use midas only to send the data to the main event builder (plus midas rpc to handle
run control). In this scheme, technically, this "frontend pc" program would
be a multithreaded midas frontend.
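A bare-bones sketch of that kind of monolithic program (plain C++ threads and one shared queue, no midas calls; every name here is hypothetical):

// sketch: N reader threads push board fragments into a queue, one builder thread assembles them
#include <thread>
#include <mutex>
#include <condition_variable>
#include <queue>
#include <vector>
#include <cstdint>

struct Fragment { int board; uint64_t trigger; std::vector<char> data; };

std::queue<Fragment> q;
std::mutex m;
std::condition_variable cv;

void reader(int board)                   // one thread per readout board / socket
{
   while (true) {
      Fragment f{board, 0, {}};
      /* ... read one fragment from the board's Ethernet socket into f ... */
      std::lock_guard<std::mutex> lock(m);
      q.push(std::move(f));
      cv.notify_one();
   }
}

void builder()                           // assembles sub-events and ships them onwards
{
   while (true) {
      std::unique_lock<std::mutex> lock(m);
      cv.wait(lock, [] { return !q.empty(); });
      Fragment f = std::move(q.front());
      q.pop();
      lock.unlock();
      /* ... group fragments with the same trigger number into a sub-event and
             send it to the main event builder (e.g. as a midas event) ... */
   }
}

int main()
{
   std::vector<std::thread> readers;
   for (int b = 0; b < 10; b++)          // e.g. ~10 boards per frontend PC
      readers.emplace_back(reader, b);
   std::thread(builder).join();          // never returns in this sketch
}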

K.O.
1381   28 Jul 2018   Hiroaki Natori   Forum   Question about distributing event builder function on remote PC
Dear Mr. Olchanski

Thank you for your comment.
We expect the number of readout channels to be ~1000, boards ~100 and frontend PCs <10.
We expect that the trigger rate is a few kHz.

Writing monolithic C++ code may require a complete understanding of midas,
and I will think more about whether to write from scratch or modify the midas code.

Best regards
Hiroaki Natori

> > I'm going to develop MIDAS DAQ for COMET experiment.
> > I'm thinking to distribute the load of event building to different PCs.
> > I attach a schematics of one of the examples of the design.
> 
> Your schematic is reminiscent of the T2K/ND280 structure where the MIDAS DAQ
> was split into several separate MIDAS instances (separate "experiments": the FGD, the TPC, 
> the slow controls, etc).
> 
> They were joined together by the "cascade" equipment which provided a path
> for the data events to flow from subsidiary midas instances to the main system (the one
> with the final mlogger). It also provided a reverse path for run control, where starting
> a run in the main experiment also started the run in all the subsidiary experiments.
> 
> This cascade frontend was never included in the midas distribution (an oversight),
> but I still have the code for it somewhere.
> 
> How many "frontend PC" components do you envision? (10, 100, 1000?).
> 
> In T2K/ND280, each subsidiary experiment had it's own ODB which made sense
> because e.g. the FGD and the TPC were quite different and were managed by different
> groups.
> 
> But for you it probably makes sense to have one common ODB. This means a MIDAS
> structure where ODB is located on the main computer ("event builder PC"),
> all others connect to it via the mserver and midas rpc.
> 
> But you will need to have the MIDAS shared event buffers on each "frontend PC" to be local,
> which means the bm_xxx() functions have run locally instead of throuhg the mserver rpc.
> This is not how midas works right now, but it could be modified to do this.
> 
> On the other hand, you do not have to use midas to write the "frontend pc" code. Today's
> C++ provides enough features - threads, locks, mutexes, shared memories, event queues,
> etc so you can write the whole sub-event builder as one monolithic c++ program
> and use midas only to send the data to the main event builder. (plus midas rpc to handle
> run control). In this scheme, technically, this "frontend pc" program would
> be a multithreaded midas frontend.
> 
> K.O.