    Reply  22 Jun 2004, Konstantin Olchanski, , How to compile under Darwin-gcc? (MacOS X) 
The current (cvs) version of MIDAS should build on Mac OS X right out of the
box: I fixed all the problems you reported back in February(?); see the macosx
thread in this forum. A few weeks ago I verified that it still compiles on Mac
OS 10.3.4. The Mac OS port received minimal testing: I checked that "odbedit"
and "mhttpd" run, that's about it. K.O.

> I added the following to the makefile, trying to treat Darwin as FreeBSD/Linux,
> but I failed.
> ============ 
> #-----------------------
> # This is for MacOS X
> #
> ifeq ($(OSTYPE), Darwin)
> CC = gcc
> OS_DIR = Darwin
> OSFLAGS = -DOS_DARWIN -DOS_LINUX
> LIBS = -lbsd -lcompat
> SPECIFIC_OS_PRG =
> endif
> ============
> 
> I got the following errors:
> =============
> gcc -c -g -O2 -Wall -Iinclude -Idrivers -LDarwin/lib -DINCLUDE_FTPLIB  
> -DOS_DARWIN -DOS_FREEBSD -o Darwin/lib/midas.o src/midas.c
> In file included from include/midasinc.h:45,
>                  from include/msystem.h:114,
>                  from src/midas.c:623:
> /usr/include/string.h:112: error: conflicting types for `strlcat'
> include/midas.h:1701: error: previous declaration of `strlcat'
> /usr/include/string.h:113: error: conflicting types for `strlcpy'
> include/midas.h:1700: error: previous declaration of `strlcpy'
> In file included from include/msystem.h:114,
>                  from src/midas.c:623:
> include/midasinc.h:161:21: sys/vfs.h: No such file or directory
> include/midasinc.h:164:17: pty.h: No such file or directory
> src/midas.c:780: error: conflicting types for `dbg_malloc'
> include/midas.h:1478: error: previous declaration of `dbg_malloc'
> src/midas.c:817: error: conflicting types for `dbg_calloc'
> include/midas.h:1479: error: previous declaration of `dbg_calloc'
> src/midas.c:858: error: conflicting types for `strlcpy'
> /usr/include/string.h:113: error: previous declaration of `strlcpy'
> src/midas.c:892: error: conflicting types for `strlcat'
> /usr/include/string.h:112: error: previous declaration of `strlcat'
> gmake: *** [Darwin/lib/midas.o] Error 1
> ==========
> 
> Could anyone give me some hints? Thanks!
    Reply  23 Jun 2004, Exaos Lee, , How to compile under Darwin-gcc? (MacOS X) 
> The current (cvs) version of MIDAS should build on Mac OS X right out of the
> box: I fixed all the problems you reported back in February(?); see the macosx
> thread in this forum. A few weeks ago I verified that it still compiles on Mac
> OS 10.3.4. The Mac OS port received minimal testing: I checked that "odbedit"
> and "mhttpd" run, that's about it. K.O.
> 

Thanks a lot. But I cannot check out the module:
------------
01:52:16: pc2075.psi.ch: Operation timed out
01:52:16: cvs [checkout aborted]: end of file from server (consult above
messages if any)
------------

Could anybody add a tar-package download to the WWW interface of the CVS repository?
I know the original CGI script has such a feature. Thanks.

P.S.
I use these commands to checkout:
   cvs -e ssh -d :ext:cvs@midas.psi.ch:/usr/local/cvsroot checkout midas
   cvs -e ssh -d :ext:cvs@midas.psi.ch:/usr/local/cvsroot update
    Reply  23 Jun 2004, Stefan Ritt, , How to compile under Darwin-gcc? (MacOS X) 
> Thanks a lot. But I cannot check out the module:
> ------------
> 01:52:16: pc2075.psi.ch: Operation timed out
> 01:52:16: cvs [checkout aborted]: end of file from server (consult above
> messages if any)
> ------------

Should work fine, just tried from outside PSI. Please check again.

> Could anybody add a tar-package download to the WWW interface of the CVS repository?
> I know the original CGI script has such a feature. Thanks.

The tar package is only made for a new release (which will happen in the next few days,
BTW), so http://midas.psi.ch/download/tar/ contains the most recent packages. Upon
request I can make a midas-snapshot.tar.gz, but since there will be a 1.9.4 soon, it's
probably not necessary right now.
    Reply  23 Jun 2004, Exaos Lee, , How to compile under Darwin-gcc? (MacOS X) 
> 
> Should work fine, just tried from outside PSI. Please check again.

Unfortunately, I still encounter the same problem. 
---
pc2075.psi.ch: Operation timed out
cvs [checkout aborted]: end of file from server (consult above messages if any)
---

I am in LNS-INFN (Italy), i.e., I am outside PSI. So ... what's the problem? I tried to
ping the host, and it is reachable:
--------
[exaos@exaos cvsnew]$ ping midas.psi.ch
PING pc2075.psi.ch (129.129.228.23): 56 data bytes
64 bytes from 129.129.228.23: icmp_seq=0 ttl=50 time=67.237 ms
64 bytes from 129.129.228.23: icmp_seq=1 ttl=50 time=64.202 ms
64 bytes from 129.129.228.23: icmp_seq=2 ttl=50 time=56.278 ms
...
--------
Is it a firewall problem? I am not sure. So strange.

> 
> The tar package is only made for a new release (which will happen in the next few days,
> BTW), so http://midas.psi.ch/download/tar/ contains the most recent packages. Upon
> request I can make a midas-snapshot.tar.gz, but since there will be a 1.9.4 soon, it's
> probably not necessary right now.

Waiting for the new release ...
Entry  30 Mar 2011, Exaos Lee, Forum, How large does "bank32" support? 
Reading an FADC buffer often needs a large buffer, especially when several
FADCs work together. I want to know how much data a bank32 can hold.
    Reply  30 Mar 2011, Stefan Ritt, Forum, How large does "bank32" support? 
> Reading an FADC buffer often needs a large buffer, especially when several
> FADCs work together. I want to know how much data a bank32 can hold.

The "32" in bank32 means 32-bits, so the bank holds 2^32=4 GBytes, hope that is enough for your ADC. 
The "normal" bank has only a 16-bit header, so it can hold only 64 kBytes. But for small banks, the overhead 
is therefore smaller.
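
To use it from a frontend, initialize the event with bk_init32() instead of bk_init(). A
minimal sketch, assuming the standard MIDAS bank API; the "FADC" bank name and the
read_fadc_sample() call are made up:
------------
/* Sketch of a readout routine using a 32-bit bank. */
INT read_fadc_event(char *pevent, INT off)
{
   DWORD *pdata;

   bk_init32(pevent);             /* 32-bit bank headers, instead of bk_init() */
   bk_create(pevent, "FADC", TID_DWORD, (void **)&pdata);

   for (int i = 0; i < 1024; i++) /* copy samples from the (hypothetical) FADC */
      *pdata++ = read_fadc_sample(i);

   bk_close(pevent, pdata);
   return bk_size(pevent);        /* total event size in bytes */
}
------------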
    Reply  15 Apr 2011, Konstantin Olchanski, Forum, How large does "bank32" support? 
> Reading an FADC buffer often needs a large buffer, especially when several
> FADCs work together. I want to know how much data a bank32 can hold.

Limitations in order:

- bank32 size is limited to a 32-bit integer size (about 4 GBytes)
- bank size is limited by event size
- event size in a midas mfe.c based frontend is limited to the value of
max_event_size set by the user
- maximum event size that can go through the MIDAS event buffer system is limited
to ODB value /Experiment/MAX_EVENT_SIZE (MAX_EVENT_SIZE in midas.h does not do
anything now)
- maximum event size is limited to *half* the size of the SYSTEM shared memory
event buffer (or any other buffers that the event has to go through)
- default size of the SYSTEM buffer is 8 Mbytes set by ODB /Experiment/Buffer
sizes/SYSTEM. This limits maximum event size to about 4 Mbytes.
- size of SYSTEM buffer can be increased to arbitrary size, but in practice no
bigger than the amount of computer physical memory minus space needed for running
the frontend program and the mlogger, which also allocate buffer space to hold 1
event of maximum size.

So for a computer with 8 Gbytes of RAM, you can make the SYSTEM buffer size 4
GBytes, set ODB MAX_EVENT_SIZE to 1 Gbyte, and in theory, you should be able to
write 1 Gbyte events from your frontend to disk.
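
On the frontend side, these mfe.c limits are set through the usual frontend globals; a
sketch with illustrative numbers only:
------------
/* In the mfe.c-based frontend; values are examples, not recommendations. */
INT max_event_size      =  4 * 1024 * 1024;  /* largest single event, in bytes   */
INT max_event_size_frag =  5 * 1024 * 1024;  /* for EQ_FRAGMENTED events         */
INT event_buffer_size   = 40 * 1024 * 1024;  /* frontend ring buffer, ~10 events */
------------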

In practice, I think the biggest events I have seen go through a MIDAS system are
non-compressed waveforms in the T2K/ND280 FGD and TPC detectors, about 4 Mbytes of
data per event.

Other considerations (rules of thumb):

1) the SYSTEM event buffer should be big enough to hold 10-100 events.
2) the SYSTEM event buffer should be big enough to hold about 5-10 seconds of data
- i.e. if your event size is 1 Mbyte and data rate is 1 Hz, 10 seconds of data
will be 1Mbyte*1Hz*10sec = 10 Mbytes.

This is because the SYSTEM buffer decouples the real-time activity of the frontend
program from non-real-time activity of writing data to storage devices.

K.O.
Entry  22 Dec 2005, Konstantin Olchanski, Info, How do I do custom event building? 
It turns out that the standard event builder fragment matching algorithm cannot
be used in my TPC application. I have two TPC-USB interfaces, which lack any
"busy" or synchronization logic. I send the hardware trigger into both
interfaces, and if one of them misses it, the data is out of sync forever. Consider:

Hardware
trigger    trig1     trig2    trig3    trig4
TPC01      serial1   serial2  serial3  serial4
TPC02      serial1  (missing) serial2  serial3

With the event builder matching only the event serial numbers, the first event
will be okay, but the second event will have trig2 data from TPC01 and trig3
data from TPC02, etc.

The problem exists even if the TPC-USB interfaces do not miss any triggers:
during begin and end of run, the interfaces are enabled one at a time, so if a
trigger arrives after the first interface was enabled, but before the second is
enabled, the data starts being out of sync (and if the same happens during the
end-of-run, the event counts from both frontends will match, but all data would
*still* be out of sync).

Obviously additional data is needed to match the fragments.

So in each frontend, I have a high-precision timestamp (gettimeofday(), usec
resolution) and I would like to have the event builder match the timestamps
instead of event serial numbers. What is the best way to do this? The mevb.c
code does not have any user callbacks for checking "do these fragments belong to
the same event?".

P.S. The event rate will be about 1/sec from cosmic ray tests and at most
10-50/sec in the M11 beam line at TRIUMF, at these low rates, the gettimeofday()
timestamps should be adequate.

K.O.
    Reply  23 Dec 2005, Stefan Ritt, Info, How do I do custom event building? 
> It turns out that the standard event builder fragment matching algorithm cannot
> be used in my TPC application. I have two TPC-USB interfaces, which lack any
> "busy" or synchronization logic. I send the hardware trigger into both
> interfaces, and if one of them misses it, the data is out of sync forever. Consider:
> 
> Hardware
> trigger    trig1     trig2    trig3    trig4
> TPC01      serial1   serial2  serial3  serial4
> TPC02      serial1  (missing) serial2  serial3
> 
> With the event builder matching only the event serial numbers, the first event
> will be okay, but the second event will have trig2 data from TPC01 and trig3
> data from TPC02, etc.

Well, I would say: this is a very poor design of an experiment. Before curing the
problems in software, I first would consider a redesign of the data readout scheme with
a global hardware trigger and a hardware busy.

> So in each frontend, I have a high-precision timestamp (gettimeofday(), usec
> resolution) and I would like to have the event builder match the timestamps
> instead of event serial numbers.

What do you do if the frontend clock drifts away? I have seen drifts of up to 10 sec/day
on some PCs, so your required accuracy of 1/50 s would be violated after 3 minutes. You
would have to synchronize your clocks constantly. If your synchronization algorithm
determines a clock is out of sync and adjusts it, and the delta t is more than 1/50 sec,
you are screwed.

So all together I conclude that this proposed synchronization scheme is pretty dangerous
and could ruin the whole experiment.

> What is the best way to do this? The mevb.c
> code does not have any user callbacks for checking "do these fragments belong to
> the same event?".

Pierre can answer that.

- Stefan
    Reply  03 Jan 2006, John O'Donnell, Info, How do I do custom event building? 
At DANCE we have a similar issue.  We are still doing "software
handshaking" between multiple frontends (15 which read data, and a 16th
with direct access to the trigger logic), and we apply a time stamp
using gettimeofday().  We use the regular mevb, sorting on serial number.

In the analyzer (MIDAS or ROME) we then keep a big circular buffer of
event fragments, which are rebuilt into new events based on the time stamp
obtained from gettimeofday().  We keep the system clocks synchronized
(often to within about 1 ms) using ntp (we need to average over several
ntp servers to avoid issues with network noise).  ntp can take a while
to stabilize, so we never reboot our computers... (well, almost never).
We have a slow control frontend which monitors the ntp time offsets and
puts them in the history system for easy visualization.
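
The matching test itself is simple; here is a sketch of the idea (the 10 ms window is
illustrative: it has to be larger than the residual clock offsets but much smaller than
the typical trigger spacing):
------------
#include <math.h>

/* Two fragments belong to the same event if their gettimeofday()
   timestamps (in seconds) agree within the window. */
#define MATCH_WINDOW_SEC 0.010

int fragments_match(double tstamp_a, double tstamp_b)
{
   return fabs(tstamp_a - tstamp_b) < MATCH_WINDOW_SEC;
}
------------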

Occasionally we seem to get into a mess, but somehow this fixes itself on
the next run, so it has been a usable system.  Maybe one day we will
get hardware handshaking between the frontend computers and the trigger
logic, but in the meantime we are taking data.

John.
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Hostile network scans against MIDAS RPC ports 
At CERN I see a large number of hostile network scans that seem to be injecting HTTP requests into the 
MIDAS RPC ports. So far, all these requests seem to be successfully rejected without crashing anything, but 
they do clog up midas.log.

The main problem here is that all MIDAS programs have at least one TCP socket open on which they listen for 
RPC commands, such as "start of run", "please shut down", etc. The port numbers of these sockets are 
randomized, which makes them difficult to protect with firewall rules (firewall rules prefer fixed port 
numbers).

Note that this is different from the hostile network scans that I first saw maybe 5 years ago, which 
affected the mserver main listener socket. Then, as a solution, I hardened the RPC receiver code against 
bad data (and I am happy to see that this hardening is still holding up) and implemented the mserver "-A" 
command switch to specify a list of permitted peers. Also, mserver uses a fixed port number ("-p" switch) 
and is easy to protect with firewall rules.

Since these ports cannot be protected by OS means (firewall, etc), we have to protect them in MIDAS.

One solution is to reject all connections from unauthorized peers.

One way to do this is to implement the "-A" switch to explicitly list all permitted peers; this switch 
would have to be added to all long-running midas programs (mhttpd, mlogger, mfe.c, etc). Not very 
practical, IMO.

Another way is to read the list of permitted peers from ODB, at startup time, or each time a new connection 
is made.

In the latter case, care needs to be taken to avoid deadlocks. For example remote programs that read ODB 
through the mserver may deadlock if the same mserver is the one trying to establish the RPC connection. 
Or if ODB is somehow locked.

NB - we already keep a list of permitted peers in ODB /Experiment/Security.
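
The check itself would be trivial; a sketch with hypothetical names (this is not existing
MIDAS code):
------------
#include <string.h>

/* Accept an RPC connection only if the peer host name appears in a
   list read from ODB (e.g. under /Experiment/Security). */
int peer_is_allowed(const char *peer, char allowed[][256], int n)
{
   for (int i = 0; i < n; i++)
      if (strcmp(peer, allowed[i]) == 0)
         return 1;  /* accept */
   return 0;        /* reject and complain to midas.log */
}
------------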

K.O.
    Reply  14 Oct 2014, Stefan Ritt, Bug Report, Hostile network scans against MIDAS RPC ports 
Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.

At PSI we fortunately do not have these network scans because PSI uses an institute-wide firewall. So you can connect from outside PSI to inside PSI only 
on certain well-defined ports (like SSH to certain machines). You can do the same in Alpha. Use one computer as a router with two network cards, where 
the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be 
accessed from outside, like port 8080 for mhttpd. This way you block everything except the things which are needed.

/Stefan
    Reply  16 Oct 2014, Konstantin Olchanski, Bug Report, Hostile network scans against MIDAS RPC ports 
> Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.
> 
> At PSI we fortunately do not have these network scans because PSI uses an institute-wide firewall.
>

Same here at TRIUMF, no problems with hostile network activity. We only see this trouble at CERN. Nominally CERN also has
everything behind the CERN firewall; that is why I tend to think I am seeing network scans done by CERN security people,
or by some badniks on the CERN local network (PC malware, etc).

> So you can connect from outside PSI to inside PSI only 
> on certain well-defined ports (like SSH to certain machines). You can do the same in Alpha. Use one computer as a router with two network cards, where 
> the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be 
> accessed from outside, like port 8080 for mhttpd. This way you block everything except the things which are needed.

Yes, this is how we did it for DEAP at SNOLAB. No network trouble there.

But generically for MIDAS, I think we should have a built-in capability for MIDAS to protect itself without reliance on OS-level means (local firewall)
or network-level means ("site firewalls").

Sometimes we have very small MIDAS installations, i.e. just one machine by itself, and such setups should be easy to secure:
too much work to set up an external firewall box just for one machine, and OS-level firewall rules sometimes conflict
with some OS services (i.e. NIS) (I am still waiting for the "NIS to LDAP migration for dummies" guide).

K.O.
    Reply  16 Oct 2014, Stefan Ritt, Bug Report, Hostile network scans against MIDAS RPC ports 
> Sometimes we have very small MIDAS installations, i.e. just one machine by itself, and such setups should be easy to secure:
> too much work to set up an external firewall box just for one machine, and OS-level firewall rules sometimes conflict
> with some OS services (i.e. NIS) (I am still waiting for the "NIS to LDAP migration for dummies" guide).

I fully agree with you. So if you find time to implement this, I will be more than happy.

/Stefan
Entry  26 Jul 2019, Nik Berger, Bug Report, History/Endianness 
Hi,
I have a bank of floats with slow control values that I store to the history and
ODB. When reading the history, both in the webbrowser and with mhist, the floats
get read with the wrong endianness; under /equipment/variables in the ODB they
however display correctly. System is a an intel OpenSuse linux box. Any ideas?
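
To illustrate the symptom, here is a minimal standalone sketch (not MIDAS code) of what
reading a float with swapped bytes produces:
------------
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Byte-swapping an IEEE-754 float, as a wrong-endian reader would,
   turns a sensible value into garbage. */
int main(void)
{
   float v = 23.5f;            /* hypothetical slow control reading */
   uint8_t b[4], s[4];
   memcpy(b, &v, sizeof(v));
   for (int i = 0; i < 4; i++)
      s[i] = b[3 - i];         /* reverse the byte order */
   float w;
   memcpy(&w, s, sizeof(w));
   printf("stored %g, wrong-endian read gives %g\n", v, w);
   return 0;
}
------------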

Thanks

Nik
Entry  17 Jan 2024, Francesco Renga, Forum, History tags 
Dear experts,
         I would like to have some clarification about the meaning and use of the 
tags in the ODB under /History/Tags.

I noticed that, if a history plot is created, but the name of the corresponding 
variable is changed later and the plot is modified accordingly, the old name 
persists in the /History/Tags list along with the new one. So, it appears in the 
list of variables when a new history plot is created.

It does not seem to compromise the functionality of the history system, but it is 
prone to create confusion.

Is it the expected behavior? What is the correct procedure to follow if the name 
of a variable has to be changed?

Thank you,
     Francesco
    Reply  18 Jan 2024, Stefan Ritt, Forum, History tags 
This part of the system has been designed by KO, so he should reply here.

Stefan
    Reply  28 Jan 2024, Konstantin Olchanski, Forum, History tags 
> This part of the system has been designed by KO, so he should reply here.

That's right. Some of this stuff is historical gibberish that is no longer needed 
for FILE and SQL histories.

/History/Events is needed to create a persistent mapping between history event names 
and history event ids (at some point the history event id was the same as the equipment 
event id, with the obvious problems when equipment event ids are duplicated, reused, 
renamed, or deleted).

/History/Tags was used by the history editor to speed up "give me all tag names 
for this history event name". With the "MIDAS" history storage this required 
reading a lot of data from disk. With the "FILE" history and cached ZFS SSD, disk 
access is much cheaper and caching history event names and tags in odb is no 
longer necessary.

/History/Tags should probably be removed (but check that nobody uses it first).

/History/Events has to remain as long as "MIDAS" history storage is still used.

K.O.
Entry  28 May 2021, Joseph McKenna, Bug Report, History plots deceiving users into thinking data is still logging flatline.png
I have been trying to fix this myself, but my JavaScript isn't strong...

The 'new' history plot render fills in missing data with the last ODB value, even when
that value is very old; elog:2180/1 shows this. The data logging stopped, but the history
plot can fool users into thinking data is still logging (the export button also generates
CSVs with entries every 10 seconds). Grepping through the history files behind the scenes,
I found only one match for an example variable from this plot, so it looks like there are
no entries after March 24th (although I may be mistaken; I have not studied the history
file data structure in detail), i.e. this is an artifact of mhistory.js rather than of
mlogger... Have I missed something simple?

Would it be possible to not draw the line if there are no data points for a significant
time? Or maybe to render a dashed line that doesn't export to CSV? Thanks in advance.

Edit: I see certificate errors on this forum and I think they are preventing my image
upload; inlining it into the text here:
    Reply  28 May 2021, Stefan Ritt, Bug Report, History plots deceiving users into thinking data is still logging 

This is a known problem and I'm working on it. See the discussion at: 

https://bitbucket.org/tmidas/midas/issues/305/log_history_periodic-doesnt-account-for

Stefan
