ID | Date | Author | Topic | Subject |
1629 | 23 Jul 2019 | Stefan Ritt | Forum | How to convert C midas frontends to C++ |
Have you left any "extern C" in your frontend program or in any of the included header files? It seems
like the linker cannot find poll_event in your frontend code. If it is there, but compiled
with C linkage (instead of C++), the name mangling makes it invisible to the linker. That
usually happens if there is somewhere
extern "C" {
INT poll_event();
...
}
while it is NOT declared as extern "C" in mfe.h, which is used by mfe.cxx.
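For illustration, a minimal sketch of a poll_event() definition with plain C++ linkage; the exact prototype should be checked against your copy of mfe.h, and the body here is only a placeholder:

#include "midas.h"
#include "mfe.h"

// Defined without any extern "C" bracket, so the mangled name matches
// the declaration in mfe.h and the linker can resolve the call from mfe.cxx:
INT poll_event(INT source, INT count, BOOL test)
{
   // ... poll the hardware here ...
   return 0;
}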
Stefan
> > Did you include mfe.h as written in elog:1526 ?
> >
> > Stefan
>
>
> Yes I did
>
> this is my include
>
> #include <stdio.h>
> #include <string.h>
> #include <assert.h>
> #include <math.h>
> #include <pthread.h>
>
>
> #include "midas.h"
> #include "mscb.h"
> #include "multi.h"
> #include "mscbdev.h"
> #include "mfe.h"
>
> (I attach my dummy fe)
>
> What confuses me is that I can compile examples/experiment/ if I copy that to a
> fresh dir.
>
> I also copied the CMakeLists from the example:
>
> #
> # cmake for the muX software
> #
> cmake_minimum_required(VERSION 3.3)
>
> project(muX)
>
> #
> # find midas installation, from CMakeLists in examples/experiment
> #
> set(MIDAS_DIR $ENV{MIDASSYS})
> message("MIDAS dir: " ${MIDAS_DIR})
> if (NOT EXISTS $ENV{MIDASSYS})
> message(FATAL_ERROR "Environment variable $MIDASSYS not defined, aborting.")
> endif()
>
> set(INC_PATH ${MIDAS_DIR}/include ${MIDAS_DIR}/mxml ${MIDAS_DIR}/mscb/include
> ${MIDAS_DIR}/drivers/class ${MIDAS_DIR}/drivers/device)
> link_directories($ENV{MIDASSYS}/lib)
>
> # enable certain compile warnings
> add_compile_options(-Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function)
>
> set(LIBS -lpthread -lutil -lrt)
>
>
> add_executable(sc_fe_mini sc_fe_mini.cpp)
> target_include_directories(sc_fe_mini PRIVATE ${INC_PATH})
> target_link_libraries(sc_fe_mini mfe midas ${LIBS}) |
1630 | 23 Jul 2019 | Lukas Gerritzen | Forum | How to convert C midas frontends to C++ |
Can you post the exact command that cmake executes to link sc_fe_mini (with make VERBOSE=1)?
I have noticed similar linking problems that depended on the link order. In my case, it
compiled when "-lpthread -lutil -lrt" came at the end of the command, but not when they came before mfe.o and
libmidas.a. Unfortunately I haven't found a way to tell cmake the "correct" order of the link
libraries.
Maybe this can be fixed by adding midas as a subdirectory in your cmake project and just linking
against the "mfe" target instead of libmfe.a. |
1659 | 09 Aug 2019 | Konstantin Olchanski | Forum | How to convert C midas frontends to C++ |
> How do I solve mismatched declarations in the mfe (or other places in the midas code)?
I run into such problems all the time. My solution? I grep for the function name in my code and in the header file,
then look very carefully at the definition to confirm that all the argument declarations are the same in both
places. Sometimes my eyes do not see the difference and I ask for a "second pair of eyes".
In your case, you have a mismatch between functions in mfe.h and in your frontend. The difference
is "int source" in mfe.h and "int source[]" in your code.
Because C++ permits functions with identical names but different arguments, the compiler thinks
you did this on purpose and does not complain. Later, of course, the linker bombs,
but all it can report at this stage is what you see: "function not found"... Then you grep your code
for the missing function, check arguments, rinse, repeat.
Before C++, the C compiler would probably have complained about the mismatch, except that MIDAS
did not have an mfe.h header file with declarations for all this stuff until just now, so again, the mismatch would
have gone unnoticed and unfixed.
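A hedged sketch of what such a mismatch looks like; the mfe.h prototype shown is the one implied by the linker errors quoted below, so check it against your copy:

#include "midas.h"
#include "mfe.h"   // declares INT poll_event(INT source, INT count, BOOL test);

// Old C-style definition left over in the frontend:
INT poll_event(INT source[], INT count, BOOL test)   // "int source[]" instead of "int source"
{
   return 0;
}
// In C++ this is a *different*, overloaded function, so the compiler is happy,
// but mfe.cxx calls poll_event(INT, INT, BOOL) and the linker reports
// "undefined reference to `poll_event(int, int, unsigned int)'".
// The fix is to make the definition match mfe.h exactly:
//    INT poll_event(INT source, INT count, BOOL test) { ... }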
K.O.
> It is having issues with the midas defined BOOL/... types. This
> is what I get for a minimal scfe:
>
> [ 12%] Building CXX object CMakeFiles/sc_fe_mini.dir/sc_fe_mini.cpp.o
> [ 25%] Building CXX object CMakeFiles/sc_fe_mini.dir/home/frederik/packages/midas/drivers/class/hv.cxx.o
> [ 37%] Building CXX object CMakeFiles/sc_fe_mini.dir/home/frederik/packages/midas/drivers/class/multi.cxx.o
> [ 50%] Building CXX object CMakeFiles/sc_fe_mini.dir/home/frederik/packages/midas/drivers/device/nulldev.cxx.o
> [ 62%] Building CXX object CMakeFiles/sc_fe_mini.dir/home/frederik/packages/midas/drivers/bus/null.cxx.o
> [ 75%] Building CXX object CMakeFiles/sc_fe_mini.dir/home/frederik/packages/midas/drivers/device/mscbdev.cxx.o
> [ 87%] Building CXX object CMakeFiles/sc_fe_mini.dir/home/frederik/packages/midas/mscb/src/mscb.cxx.o
> [100%] Linking CXX executable sc_fe_mini
> /home/frederik/packages/midas/build/libmfe.a(mfe.cxx.o): In function `_readout_thread':
> /home/frederik/packages/midas/src/mfe.cxx:1271: undefined reference to `poll_event(int, int, unsigned int)'
> /home/frederik/packages/midas/build/libmfe.a(mfe.cxx.o): In function `check_polled_events':
> /home/frederik/packages/midas/src/mfe.cxx:1601: undefined reference to `poll_event(int, int, unsigned int)'
> /home/frederik/packages/midas/src/mfe.cxx:1643: undefined reference to `poll_event(int, int, unsigned int)'
> /home/frederik/packages/midas/build/libmfe.a(mfe.cxx.o): In function `readout_enable(unsigned int)':
> /home/frederik/packages/midas/src/mfe.cxx:1158: undefined reference to `interrupt_configure(int, int, long)'
> /home/frederik/packages/midas/src/mfe.cxx:1156: undefined reference to `interrupt_configure(int, int, long)'
> /home/frederik/packages/midas/build/libmfe.a(mfe.cxx.o): In function `initialize_equipment':
> /home/frederik/packages/midas/src/mfe.cxx:614: undefined reference to `interrupt_configure(int, int, long)'
> /home/frederik/packages/midas/src/mfe.cxx:649: undefined reference to `poll_event(int, int, unsigned int)'
> /home/frederik/packages/midas/build/libmfe.a(mfe.cxx.o): In function `scheduler':
> /home/frederik/packages/midas/src/mfe.cxx:1890: undefined reference to `poll_event(int, int, unsigned int)'
> /home/frederik/packages/midas/src/mfe.cxx:1932: undefined reference to `poll_event(int, int, unsigned int)'
> /home/frederik/packages/midas/build/libmfe.a(mfe.cxx.o): In function `main':
> /home/frederik/packages/midas/src/mfe.cxx:2701: undefined reference to `interrupt_configure(int, int, long)'
> /home/frederik/packages/midas/src/mfe.cxx:2702: undefined reference to `interrupt_configure(int, int, long)'
> collect2: error: ld returned 1 exit status
> make[2]: *** [sc_fe_mini] Error 1
> make[1]: *** [CMakeFiles/sc_fe_mini.dir/all] Error 2
> make: *** [all] Error 2
>
>
> This is my cmakelists for my user code:
>
> #
> # cmake for the muX software
> #
> cmake_minimum_required(VERSION 3.3)
>
> project(muX)
>
> #
> # find installations
> #
> set(MIDAS_DIR $ENV{MIDASSYS})
> message("MIDAS dir: " ${MIDAS_DIR})
>
> #
> # set directories
> #
> set(MIDASBUILD_DIR ${MIDAS_DIR}/build)
> set(MIDASINCLUDE_DIR ${MIDAS_DIR}/include)
> set(MXML_DIR ${MIDAS_DIR}/mxml)
> set(MSCB_DIR ${MIDAS_DIR}/mscb)
> set(DRV_DIR ${MIDAS_DIR}/drivers)
>
>
> #
> # drivers, libs
> #
> set(DRIVERS
> ${MIDAS_DIR}/drivers/class/hv
> ${MIDAS_DIR}/drivers/class/multi
> ${MIDAS_DIR}/drivers/device/nulldev
> ${MIDAS_DIR}/drivers/bus/null
> )
> set(MIDASLIB ${MIDASBUILD_DIR}/libmidas.a)
> set(FELIB ${MIDASBUILD_DIR}/libmfe.a)
>
> #
> # sc_fe
> #
> add_executable(sc_fe_mini
> sc_fe_mini.cpp
> ${DRIVERS}
> ${MIDAS_DIR}/drivers/device/mscbdev
> ${MIDAS_DIR}/mscb/src/mscb)
>
> target_include_directories(sc_fe_mini PRIVATE ${DRV_DIR} ${MIDAS_DIR}/mscb/include ${MIDAS_DIR}/include)
> target_link_libraries(sc_fe_mini ${LIBS} ${MIDASLIB} ${FELIB} rt pthread util)
>
>
>
> I seem to be able to compile the current midas distributions, including the scfe frontend
>
>
>
> > To convert a MIDAS frontend to C++ follow this checklist:
> >
> > a) add #include "mfe.h" after include of midas.h and fix all compilation errors.
> >
> > NOTE: there should be no "extern C" brackets around MIDAS include files.
> >
> > NOTE: Expect to see following problems:
> >
> > a1) duplicate or mismatched declarations of functions defined in mfe.h
> > a2) frontend_name and frontend_file_name should be "const char*" instead of "char*"
> > a3) duplicate "HNDLE hDB" collision with hDB from mfe.c - not sure why it worked before, either use HNDLE hDB from mfe.h or use "extern HNDLE hDB".
> > a4) poll_event() and interrupt_configure() have "source" as "int[]" instead of "int" (why did this work before?)
> > a5) use of "extern int frontend_index" instead of get_frontend_index() from mfe.h
> > a6) bk_create() last argument needs to be cast to (void**)
> > a7) "bool debug" collides with "debug" from mfe.h (why did this work before?)
> >
> > b) remove no longer needed "extern C" brackets around mfe related code. Ideally there should be no "extern C" brackets anywhere.
> >
> > c) in the Makefile, change CC=gcc to CC=g++ for compiling and linking everything as C++
> >
> > c1) fix all compilation problems. most valid C code will compile as valid C++, but there is some known trouble:
> > - return value of malloc() & co needs to be cast to the correct data type: "char* s = (char*)malloc(...)"
> > - some C++ compilers complain about mismatch between signed and unsigned values
> >
> > If you need help with converting your frontend from C to C++, I will be most happy
> > to assist you - post your compiler error messages to this forum or email them to me privately.
> >
> > Good luck,
> > K.O. |
51 | 22 Jun 2004 | Exaos Lee | | How to compile under Darwin-gcc? (MacOS X) |
I added the following to the makefile to try to treat Darwin as FreeBSD/Linux.
But I failed.
============
#-----------------------
# This is for MacOS X
#
ifeq ($(OSTYPE), Darwin)
CC = gcc
OS_DIR = Darwin
OSFLAGS = -DOS_DARWIN -DOS_LINUX
LIBS = -lbsd -lcompat
SPECIFIC_OS_PRG =
endif
============
I got the following errors:
=============
gcc -c -g -O2 -Wall -Iinclude -Idrivers -LDarwin/lib -DINCLUDE_FTPLIB
-DOS_DARWIN -DOS_FREEBSD -o Darwin/lib/midas.o src/midas.c
In file included from include/midasinc.h:45,
from include/msystem.h:114,
from src/midas.c:623:
/usr/include/string.h:112: error: conflicting types for `strlcat'
include/midas.h:1701: error: previous declaration of `strlcat'
/usr/include/string.h:113: error: conflicting types for `strlcpy'
include/midas.h:1700: error: previous declaration of `strlcpy'
In file included from include/msystem.h:114,
from src/midas.c:623:
include/midasinc.h:161:21: sys/vfs.h: No such file or directory
include/midasinc.h:164:17: pty.h: No such file or directory
src/midas.c:780: error: conflicting types for `dbg_malloc'
include/midas.h:1478: error: previous declaration of `dbg_malloc'
src/midas.c:817: error: conflicting types for `dbg_calloc'
include/midas.h:1479: error: previous declaration of `dbg_calloc'
src/midas.c:858: error: conflicting types for `strlcpy'
/usr/include/string.h:113: error: previous declaration of `strlcpy'
src/midas.c:892: error: conflicting types for `strlcat'
/usr/include/string.h:112: error: previous declaration of `strlcat'
gmake: *** [Darwin/lib/midas.o] Error 1
==========
Could anyone give me some hints. Thanks! |
52 | 22 Jun 2004 | Konstantin Olchanski | | How to compile under Darwin-gcc? (MacOS X) |
The current (cvs) version of MIDAS should build on Mac OS X right out of the
box- I fixed all the problems you report back in February(?)- see the macosx
thread in this forum. A few weeks ago I verified that it still compiles on Mac
OS 10.3.4. The Mac OS port received minimal testing- I checked that "odbedit"
and "mhttpd" run, that's about it. K.O.
> I add the following to makefile and try to treat Darwin as FreeBSD/Linux.
> But I failed.
> ============
> #-----------------------
> # This is for MacOS X
> #
> ifeq ($(OSTYPE), Darwin)
> CC = gcc
> OS_DIR = Darwin
> OSFLAGS = -DOS_DARWIN -DOS_LINUX
> LIBS = -lbsd -lcompat
> SPECIFIC_OS_PRG =
> endif
> ============
>
> I got the following errors:
> =============
> gcc -c -g -O2 -Wall -Iinclude -Idrivers -LDarwin/lib -DINCLUDE_FTPLIB
> -DOS_DARWIN -DOS_FREEBSD -o Darwin/lib/midas.o src/midas.c
> In file included from include/midasinc.h:45,
> from include/msystem.h:114,
> from src/midas.c:623:
> /usr/include/string.h:112: error: conflicting types for `strlcat'
> include/midas.h:1701: error: previous declaration of `strlcat'
> /usr/include/string.h:113: error: conflicting types for `strlcpy'
> include/midas.h:1700: error: previous declaration of `strlcpy'
> In file included from include/msystem.h:114,
> from src/midas.c:623:
> include/midasinc.h:161:21: sys/vfs.h: No such file or directory
> include/midasinc.h:164:17: pty.h: No such file or directory
> src/midas.c:780: error: conflicting types for `dbg_malloc'
> include/midas.h:1478: error: previous declaration of `dbg_malloc'
> src/midas.c:817: error: conflicting types for `dbg_calloc'
> include/midas.h:1479: error: previous declaration of `dbg_calloc'
> src/midas.c:858: error: conflicting types for `strlcpy'
> /usr/include/string.h:113: error: previous declaration of `strlcpy'
> src/midas.c:892: error: conflicting types for `strlcat'
> /usr/include/string.h:112: error: previous declaration of `strlcat'
> gmake: *** [Darwin/lib/midas.o] Error 1
> ==========
>
> Could anyone give me some hints. Thanks! |
53 | 23 Jun 2004 | Exaos Lee | | How to compile under Darwin-gcc? (MacOS X) |
> The current (cvs) version of MIDAS should build on Mac OS X right out of the
> box- I fixed all the problems you report back in February(?)- see the macosx
> thread in this forum. A few weeks ago I verified that it still compiles on Mac
> OS 10.3.4. The Mac OS port received minimal testing- I checked that "odbedit"
> and "mhttpd" run, that's about it. K.O.
>
Thanks a lot. But I cannot check out the module:
------------
01:52:16: pc2075.psi.ch: Operation timed out
01:52:16: cvs [checkout aborted]: end of file from server (consult above
messages if any)
------------
Could anybody add a downloadable tar package to the WWW interface of the CVS repository?
I know the original CGI script has such a feature. Thanks.
P.S.
I use these commands to checkout:
cvs -e ssh -d :ext:cvs@midas.psi.ch:/usr/local/cvsroot checkout midas
cvs -e ssh -d :ext:cvs@midas.psi.ch:/usr/local/cvsroot update |
54 | 23 Jun 2004 | Stefan Ritt | | How to compile under Darwin-gcc? (MacOS X) |
> Thanks a lot. But I cannot check out the module:
> ------------
> 01:52:16: pc2075.psi.ch: Operation timed out
> 01:52:16: cvs [checkout aborted]: end of file from server (consult above
> messages if any)
> ------------
Should work fine, just tried from outside PSI. Please check again.
> Could anybody add a downloadable tar package to the WWW interface of the CVS repository?
> I know the original CGI script has such a feature. Thanks.
The tar package is only made for a new release (which will happen in the next few days,
BTW), so http://midas.psi.ch/download/tar/ contains the most recent packages. Upon
request I can make a midas-snapshot.tar.gz, but since there will be a 1.9.4 soon, it's
maybe not necessary right now. |
55 | 23 Jun 2004 | Exaos Lee | | How to compile under Darwin-gcc? (MacOS X) |
>
> Should work fine, just tried from outside PSI. Please check again.
Unfortunately, I still encounter the same problem.
---
pc2075.psi.ch: Operation timed out
cvs [checkout aborted]: end of file from server (consult above messages if any)
---
I am in LNS-INFN (Italy), i.e., I am outside PSI. So ... what's the problem? I tried to
ping the host, and it is reachable:
--------
[exaos@exaos cvsnew]$ ping midas.psi.ch
PING pc2075.psi.ch (129.129.228.23): 56 data bytes
64 bytes from 129.129.228.23: icmp_seq=0 ttl=50 time=67.237 ms
64 bytes from 129.129.228.23: icmp_seq=1 ttl=50 time=64.202 ms
64 bytes from 129.129.228.23: icmp_seq=2 ttl=50 time=56.278 ms
...
--------
Is it a firewall problem? I am not sure. So strange.
>
> The tar package is only done for a new release (which will happen in the next days
> BTW), so http://midas.psi.ch/download/tar/ contains the most recent packages. Upon
> request I make a midas-snapshot.tar.gz, but since there will be a 1.9.4 soon, it's
> maybe not necessary right now.
Waiting for the new release ... |
754 | 30 Mar 2011 | Exaos Lee | Forum | How large does "bank32" support? |
Reading an FADC buffer often needs a large buffer, especially when several
FADCs work together. I want to know how much data a bank32 can hold. |
755 | 30 Mar 2011 | Stefan Ritt | Forum | How large does "bank32" support? |
> Reading an FADC buffer often needs a large buffer, especially when several
> FADCs work together. I want to know how much data a bank32 can hold.
The "32" in bank32 means 32-bits, so the bank holds 2^32=4 GBytes, hope that is enough for your ADC.
The "normal" bank has only a 16-bit header, so it can hold only 64 kBytes. But for small banks, the overhead
is therefore smaller. |
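For context, a minimal sketch of filling a 32-bit bank in a frontend readout routine, assuming the standard bank API from midas.h (the bank name and contents are illustrative):

#include "midas.h"

INT read_fadc_event(char *pevent, INT off)
{
   DWORD *pdata;

   bk_init32(pevent);                 // 32-bit bank headers: a single bank may exceed 64 kB
   bk_create(pevent, "FADC", TID_DWORD, (void **)&pdata);

   // ... copy the FADC samples into pdata, advancing the pointer ...
   *pdata++ = 0;                      // placeholder sample

   bk_close(pevent, pdata);
   return bk_size(pevent);
}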
757 | 15 Apr 2011 | Konstantin Olchanski | Forum | How large does "bank32" support? |
> Reading an FADC buffer often needs a large buffer, especially when several
> FADCs work together. I want to know how much data a bank32 can hold.
Limitations in order:
- bank32 size is limited to a 32 bit integer size (about 4 Gbytes)
- bank size is limited by event size
- event size in a midas mfe.c based frontend is limited to the value of
max_event_size set by the user
- maximum event size that can go through the MIDAS event buffer system is limited
to ODB value /Experiment/MAX_EVENT_SIZE (MAX_EVENT_SIZE in midas.h does not do
anything now)
- maximum event size is limited to *half* the size of the SYSTEM shared memory
event buffer (or any other buffers that the event has to go through)
- default size of the SYSTEM buffer is 8 Mbytes set by ODB /Experiment/Buffer
sizes/SYSTEM. This limits maximum event size to about 4 Mbytes.
- size of SYSTEM buffer can be increased to arbitrary size, but in practice no
bigger than the amount of computer physical memory minus space needed for running
the frontend program and the mlogger, which also allocate buffer space to hold 1
event of maximum size.
So for a computer with 8 Gbytes of RAM, you can make the SYSTEM buffer size 4
GBytes, set ODB MAX_EVENT_SIZE to 1 Gbyte, and in theory, you should be able to
write 1 Gbyte events from your frontend to disk.
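For reference, a sketch of the frontend globals that control these limits; the names below are the globals an mfe-based frontend normally defines (check your frontend template), and the values are only an illustration for ~10 Mbyte events:

#include "midas.h"
#include "mfe.h"

INT max_event_size      = 10 * 1024 * 1024;   // largest single event produced by this frontend, bytes
INT max_event_size_frag =  5 * 1024 * 1024;   // fragment size for EQ_FRAGMENTED equipment
INT event_buffer_size   = 100 * 1024 * 1024;  // frontend ring buffer, should hold several events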
In practice, I think the biggest events I have seen go through a MIDAS system are
non-compressed waveforms in the T2K/ND280 FGD and TPC detectors, about 4 Mbytes of
data per event.
Other considerations (rules of thumb):
1) the SYSTEM event buffer should be big enough to hold 10-100 events.
2) the SYSTEM event buffer should be big enough to hold about 5-10 seconds of data
- i.e. if your event size is 1 Mbyte and data rate is 1 Hz, 10 seconds of data
will be 1Mbyte*1Hz*10sec = 10 Mbytes.
This is because the SYSTEM buffer decouples the real-time activity of the frontend
program from non-real-time activity of writing data to storage devices.
K.O. |
239 | 22 Dec 2005 | Konstantin Olchanski | Info | How do I do custom event building? |
It turns out that the standard event builder fragment matching algorithm cannot
be used in my TPC application. I have two TPC-USB interfaces, which lack any
"busy" or synchronization logic. I send the hardware trigger into both
interfaces, and if one of them misses it, the data is out of sync forever. Consider:
Hardware trigger:  trig1    trig2      trig3    trig4
TPC01:             serial1  serial2    serial3  serial4
TPC02:             serial1  (missing)  serial2  serial3
With the event builder matching only the event serial numbers, the first event
will be okay, but the second event will have trig2 data from TPC01 and trig3
data from TPC02, etc.
The problem exists even if the TPC-USB interfaces do not miss any triggers:
during begin and end of run, the interfaces are enabled one at a time, so if a
trigger arrives after the first interface was enabled, but before the second is
enabled, the data starts being out of sync (and if the same happens during the
end-of-run, the event counts from both frontends will match, but all data would
*still* be out of sync).
Obviously additional data is needed to match the fragments.
So in each frontend, I have a high-precision timestamp (gettimeofday(), usec
resolution) and I would like to have the event builder match the timestamps
instead of event serial numbers. What is the best way to do this? The mevb.c
code does not have any user callbacks for checking "do these fragments belong to
the same event?".
P.S. The event rate will be about 1/sec from cosmic ray tests and at most
10-50/sec in the M11 beam line at TRIUMF; at these low rates, the gettimeofday()
timestamps should be adequate.
K.O. |
241 | 23 Dec 2005 | Stefan Ritt | Info | How do I do custom event building? |
> It turns out that the standard event builder fragment matching algorithm cannot
> be used in my TPC application. I have two TPC-USB interfaces, which lack any
> "busy" or synchronization logic. I send the hardware trigger into both
> interfaces, and if one of them misses it, the data is out of sync forever. Consider:
>
> Hardware trigger:  trig1    trig2      trig3    trig4
> TPC01:             serial1  serial2    serial3  serial4
> TPC02:             serial1  (missing)  serial2  serial3
>
> With the event builder matching only the event serial numbers, the first event
> will be okay, but the second event will have trig2 data from TPC01 and trig3
> data from TPC02, etc.
Well, I would say: this is a very poor design of an experiment. Before curing the
problems in software, I would first consider a redesign of the data readout scheme with
a global hardware trigger and a hardware busy.
> So in each frontend, I have a high-precision timestamp (gettimeofday(), usec
> resolution) and I would like to have the event builder match the timestamps
> instead of event serial numbers.
What do you do if the frontend clock drifts away? I have seen drifts of up to 10 sec/day
on some PCs, so your required accuracy of 1/50 s would be violated after 3 minutes. You
would have to synchronize your clocks constantly. If your synchronization algorithm
determines a clock is out of sync and adjusts it, and the delta t is more than 1/50 sec,
you are screwed.
So all together I conclude that this proposed synchronization scheme is pretty dangerous
and could ruin the whole experiment.
> What is the best way to do this? The mevb.c
> code does not have any user callbacks for checking "do these fragments belong to
> the same event?".
Pierre can answer that.
- Stefan |
248 | 03 Jan 2006 | John O'Donnell | Info | How do I do custom event building? |
At DANCE we have a similar issue. We are still doing "software
handshaking" between multiple frontends (15 which read data, and 16th
with direct accessto the trigger logic), and we apply a time stamp
using gettimeofday(). We use the regular mevb, sorting on serial number.
In the analyzer (MIDAS or ROME) we then keep a big circular buffer of
event fragments, which are rebuilt into new events based on the time stamp
obtained from gettimeofday(). We keep the system clocks synchronized
(often to within about 1ms) using ntp (need to average over several
ntp servers to avoid issues with network noise). ntp can take a while
to stabilize, so we never reboot our computers... (well almost never).
We have a slow control frontend which monitors the ntp time offsets and
puts them in the history system for easy visualization.
Occasionally we seem to get in a mess, but somehow this fixes itself on
the next run, so it has been a useable system. Maybe one day we will
get hardware handshaking between the frontend computers and the trigger
logic, but in the meantime we are taking data.
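For illustration only, a sketch of this kind of timestamp matching (this is not MIDAS or ROME code; the 10 ms tolerance and the structures are made up):

#include <cmath>
#include <cstdio>
#include <deque>

struct Fragment {
   double time_sec;   // gettimeofday() timestamp of the fragment, in seconds
   // ... payload ...
};

// Pop pairs of fragments whose timestamps agree within 'tolerance' seconds;
// when the streams disagree, drop the older fragment (its partner was lost).
void match_fragments(std::deque<Fragment> &a, std::deque<Fragment> &b,
                     double tolerance = 0.01)
{
   while (!a.empty() && !b.empty()) {
      double dt = a.front().time_sec - b.front().time_sec;
      if (std::fabs(dt) <= tolerance) {
         // build one event from a.front() and b.front(), then discard both
         a.pop_front();
         b.pop_front();
      } else if (dt < 0) {
         std::printf("dropping unmatched fragment in stream A (dt=%.3f s)\n", dt);
         a.pop_front();
      } else {
         std::printf("dropping unmatched fragment in stream B (dt=%.3f s)\n", dt);
         b.pop_front();
      }
   }
}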
John. |
1022 | 14 Oct 2014 | Konstantin Olchanski | Bug Report | Hostile network scans against MIDAS RPC ports |
At CERN I see a large number of hostile network scans that seem to be injecting HTTP requests into the
MIDAS RPC ports. So far, all these requests seem to be successfully rejected without crashing anything, but
they do clog up midas.log.
The main problem here is that all MIDAS programs have at least one TCP socket open where they listen for
RPC commands, such as "start of run", "please shutdown", etc. The port numbers of these sockets are
randomized, which makes them difficult to protect with firewall rules (firewall rules work best with fixed port
numbers).
Note that this is different from the hostile network scans that I first saw maybe 5 years ago, which
affected the mserver main listener socket. Then, as a solution, I hardened the RPC receiver code against
bad data (and happy to see that this hardening is still holding up) and implemented the mserver "-A"
command switch to specify a list of permitted peers. Also mserver uses a fixed port number ("-p" switch)
and is easy to protect with firewall rules.
Since these ports cannot be protected by OS means (firewall, etc), we have to protect them in MIDAS.
One solution is to reject all connections from unauthorized peers.
One way to do this is to implement the "-A" switch to explicitly list all permitted peers; this switch will
have to be added to all long running midas programs (mhttpd, mlogger, mfe.c, etc). Not very practical, IMO.
Another way is to read the list of permitted peers from ODB, at startup time, or each time a new connection
is made.
In the latter case, care needs to be taken to avoid deadlocks. For example remote programs that read ODB
through the mserver may deadlock if the same mserver is the one trying to establish the RPC connection.
Or if ODB is somehow locked.
NB - we already keep a list of permitted peers in ODB /Experiment/Security.
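As an illustration only (this is not existing MIDAS code), the RPC listener could consult such a list before accepting a connection:

#include <algorithm>
#include <string>
#include <vector>

// Hypothetical list of permitted peers, read once from ODB
// (e.g. from /Experiment/Security) at program startup.
static std::vector<std::string> gPermittedPeers;

// Accept the connection only if the peer is on the list;
// an empty list falls back to the current behaviour (no restriction).
bool rpc_peer_permitted(const std::string &peer)
{
   if (gPermittedPeers.empty())
      return true;
   return std::find(gPermittedPeers.begin(), gPermittedPeers.end(), peer)
          != gPermittedPeers.end();
}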
K.O. |
1025 | 14 Oct 2014 | Stefan Ritt | Bug Report | Hostile network scans against MIDAS RPC ports |
Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.
At PSI we fortunately do not have these network scans because PSI uses an institute-wide firewall. So you can connect from outside PSI to inside PSI only
on certain well-defined ports (like SSH to certain machines). You can do the same in Alpha. Use one computer as a router with two network cards, where
the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be
accessed from outside, like port 8080 to mhttpd. This way you block all except the things which are needed.
/Stefan |
1031 | 16 Oct 2014 | Konstantin Olchanski | Bug Report | Hostile network scans against MIDAS RPC ports |
> Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.
>
> At PSI we fortunately do not have these network scans because PSI uses a institute-wide firewall.
>
Same here at TRIUMF, no problems with hostile network activity. I only see this trouble at CERN. Nominally CERN also has
everything behind the CERN firewall, which is why I tend to think that I am seeing network scans done by CERN security people,
or some badniks on the CERN local network (PC malware, etc).
> So you can connect from outside PSI to inside PSI only
> on certain well-defined ports (like SSH to certain machines). You can do the same in Alpha. Use one computer as a router with two network cards, where
> the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be
> accessed from outside, like port 8080 to mhttpd. This way you block all except the things which are needed.
Yes, this is how we did it for DEAP at SNOLAB. No network trouble there.
But generically for MIDAS, I think we should have built-in capability for MIDAS to protect itself without reliance on OS-level means (local firewall)
or network-level means ("site firewalls").
Sometimes we have very small MIDAS installations, i.e. just one machine by itself, and such setups should be easy to secure -
too much work to set up an external firewall box just for one machine, and OS-level firewall rules sometimes conflict
with some OS services (i.e. NIS) (I am still waiting for the "NIS to LDAP migration for dummies" guide).
K.O. |
1032 | 16 Oct 2014 | Stefan Ritt | Bug Report | Hostile network scans against MIDAS RPC ports |
> Sometimes we have very small MIDAS installations, i.e. just one machine by itself, and such setups should be easy to secure -
> too much work to set up an external firewall box just for one machine, and OS-level firewall rules sometimes conflict
> with some OS services (i.e. NIS) (I am still waiting for the "NIS to LDAP migration for dummies" guide).
I fully agree with you. So if you find time to implement this, I will be more than happy.
/Stefan |
1634 | 26 Jul 2019 | Nik Berger | Bug Report | History/Endianness |
Hi,
I have a bank of floats with slow control values that I store to the history and
the ODB. When reading the history, both in the web browser and with mhist, the floats
are read with the wrong endianness; under /equipment/variables in the ODB, however, they
display correctly. The system is an Intel OpenSuse Linux box. Any ideas?
Thanks
Nik |
2676 | 17 Jan 2024 | Francesco Renga | Forum | History tags |
Dear experts,
I would like to have some clarification about the meaning and use of the
tags in the ODB under /History/Tags.
I noticed that, if a history plot is created, but the name of the corresponding
variable is changed later and the plot is modified accordingly, the old name
persists in the /History/Tags list along with the new one. So, it appears in the
list of variables when a new history plot is created.
It does not seem to compromise the functionality of the history system, but it is
prone to create confusion.
Is this the expected behavior? What is the correct procedure to follow if the name
of a variable has to be changed?
Thank you,
Francesco |