ID     Date          Author                 Topic        Subject
  1771   13 Jan 2020   Konstantin Olchanski   Forum   cmake compile issues
Right. So the problem is a mismatch in ROOT compile flags. The old Makefile build used CFLAGS from ROOT to build rmana and rmlogger; cmake uses some kind of generic CFLAGS, so here we have it. No idea how to fix it.

K.O.


> Re: ROOT problem
> 
> I looked into how my ROOT based MIDAS analyzer compiles. It is using the flag -std=c++14.
> MIDAS cmake compiles with -std=gnu++11 with the result:
> 
> [ 29%] Building CXX object CMakeFiles/rmana.dir/src/mana.cxx.o
> /usr/bin/c++  -D_LARGEFILE64_SOURCE -I/home/pkunz/packages/midas/include -I/home/pkunz/packages/midas/mxml -I/usr/include/root  -O2 -g -DNDEBUG   -DHAVE_ZLIB -DHAVE_FTPLIB -Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function -DHAVE_ROOT -std=gnu++11 -o CMakeFiles/rmana.dir/src/mana.cxx.o -c /home/pkunz/packages/midas/src/mana.cxx
> In file included from /usr/include/root/TString.h:28,
>                  from /usr/include/root/TCollection.h:29,
>                  from /usr/include/root/TSeqCollection.h:25,
>                  from /usr/include/root/TList.h:25,
>                  from /usr/include/root/TQObject.h:40,
>                  from /usr/include/root/TApplication.h:30,
>                  from /home/pkunz/packages/midas/src/mana.cxx:60:
> /usr/include/root/ROOT/RStringView.hxx:32:37: error: ‘experimental’ in namespace ‘std’ does not name a type
>    32 |    using basic_string_view = ::std::experimental::basic_string_view<_CharT,_Traits>;
> 
> 
> If I change this to
> 
> /usr/bin/c++  -D_LARGEFILE64_SOURCE -I/home/pkunz/packages/midas/include -I/home/pkunz/packages/midas/mxml -I/usr/include/root  -O2 -g -DNDEBUG   -DHAVE_ZLIB -DHAVE_FTPLIB -Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function -DHAVE_ROOT -std=c++14 -o CMakeFiles/rmana.dir/src/mana.cxx.o -c /home/pkunz/packages/midas/src/mana.cxx
> 
> it compiles without error. I hope this helps. 
> 
> 
> 
> 
> 
> 
> > (please post messages in "plain" mode, they are much easier to answer)
> > 
> > - nvidia problems - this code was contributed by Joseph (I think?), with luck he will look into 
> > this problem.
> > 
> > - ROOT problem - it looks like the error is thrown by the ROOT header files, has nothing to do 
> > with MIDAS?
> > 
> > So what ROOT are you using? I recommend installing ROOT by following instructions at 
> > root.cern.ch.
> > 
> > Perhaps you used the ROOT packages from the EPEL repository? I have seen trouble with 
> > those packages before (miscompiled; important optional features turned off; very old 
> > versions; etc).
> > 
> > P.S.
> > 
> > Historically, ROOT has caused so many reports of "cannot build midas" that I consistently 
> > vote to "remove ROOT support from MIDAS". But Stefan's code for writing MIDAS data into 
> > ROOT files is so neat, cannot throw it away. And some people do use it. So at the latest MIDAS 
> > bash this Summer we decided to keep it.
> > 
> > (The only build targets that use ROOT are the rmlogger executable and the rmana.o object file (and 
> > its one-man-army library).)
> > 
> > But.
> > 
> > In the past, one could use "make -k" to get past the errors caused by ROOT, everything will 
> > get built and installed, except for the code that failed to build.
> > 
> > Now with cmake, it is "all or nothing", if there is any compilation error, nothing gets installed 
> > into the "bin" directory. So one must discover and use "NO_ROOT=1" (which becomes sticky 
> > until the next "make cclean". Some people are not used to sticky "make" options, I just got 
> > burned by this very thing last week).
> > 
> > Perhaps there is a way to tell cmake to ignore compile errors for rmlogger and rmana.
> > 
> > 
> > K.O.
> > 
> > 
> > 
> > Peter Kunz wrote:
> > 
> > While upgrading to the latest MIDAS version
> > 
> > MIDAS version: 2.1 GIT revision: Tue Dec 31 17:40:14 2019 +0100 - midas-2019-09-i-1-gd93944ce-dirty on branch develop
> > 
> > ODB version: 3
> > 
> > I encountered two issues using cmake.
> > 
> > 1. On machines with NVIDIA drivers:
> > 
> >    nvml.h: no such file or directory
> > 
> >    (nvml.h doesn't seem to be part of the standard nvidia driver package.)
> > 
> > 2. Compiling with ROOT throws an error with ROOT 6.12/06 on Centos7 and with ROOT 6.18/04 on Fedora 31:
> > 
> >    [ 29%] Building CXX object CMakeFiles/rmana.dir/src/mana.cxx.o
> >    In file included from /usr/include/root/TString.h:28,
> >                     from /usr/include/root/TCollection.h:29,
> >                     from /usr/include/root/TSeqCollection.h:25,
> >                     from /usr/include/root/TList.h:25,
> >                     from /usr/include/root/TQObject.h:40,
> >                     from /usr/include/root/TApplication.h:30,
> >                     from /home/pkunz/packages/midas/src/mana.cxx:60:
> >    /usr/include/root/ROOT/RStringView.hxx:32:37: error: 'experimental' in namespace 'std' does not name a type
> >       32 |    using basic_string_view = ::std::experimental::basic_string_view<_CharT,_Traits>;
> > 
> > A workaround (which works for me) is to compile with
> > 
> >    cmake .. -DNO_ROOT=1 -DNO_NVIDIA=1
  1775   14 Jan 2020   Stefan Ritt   Forum   cmake compile issues
Thanks for tracing the problem further down. Now I realize that the CMakeLists.txt for the ROOT analyzer did not contain the usual ROOT flags. I added them and committed the change, so please try again. Here is the diff:

examples/experiment/CMakeLists.txt:

       execute_process(COMMAND root-config --incdir OUTPUT_VARIABLE ROOT_INC)
       execute_process(COMMAND root-config --libs OUTPUT_VARIABLE ROOT_LIBS)
+      execute_process(COMMAND root-config --cflags OUTPUT_VARIABLE ROOT_CFLAGS)
       string(STRIP ${ROOT_LIBS} ROOT_LIBS)
-      string(REGEX REPLACE "\n$" "" ROOT_INC "${ROOT_INC}")
+      string(REGEX REPLACE "\n" "" ROOT_INC ${ROOT_INC})
+      separate_arguments(ROOT_CFLAGS UNIX_COMMAND "${ROOT_CFLAGS}")
       set(CMAKE_CXX_STANDARD 11)

       target_include_directories(analyzer PUBLIC ${ROOT_INC} ${INC_PATH})
+      target_compile_options(analyzer PUBLIC ${ROOT_CFLAGS})
       target_link_libraries(analyzer rmana midas ${ROOT_LIBS} ${LIBS})


Best,
Stefan

> Re: ROOT problem
> 
> I looked into how my ROOT based MIDAS analyzer compiles. It is using the flag -std=c++14.
> MIDAS cmake compiles with -std=gnu++11 with the result:
> 
> [ 29%] Building CXX object CMakeFiles/rmana.dir/src/mana.cxx.o
> /usr/bin/c++  -D_LARGEFILE64_SOURCE -I/home/pkunz/packages/midas/include -I/home/pkunz/packages/midas/mxml -I/usr/include/root  -O2 -g -DNDEBUG   -DHAVE_ZLIB -DHAVE_FTPLIB -Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function -DHAVE_ROOT -std=gnu++11 -o CMakeFiles/rmana.dir/src/mana.cxx.o -c /home/pkunz/packages/midas/src/mana.cxx
> In file included from /usr/include/root/TString.h:28,
>                  from /usr/include/root/TCollection.h:29,
>                  from /usr/include/root/TSeqCollection.h:25,
>                  from /usr/include/root/TList.h:25,
>                  from /usr/include/root/TQObject.h:40,
>                  from /usr/include/root/TApplication.h:30,
>                  from /home/pkunz/packages/midas/src/mana.cxx:60:
> /usr/include/root/ROOT/RStringView.hxx:32:37: error: ‘experimental’ in namespace ‘std’ does not name a type
>    32 |    using basic_string_view = ::std::experimental::basic_string_view<_CharT,_Traits>;
> 
> 
> If I change this to
> 
> /usr/bin/c++  -D_LARGEFILE64_SOURCE -I/home/pkunz/packages/midas/include -I/home/pkunz/packages/midas/mxml -I/usr/include/root  -O2 -g -DNDEBUG   -DHAVE_ZLIB -DHAVE_FTPLIB -Wall -Wformat=2 -Wno-format-nonliteral -Wno-strict-aliasing -Wuninitialized -Wno-unused-function -DHAVE_ROOT -std=c++14 -o CMakeFiles/rmana.dir/src/mana.cxx.o -c /home/pkunz/packages/midas/src/mana.cxx
> 
> it compiles without error. I hope this helps. 
> 
> 
> 
> 
> 
> 
> > (please post messages in "plain" mode, they are much easier to answer)
> > 
> > - nvidia problems - this code was contributed by Joseph (I think?), with luck he will look into 
> > this problem.
> > 
> > - ROOT problem - it looks like the error is thrown by the ROOT header files, has nothing to do 
> > with MIDAS?
> > 
> > So what ROOT are you using? I recommend installing ROOT by following instructions at 
> > root.cern.ch.
> > 
> > Perhaps you used the ROOT packages from the EPEL repository? I have seen trouble with 
> > those packages before (miscompiled; important optional features turned off; very old 
> > versions; etc).
> > 
> > P.S.
> > 
> > Historically, ROOT has caused so many reports of "cannot build midas" that I consistently 
> > vote to "remove ROOT support from MIDAS". But Stefan's code for writing MIDAS data into 
> > ROOT files is so neat, cannot throw it away. And some people do use it. So at the latest MIDAS 
> > bash this Summer we decided to keep it.
> > 
> > (The only build targets that use ROOT are the rmlogger executable and the rmana.o object file (and 
> > its one-man-army library).)
> > 
> > But.
> > 
> > In the past, one could use "make -k" to get past the errors caused by ROOT, everything will 
> > get built and installed, except for the code that failed to build.
> > 
> > Now with cmake, it is "all or nothing", if there is any compilation error, nothing gets installed 
> > into the "bin" directory. So one must discover and use "NO_ROOT=1" (which becomes sticky 
> > until the next "make cclean". Some people are not used to sticky "make" options, I just got 
> > burned by this very thing last week).
> > 
> > Perhaps there is a way to tell cmake to ignore compile errors for rmlogger and rmana.
> > 
> > 
> > K.O.
> > 
> > 
> > 
> > Peter Kunz wrote:
> > 
> > While upgrading to the latest MIDAS version
> > 
> > MIDAS version: 2.1 GIT revision: Tue Dec 31 17:40:14 2019 +0100 - midas-2019-09-i-1-gd93944ce-dirty on branch develop
> > 
> > ODB version: 3
> > 
> > I encountered two issues using cmake.
> > 
> > 1. On machines with NVIDIA drivers:
> > 
> >    nvml.h: no such file or directory
> > 
> >    (nvml.h doesn't seem to be part of the standard nvidia driver package.)
> > 
> > 2. Compiling with ROOT throws an error with ROOT 6.12/06 on Centos7 and with ROOT 6.18/04 on Fedora 31:
> > 
> >    [ 29%] Building CXX object CMakeFiles/rmana.dir/src/mana.cxx.o
> >    In file included from /usr/include/root/TString.h:28,
> >                     from /usr/include/root/TCollection.h:29,
> >                     from /usr/include/root/TSeqCollection.h:25,
> >                     from /usr/include/root/TList.h:25,
> >                     from /usr/include/root/TQObject.h:40,
> >                     from /usr/include/root/TApplication.h:30,
> >                     from /home/pkunz/packages/midas/src/mana.cxx:60:
> >    /usr/include/root/ROOT/RStringView.hxx:32:37: error: 'experimental' in namespace 'std' does not name a type
> >       32 |    using basic_string_view = ::std::experimental::basic_string_view<_CharT,_Traits>;
> > 
> > A workaround (which works for me) is to compile with
> > 
> >    cmake .. -DNO_ROOT=1 -DNO_NVIDIA=1
  1776   14 Jan 2020   Stefan Ritt   Forum   cmake compile issues
> In the past, one could use "make -k" to get past the errors caused by ROOT, everything will 
> get built and installed, except for the code that failed to build.
> 
> Now with cmake, it is "all or nothing", if there is any compilation error, nothing gets installed 
> into the "bin" directory.

That's not correct. Indeed a "make -k" stops after an error, but "make -i" still works. Try it! Even "make install -i" 
installs into the bin directory all executables which compiled successfully, even if there is an error.

Stefan
  2011   06 Nov 2020   Alexandr Kozlinskiy   Suggestion   cmake build fixes
hi,

there are several problems with the current cmake build files in midas:
- not all systems have the cuda libs in /usr/local/cuda
- not all cmake versions like redefining vars
  (e.g. redefining ROOT_CXX_FLAGS)
- the c++ standard does not match the one used to build ROOT
- ROOTSYS is not needed to find ROOT (it is enough to have root in the PATH)

I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
which tries to fix some of the problems.
Tests and comments are welcome.
  2034   27 Nov 2020   Konstantin Olchanski   Suggestion   cmake build fixes
Hi, Alexandr, thank you for making improvements to MIDAS. I have some questions
about your suggestions:

> there are several problems with current cmake build files in midas:
> - not all systems have cuda libs in /usr/local/cuda
> - not all cmake version like when redefining vars

we do not see these problems with the normal cmake on our current linux systems
(centos-7 and -8, Ubuntu LTS 18.04 and 20.04).

So you have something different? Can you be a bit more specific about
which version of cmake and which OS you see these troubles on?

> - c++ standard not matching the one used to build ROOT
> - ROOTSYS is not needed to find ROOT (it is enough to have root in PATH)

Again ROOT tangles with the build of MIDAS.

MIDAS does not use ROOT. As a convenience to the users, we have a "ROOT output" driver
in mlogger and we build a special executable rmlogger with ROOT. Only this special
executable should be linked with ROOT and compiled with ROOT-specific flags.

The rest of the MIDAS build should not be affected by presence or absence of ROOT.

One would have to read old messages on this forum to understand this situation.

> 
> I have posted pull request 'https://bitbucket.org/tmidas/midas/pull-requests/17'
> which tries to fix some of the problems.
> Tests and comments are welcome.
>

I look at the diffs:

- CUDA detection is changed to "find_package(CUDA)". This code was added by Joseph and Ben, and there 
must be a reason why they did not use find_package(CUDA). They will have to sign off on this change.

- ROOT-related logic assumes that all of MIDAS will be built "the ROOT way". CFLAGS are changed, the C++ 
standard is changed, etc. This assumption is wrong. Only rmlogger and rmana should be built "with ROOT".

If you want to follow through on this, I suggest that you split the pull request into two,
one pull request for the CUDA changes and one pull request for the ROOT changes. Also rework
your ROOT changes as I explained above (but also read all ROOT-related messages on this forum).

K.O.
  253   07 May 2006   Konstantin Olchanski   Bug Report   cm_register_transition gyrations
I am debugging a Rome-based DAQ system setup by Pierre A. (the system does not
work because of bugs in Rome).

One problem I see is with my copy of cm_register_transition() in midas.c. Rome
calls it with a NULL function to register a "queued" transition, but the
cm_register_transition() code has changed around (rev 3051) to make NULL mean
"unregister" a transition (this broke the queued transitions used by Rome), then
it got changed back (rev 3085). Of course, I was stuck with the broken version,
so Rome did not work at all, and it cost me real wall time to get to the bottom
of all this, only to discover that this problem is already fixed. So-

I would greatly appreciate it if, in the future, changes (and bug fixes) to the
MIDAS API were announced on this mailing list here.

K.O.
  254   08 May 2006   Stefan Ritt   Bug Report   cm_register_transition gyrations
> I am debugging a Rome-based DAQ system setup by Pierre A. (the system does not
> work because of bugs in Rome).
> 
> One problem I see is with my copy of cm_register_transition() in midas.c. Rome
> calls it with a NULL function to register a "queued" transition, but the
> cm_register_transition() code has changed around (rev 3051) to make NULL mean
> "unregister" a transition (this broke the queued transitions used by Rome), then
> it got changed back (rev 3085). Of course, I was stuck with the broken version,
> so Rome did not work at all, and it cost me real wall time to get to the bottom
> of all this, only to discover that this problem is already fixed. So-
> 
> I would greatly appreciate it if, in the future, changes (and bug fixes) to the
> MIDAS API were announced on this mailing list here.
> 
> K.O.

Yes, you are right. I apologize. The fact was that I was not aware that anybody else was
already using ROME in online mode. Nevertheless, let me at least explain the reason for
that change:

Some experiments at PSI run a slow-control frontend which talks to pretty slow
hardware and thus can be unresponsive for many seconds. Since each frontend by
default registers for the start and stop transitions, this frontend delayed the
start/stop of each run. To solve this problem in the short run, the frontend should not
register for those transitions. Originally I implemented this by using the NULL function
pointer, until we figured out that ROME uses this to register (not de-register)
together with the cm_query_transition() function. Therefore a new function,
cm_deregister_transition(), was implemented and is now used by the slow frontends.

In the long run this will be solved by implementing multi-threaded frontends which
get one thread per equipment and therefore do not block any transition anymore.
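
For illustration, here is a minimal sketch of the two calls involved, in a frontend's init routine. The prototypes - cm_register_transition(transition, callback, sequence) and cm_deregister_transition(transition) - are written from memory, so please check your midas.h for the exact arguments:

   #include "midas.h"

   INT frontend_init()
   {
      /* mfe.c has already registered this frontend for the start and
         stop transitions by default; a slow frontend that must not
         delay run start/stop can simply drop out of them again */
      cm_deregister_transition(TR_START);
      cm_deregister_transition(TR_STOP);

      return SUCCESS;
   }

   /* a frontend that *does* want to take part would instead register a
      callback of the form INT tr_start(INT run_number, char *error),
      e.g. cm_register_transition(TR_START, tr_start, 500); */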
  187   16 Dec 2004   Jan Wouters   Forum   cm_msg
Could someone please explain to me how cm_msg, cm_msg1, etc. all work.  The
documentation is very terse.  

I want to set up a fairly significant set of debugging and error messages for a
new frontend.  I need to get these messages to a logging file.  I also would
like to get the error messages to the user through whatever interface Midas
normally uses for error reporting.  

Jan
  188   22 Dec 2004   Stefan Ritt   Forum   cm_msg
> Could someone please explain to me how cm_msg, cm_msg1, etc. all work.  The
> documentation is very terse.  
> 
> I want to setup a fairly significant set of debugging, and error messages for a
> new frontend.  I need to get these messages to a logging file.  I also would
> like to get the error messages to the user through whatever interface Midas
> normally uses for error reporting.  

For errors, use

  cm_msg(MERROR, "routine_name", "Your error message, code=%d", i);

This produces an error message which is logged to midas.log, and distributed to all
clients which have called cm_msg_register(). For example odbedit will just print
that message. The syntax of the second half of cm_msg is the same as for printf(),
so you can add format specifiers and variable arguments as you do for printf(). The
first argument is the message type (MDEBUG for example is only distributed but not
logged). 

For a more detailed list of message types, please refer to

http://midas.triumf.ca/doc/html/AppendixE.html#midas_macro
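
As a small sketch of how this typically looks in frontend code (the call form follows the example above; the routine name and status value here are made up purely for illustration):

   #include "midas.h"

   void check_device(int status)
   {
      /* MDEBUG: distributed to listening clients, but not logged */
      cm_msg(MDEBUG, "check_device", "device returned status %d", status);

      if (status < 0)
         /* MERROR: written to midas.log and distributed to all clients
            which have called cm_msg_register(), e.g. odbedit prints it */
         cm_msg(MERROR, "check_device", "device read failed, status=%d", status);
   }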
  1057   14 May 2015   Konstantin Olchanski   Suggestion   checksums for midas data files
I am adding LZ4 and LZO compression to the mlogger, and as part of this work I would like to add 
computation of checksums for the midas files.

On one side, such checksums help me confirm that the uncompressed data contents are the same as the original 
data (compression/decompression is okay).

On the other side, such checksums can confirm to the end user that today's contents of a midas file are 
the same as originally written by mlogger (maybe years ago) - there was no bit rot, no file corruption, no 
accidental or intentional modification of contents.

There are several choices of checksums available:
crc32 - as implemented by zlib (already written inside mid.gz files)
crc32c - improved and hardware accelerated version of CRC32 (http://tools.ietf.org/html/rfc3309)
md5 - cryptographically strong checksum, but obsolete
sha1 - same, also obsolete
sha256 - currently considered to be cryptographically strong

Of these checksums, only sha256 (sha512, etc) are presently considered to be cryptographically strong,
meaning that they can detect intentional file modifications. As opposed to (for example) crc32 where
it is easy to construct 2 files with different contents but the same checksum. Both md5 and sha1 are 
presently considered to be similarly cryptographically broken. But all of them are still usable
as checksums - as they will detect non-intentional data modifications (bit rot, etc) with
very high probability.

(Of course the strongest checksum is also the most expensive to compute).

I will probably implement crc32 (already in zlib), crc32c (easy to find hardware-accelerated
implementations) and sha256 (cryptographically strong).
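
For reference, the zlib case is only a few lines; a minimal sketch of check-summing a file with zlib's crc32(), streaming over the file in blocks:

   #include <stdio.h>
   #include <zlib.h>

   /* compute the zlib CRC32 of a file, reading it in blocks */
   unsigned long file_crc32(const char *filename)
   {
      unsigned char buf[65536];
      unsigned long crc = crc32(0L, Z_NULL, 0);   /* initial CRC value */
      size_t n;
      FILE *f = fopen(filename, "rb");
      if (!f)
         return 0;
      while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
         crc = crc32(crc, buf, (uInt) n);
      fclose(f);
      return crc;
   }

   int main(int argc, char *argv[])
   {
      if (argc > 1)
         printf("%08lx  %s\n", file_crc32(argv[1]), argv[1]);
      return 0;
   }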

I can write the computed checksums into midas.log, or into runNNN.crc32, runNNN.sha256, etc files. (or 
both).

Any thoughts on this?

K.O.
  1058   14 May 2015   Stefan Ritt   Suggestion   checksums for midas data files
> Any thoughts on this?

We use binary midas files now for ~20 years and never felt the necessity to put any checksums or even encryption on these files. The reason for that is the following: data on 
modern hard disks is already protected by CRC or ECC codes at a lower level, so it's very unlikely that single bytes change. If something happens, then it's a 
corruption of the file system, so a few sectors of a file are missing or wrong. In that case a CRC won't help you much; it just tells you that the files are corrupt. But you see that 
also in the midas event structure. Each event has a header with the size of the event, so you can follow the file event by event. If something is missing, the next event header 
is not an event header but something in the middle of the data, and you recognise this immediately, since the header does not make any sense (the date is off by many years, the event ID 
is arbitrary, the event size is very different). So this redundancy in the midas event structure identifies corrupt files, in my opinion, as well as a CRC code would.

I would not want to waste a single CPU cycle on lengthy CRC or even SHA algorithms, unless I see single bytes change inside events. But in that case the corruption can even happen at 
the network level between frontend and backend, so we would have to add the CRC/SHA code at the frontend level. This could increase the dead time of the experiment, which is 
bad. And what about VME transfers? While hard disks and Ethernet networks already have built-in CRC checks, VME transfers don't. So how can you be sure that no bits 
get corrupted between your ADC and your frontend computer?
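
To make the "follow the file event by event" check above concrete, here is a minimal sketch of such a scan over an uncompressed .mid file. The header layout mirrors EVENT_HEADER from midas.h, reproduced here from memory, so verify the field order against your midas.h:

   #include <stdio.h>
   #include <stdint.h>

   /* mirrors EVENT_HEADER from midas.h (verify the field order there) */
   struct EventHeader {
      int16_t  event_id;
      int16_t  trigger_mask;
      uint32_t serial_number;
      uint32_t time_stamp;
      uint32_t data_size;     /* bytes of event data following this header */
   };

   int main(int argc, char *argv[])
   {
      if (argc < 2)
         return 1;
      FILE *f = fopen(argv[1], "rb");
      if (!f)
         return 1;

      struct EventHeader h;
      while (fread(&h, sizeof(h), 1, f) == 1) {
         /* corruption shows up here as nonsense: absurd time stamp,
            arbitrary event id, or an impossibly large data size */
         printf("id=%d serial=%u time=%u size=%u\n",
                (int) h.event_id, (unsigned) h.serial_number,
                (unsigned) h.time_stamp, (unsigned) h.data_size);
         if (fseek(f, (long) h.data_size, SEEK_CUR) != 0)
            break;
      }
      fclose(f);
      return 0;
   }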

If people insist on having CRC or SHA protection/encryption for some reason I do not yet understand, we should make it optional, so that I can turn it off, since I don't 
need it.

/Stefan
  1059   15 May 2015   Konstantin Olchanski   Suggestion   checksums for midas data files
> > Any thoughts on this?
> 
> We use binary midas files now for ~20 years and never felt the necessity to put any checksums or even encryption on these files ...
>

"I have never seen a corrupted file, therefore nobody should ever need checksums". Well,

1) actually if you write mid.gz files, you get gzip checksums "for free" (but the checksums are not recorded anywhere, so 5 years later you cannot confirm that the file did not change).
2) I had a defective computer once where reading the same file several times yielded different data. (the defect was on the motherboard, not in the disks)
3) I am presently testing the btrfs filesystem which (like ZFS) keeps checksums for all data. For these tests I am using 3rd quality disks and I see btrfs regularly detect (and correct) "data corruption" events - where data on disk has changed.
4) there was a report from CERN(?) where they checked the checksums on a large number of data files and found a good number of corrupted files.

So bit rot does exist.

In more practical terms:

a) CRC32C is "free" to compute (hardware accelerated on latest CPUs), but does not detect malicious file modifications
b) SHA256 does detect that (but for how long?), but probably too expensive to compute (speed measurement TBD).
c) gzip compressed files have internal whole-file CRC32
d) bzip2 compressed files have internal per-block CRC32
e) lz4 compressed files have internal per-block xxhash checksums

Personally, when dealing with compressed files, I prefer to have a checksum recorded somewhere that I can check against after I decompress the file.

I think there is no need to add checksums to the MIDAS data files format itself (see c,d,e above).

K.O.
  1208   05 Oct 2016   Lee Pool   Suggestion   checksums for midas data files
Hi

> On one side, such checksums help me confirm that uncompressed data contents is the same as original 
> data (compression/decompression is okey).
> 

> I can write the computed checksums into midas.log, or into runNNN.crc32, runNNN.sha256, etc files. (or 
> both).
> 

Just a thought on my side. I have been using a checksum on the data produced by our experiments via mlogger (the runxxxx.mid.gz files), in 
the same manner you proposed and which I now see implemented. 

I have a slight objection, if I may call it that, to how the checksum is saved to disk, with 
run00007.mid.gz.sha256 as an example:


$ cat ~/Data/run00007.mid.gz.sha256
f315af7caf6ca204cc082132862cb4227d77066cb60c6e2b1039d6dc5b04d1ee 650597 Data/run00007.mid.gz


It seems a little misleading to have the gzip'd filename paired with the checksum of the uncompressed content.

May I suggest that the pairing should instead be, for example:

f315af7caf6ca204cc082132862cb4227d77066cb60c6e2b1039d6dc5b04d1ee  run00007.mid

This information will sit in an archive (a database in my case) for a long period, and it might
be confusing later on when verification of the checksum is required.
  1209   13 Oct 2016   Konstantin Olchanski   Suggestion   checksums for midas data files
Confirmed, this is a bug in mlogger. It should be creating *2* files, one with the before-compression checksum and one with the after-compression checksum. At 
least both checksums are written to midas.log, so you can grep them from there. K.O.

> Hi
> 
> > On one side, such checksums help me confirm that uncompressed data contents is the same as original 
> > data (compression/decompression is okey).
> > 
> 
> > I can write the computed checksums into midas.log, or into runNNN.crc32, runNNN.sha256, etc files. (or 
> > both).
> > 
> 
> Just a thought on my side. I have been using a checksum, on data produced  by our experiments via mlogger, the runxxxx.mid.gz, in 
> the same manner you proposed and I see now implemented. 
> 
> I have a slight, objection, if I may call it that, to how the checksum is saved to disk, in 
> run00007.mid.gz.sha256 as an example.
> 
> 
> $ cat ~/Data/run00007.mid.gz.sha256
> f315af7caf6ca204cc082132862cb4227d77066cb60c6e2b1039d6dc5b04d1ee 650597 Data/run00007.mid.gz
> 
> 
> It seems a little misleading to have the gzip'd filename paired with the checksum of the uncompressed content.
> 
> May I suggest that the pairing should be ,
> 
> f315af7caf6ca204cc082132862cb4227d77066cb60c6e2b1039d6dc5b04d1ee  run00007.mid as an example.
> 
> As I find, this information will sit in an archive, database in my case for a long period, and it might
> be confusing later on, when verification of the checksum is required.
  1247   13 Mar 2017   Konstantin Olchanski   Suggestion   checksums for midas data files
> Confirmed, this is a bug in mlogger. It should be creating *2* files, one with the before-compression checksum and one with the after-compression checksum. At 
> least both checksums are written to midas.log, so you can grep them from there. K.O.

This should be fixed now. Thank you for nudging me.

K.O.



> 
> > Hi
> > 
> > > On one side, such checksums help me confirm that uncompressed data contents is the same as original 
> > > data (compression/decompression is okey).
> > > 
> > 
> > > I can write the computed checksums into midas.log, or into runNNN.crc32, runNNN.sha256, etc files. (or 
> > > both).
> > > 
> > 
> > Just a thought on my side. I have been using a checksum, on data produced  by our experiments via mlogger, the runxxxx.mid.gz, in 
> > the same manner you proposed and I see now implemented. 
> > 
> > I have a slight, objection, if I may call it that, to how the checksum is saved to disk, in 
> > run00007.mid.gz.sha256 as an example.
> > 
> > 
> > $ cat ~/Data/run00007.mid.gz.sha256
> > f315af7caf6ca204cc082132862cb4227d77066cb60c6e2b1039d6dc5b04d1ee 650597 Data/run00007.mid.gz
> > 
> > 
> > It seems a little misleading to have the gzip'd filename paired with the checksum of the uncompressed content.
> > 
> > May I suggest that the pairing should be ,
> > 
> > f315af7caf6ca204cc082132862cb4227d77066cb60c6e2b1039d6dc5b04d1ee  run00007.mid as an example.
> > 
> > As I find, this information will sit in an archive, database in my case for a long period, and it might
> > be confusing later on, when verification of the checksum is required.
  700   08 Jun 2010   nicholas   Forum   check out from svn
Doing:

  svn co svn+ssh://svn@savannah.psi.ch/afs/psi.ch/project/meg/svn/midas/trunk midas

shows:

  ssh: connect to host savannah.psi.ch port 22: Connection timed out
  svn: Connection closed unexpectedly
  701   08 Jun 2010   nicholas   Forum   check out from svn
> do: svn co svn+ssh://svn@savannah.psi.ch/afs/psi.ch/project/meg/svn/midas/trunk
> midas
> shows: ssh: connect to host savannah.psi.ch port 22: Connection timed out
> svn: Connection closed unexpectedly

Sorry, it was a network problem on my side.
N.
  1356   19 Mar 2018   Andreas Suter   Suggestion   check current ODB size
A feature I have always missed (or just failed to find in the docs) is the following:
1) It would be nice to have a command in odbedit which allows one to check how full
the ODB currently is.
2) Even more important: I would like to have an ODB routine which lets me check the
fill level of the ODB, and/or a routine which tells me whether a structure of a
given size would still fit in the current ODB.
The use case is that some clients create ODB entries on the fly, and I would like
to check the ODB's remaining space beforehand in order not to crash things
by overfilling the ODB.
  1357   19 Mar 2018   Stefan Ritt   Suggestion   check current ODB size
> A feature I always missed (or just missed to find in the docu) is the following:
> 1) It would be nice to have a command in odbedit which allows to check how full
> the ODB currently is.
> 2) Even more important: I would like to have an ODB routine which allows me to
> check the fill level of the ODB, and/or a routine which tells me if I would
> create a structure of given size that it still fits in the current ODB or not. 
> The use case is that some clients create on the fly ODB entries and I would like
> to make sure before hand the ODB's remaining space in order not to crash things
> by overfilling the ODB. 

If you do "mem" in odbedit, you see the currently free areas, one for the keys themselves, one for 
the data of the keys. The corresponding C function is db_show_mem. At the moment it outputs the 
list of free blocks as a long ASCII string, but if necessary I can write a variant which returns the 
number of free bytes.
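
For completeness, calling it from a small client would look roughly like this. The db_show_mem() prototype below is an assumption written from memory - check midas.h for the exact arguments before using it:

   #include <stdio.h>
   #include "midas.h"

   int main()
   {
      HNDLE hDB;
      char  result[100000];

      /* connect to the experiment and get the ODB handle */
      cm_connect_experiment("", "", "odb_mem_check", NULL);
      cm_get_experiment_database(&hDB, NULL);

      /* ASSUMED prototype: db_show_mem(hDB, result, buffer_size, verbose);
         fills 'result' with the list of free key/data blocks as ASCII text */
      db_show_mem(hDB, result, (INT) sizeof(result), FALSE);
      printf("%s\n", result);

      cm_disconnect_experiment();
      return 0;
   }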

Stefan
  1358   19 Mar 2018   Andreas Suter   Suggestion   check current ODB size
> > A feature I always missed (or just missed to find in the docu) is the following:
> > 1) It would be nice to have a command in odbedit which allows to check how full
> > the ODB currently is.
> > 2) Even more important: I would like to have an ODB routine which allows me to
> > check the fill level of the ODB, and/or a routine which tells me if I would
> > create a structure of given size that it still fits in the current ODB or not. 
> > The use case is that some clients create on the fly ODB entries and I would like
> > to make sure before hand the ODB's remaining space in order not to crash things
> > by overfilling the ODB. 
> 
> If you do "mem" in odbedit, you see the currently free areas, one for the keys themselves, one for 
> the data of the keys. The corresponding C function is db_show_mem. At the moment it outputs the 
> list of free blocks as a long ASCII string, but if necessary I can write a variant which returns the 
> number of free bytes.
> 
> Stefan

Thanks for the info, and yes, a variant of db_show_mem returning the number of free bytes would be just
perfect!