Message ID: 3067     Entry time: 24 Jul 2025
Author: Konstantin Olchanski 
Topic: Bug Fix 
Subject: support for large history files 
FILE history code (mhf_*.dat files) did not support reading history files bigger than about 2 GB. This is now 
fixed on the branch "feature/history_off64_t" (in final testing, to be merged ASAP).
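
For reference, here is a minimal sketch of the class of change the branch name suggests: carrying file offsets 
in a 64-bit type (off_t with -D_FILE_OFFSET_BITS=64, or off64_t/lseek64) instead of a 32-bit int, which wraps 
at about 2 GB. The function seek_to_record() is illustrative only and is not the actual MIDAS history code:

#include <sys/types.h>
#include <unistd.h>

// with -D_FILE_OFFSET_BITS=64 (or on a 64-bit build), off_t is 64 bits wide,
// so offsets past 2 GB do not overflow the way a 32-bit "int" offset would
static bool seek_to_record(int fd, off_t record_offset)
{
   return lseek(fd, record_offset, SEEK_SET) == record_offset;
}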

History files were never meant to get bigger than about 100 MBytes, but it turns out large files can still 
happen:

1) files are rotated only when history is closed and reopened
2) we removed history close and open on run start
3) so files are rotated only when mlogger is restarted

Even in the old code, large files could still happen if some equipment writes a lot of data (I have a file from 
Stefan with a history record size of about 64 kbytes, written once per second; MIDAS handles this just fine) or 
if no runs are started and stopped for a long time.

There are reasons for keeping file size smaller:

a) I would like to use mmap() to read history files, and mmap() of a 100 Gbyte file on a 64 Gbyte RAM 
machine would not work very well.
b) I would like to implement compressed history files, and decompression of a 100 Gbyte file takes much 
longer than decompression of a 100 Mbyte file. It is better if the data is in smaller chunks.

(It is easy to write a utility program to break up large history files into smaller chunks; see the sketch below.)
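
A minimal sketch of such a splitter, assuming the file is a plain sequence of fixed-size records (record_size 
and records_per_chunk are command-line parameters here, and a real tool would also have to carry any per-file 
header into each chunk). This is illustrative only, not an existing MIDAS utility:

#include <cstdio>
#include <cstdlib>
#include <vector>

int main(int argc, char* argv[])
{
   if (argc != 4) {
      fprintf(stderr, "usage: %s input.dat record_size records_per_chunk\n", argv[0]);
      return 1;
   }
   size_t record_size = strtoul(argv[2], nullptr, 0);
   size_t per_chunk   = strtoul(argv[3], nullptr, 0);
   if (record_size == 0 || per_chunk == 0) {
      fprintf(stderr, "record_size and records_per_chunk must be > 0\n");
      return 1;
   }

   FILE* in = fopen(argv[1], "rb");
   if (!in) { perror("fopen"); return 1; }

   std::vector<char> buf(record_size);
   for (int chunk = 0; ; chunk++) {
      char name[256];
      snprintf(name, sizeof(name), "%s.part%03d", argv[1], chunk);
      FILE* out = nullptr;
      size_t n;
      for (n = 0; n < per_chunk; n++) {
         // copy one record at a time; stop at end of input
         if (fread(buf.data(), record_size, 1, in) != 1) break;
         if (!out) out = fopen(name, "wb");
         fwrite(buf.data(), record_size, 1, out);
      }
      if (out) fclose(out);
      if (n < per_chunk) break; // short chunk: end of input reached
   }
   fclose(in);
   return 0;
}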

Why use mmap()? I note that the current code does one read() syscall per history record (it is much better to 
read data in bigger chunks) and does multiple seek()/read() syscalls to find the right place in the history 
file (this plays silly buggers with the OS read-ahead and data caching). mmap() eliminates all of these 
syscalls and has the potential to speed things up quite a bit.
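
A minimal sketch of what mmap()-based reading could look like, assuming fixed-size records and a hypothetical 
scan_history_file() helper; this is not the actual MIDAS history code:

#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int scan_history_file(const char* path, size_t record_size)
{
   int fd = open(path, O_RDONLY);
   if (fd < 0) return -1;

   struct stat st;
   if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return -1; }

   // map the whole file read-only; the kernel pages data in on demand,
   // so no per-record read()/lseek() syscalls are needed
   void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
   close(fd); // the mapping stays valid after close()
   if (base == MAP_FAILED) return -1;

   const uint8_t* p = static_cast<const uint8_t*>(base);
   size_t nrec = st.st_size / record_size;
   for (size_t i = 0; i < nrec; i++) {
      const uint8_t* rec = p + i * record_size;
      // ... decode timestamp and data from rec ...
      (void)rec;
   }

   munmap(base, st.st_size);
   return (int)nrec;
}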

K.O.