Midas DAQ System
  2589   17 Aug 2023   Stefan Ritt   Bug Report   midas wants to show notification?
> > This feature was asked by some people ...
> 
> "show notifications" popups are strongly associated with disreputable web sites (presumably to 
> push spam), it was surprising to see it from midas.
> 
> K.O.

I agree. But unlike emails (where you get lots of spam as well), you can nicely blacklist/whitelist 
desktop notifications. I suppress all of them except the one for MIDAS. This allows me to watch our 
experiment without staring at the web page all the time.

The main question here is maybe whether desktop notifications should be on or off by default (for a 
fresh browser). While you can always change that via the mhttpd "Config" page, the default value is 
chosen by the system. I thought I'd put it to "on" so people can experience it, and then turn it off 
if they don't like it. With them off by default, most people would never notice this possibility. 
But I'm open to a discussion here.

Stefan
  2588   16 Aug 2023   Stefan Ritt   Bug Report   Error accessing history files
Tonight we got another error of that type after the update:

04:17 - [mhttpd,ERROR] [history_schema.cxx:2913:FileHistory::read_data,ERROR] Cannot read 
'/data2/history/mhf_1692128214_20230815_gassystem.dat', read() errno 2 (No such file or directory)

This morning I looked at the file, and it was there:

[meg@megon02 history]$ ls -alg mhf_1692128214_20230815_gassystem.dat
-rw-rw-r--. 1 meg 4663228 Aug 17 08:50 mhf_1692128214_20230815_gassystem.dat
[meg@megon02 history]$


Stefan
  2587   16 Aug 2023   Konstantin Olchanski   Bug Report   excessive logging of http requests
> > Our default configuration of apache httpd logs every request.
> > MIDAS custom web pages can easily make a huge number of RPC calls creating a 
> > huge log file and filling system disk to 100% capacity.

added "daily" to /etc/logrotate.d/httpd, default was "weekly", not often enough.

K.O.
  2586   16 Aug 2023   Konstantin Olchanski   Bug Report   midas wants to show notification?
> This feature was asked by some people ...

"show notifications" popups are strongly associated with disreputable web sites (presumably to 
push spam), it was surprising to see it from midas.

K.O.
  2585   16 Aug 2023   Stefan Ritt   Bug Report   midas wants to show notification?
> > I started to get web browser popups about "midas wants to show notifications, 
> > block/allow/x". is this a glitch or a new unannounced/undocumented feature? 
> > google chrome on macos. K.O.
> 
> https://bitbucket.org/tmidas/midas/commits/e101dea764c647211c560a68db7ecda1834198db
> 
> I did not consider this a significant feature to be announced here. Just a few lines 
> of code. You can turn it on/off via the "Config" web page.
> 
> Stefan

Now that I looked at it again, I realized that the config check boxes had a bug. I fixed that, 
and now the disable should work correctly.

This feature was requested by some people who monitor an experiment with the browser window 
in the background and the sound off (large office). So desktop notifications are a good 
thing for them.

Stefan
  2584   16 Aug 2023   Stefan Ritt   Bug Report   midas wants to show notification?
> I started to get web browser popups about "midas wants to show notifications, 
> block/allow/x". is this a glitch or a new unannounced/undocumented feature? 
> google chrome on macos. K.O.

https://bitbucket.org/tmidas/midas/commits/e101dea764c647211c560a68db7ecda1834198db

I did not consider this a significant feature to be announced here. Just a few lines 
of code. You can turn it on/off via the "Config" web page.

Stefan
  2583   16 Aug 2023   Konstantin Olchanski   Bug Report   midas wants to show notification?
I started to get web browser popups about "midas wants to show notifications, 
block/allow/x". is this a glitch or a new unannounced/undocumented feature? 
google chrome on macos. K.O.
  2582   15 Aug 2023   Konstantin Olchanski   Info   mlogger update
A bit of an update to the mlogger, in preparation for more cleanup when Stefan is 
here at TRIUMF.

1) fixed overwriting of existing files when the run number is reset (the check for 
existing files was missing in the LZ4, BZ2 & co data path)
2) made output files read-only (midas, json and checksum files)
3) commented out the old code paths

Currently active per-channel ODB settings:

Active - enable or disable mlogger channel
Type - NOT USED
Filename - output filename template; the %d specifiers are replaced by the run 
number and subrun number; for PIPE output this holds the pipe command
Format - NOT USED
Compression - NOT USED
ODB dump - enable/disable writing ODB dump to data file
ODB dump format - "json" is recommended for new experiments
Log messages - write log messages to output file, 0=off, -1=write all messages
Buffer - "SYSTEM" read events from this event buffer
EventID - "-1" for all events
Trigger Mask - "-1" for all events
Event Limit - stop the run after this many events
Byte Limit - stop the run after this many bytes
Subrun Byte limit - switch to the next subrun file after writing this many bytes. 
The actual file size is larger than the subrun byte limit because of ODB dumps.
Tape Capacity - NOT USED
Subdir Format - if not empty, the output file name is DIR/SUBDIR/FILENAME; "%" 
format specifiers are expanded by strftime().
Current Filename - updated by mlogger, contains the currently written file name
Data checksum - checksum before compression, use CRC32C for maximum speed, 
SHA512 for maximum security.
File checksum - checksum after compression, CRC32C is good against accidental 
file corruption, SHA512 is cryptographically strong, good against purposeful 
tampering.
Compress - use "lz4" for maximum speed, bzip2 or pbzip2 for maximum compression. 
No compression and gzip are not recommended. (ZFS may apply lz4 compression to 
uncompressed data.)
Output - "NULL" do not write anything, "FILE" write to disk, "FTP" write to FTP 
server, "ROOT" write via the mlogger ROOT writer (docs?), "PIPE" pipe data 
through an external command (i.e. for bzip2 compression).
Gzip compression - gzip compression flags (see gzip docs, 1=max speed, 9=max 
compression)
Bzip2 compression - if non-zero, bzip2 compression level (see "bzip2 -h", 1=max 
speed, 9=max compression)
Pbzip2 num cpu - number of CPUs used by parallel bzip2 compression, pbzip2 -p 
flag
Pbzip2 compression - if non-zero, pbzip2 compression level (see "pbzip2 -h", 
default is 9=max compression)
Pbzip2 options - any additional pbzip2 options, e.g. -l, -m, -p, etc.
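
For example, these per-channel settings live in the ODB and can be changed from the shell with odbedit; a sketch, assuming the usual channel path "/Logger/Channels/0/Settings" and a typical filename template:

odbedit -c 'set "/Logger/Channels/0/Settings/Compress" "lz4"'
odbedit -c 'set "/Logger/Channels/0/Settings/Data checksum" "CRC32C"'
odbedit -c 'set "/Logger/Channels/0/Settings/Filename" "run%05d.mid.lz4"'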

Currently active /Logger options:

Data Dir - where to write all output files, if empty, cm_get_path() is used.
Message file date format - not used in mlogger
Message dir - not used in mlogger
Write data - if set to "no", the midas file, runlog, etc. will not be written.
ODB Dump - at run stop, save odb to disk
ODB Dump File - file name for "ODB Dump" save file. "%d" is replaced by run 
number. "json" format is recommended for new experiments.
ODB Last Dump File - at run start, save ODB to disk. "json" format is 
recommended for new experiments.
Auto restart - a run stopped by the time limit or event limit is automatically 
restarted
Auto restart delay - wait this many seconds before restarting the run
Tape message - NOT USED
Run duration - stop the run after this many seconds
Next subrun - change from "no" to "yes" to force mlogger to open a new subrun 
file (should this be per-channel?)
Subrun duration - open new subrun file after so many seconds (should this be 
per-channel?)
History dir - not used in mlogger
Detached transition - "no": use the normal multithreaded transitions 
(recommended); "yes": use the mtransition helper to stop and restart runs. This 
sometimes fails because mtransition is not in the user's $PATH or the wrong 
version of mtransition is in the user's $PATH.
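
Likewise, a sketch of setting a few of these /Logger options with odbedit (the values are illustrative):

odbedit -c 'set "/Logger/Run duration" 3600'
odbedit -c 'set "/Logger/Auto restart" y'
odbedit -c 'set "/Logger/ODB Last Dump File" "last.json"'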

K.O.
  2581   14 Aug 2023   Konstantin Olchanski   Bug Report   excessive logging of http requests
> Our default configuration of apache httpd logs every request.
> MIDAS custom web pages can easily make a huge number of RPC calls creating a 
> huge log file and filling system disk to 100% capacity.

close but no cigar. mhttpd was not running, and /var/log still got filled to 100% capacity, this time by httpd 
error messages. I do not see any apache facility to filter error messages, hmm...

-rw-r--r-- 1 root root 1864421376 Aug 14 12:53 ssl_error_log

[Sun Aug 13 23:53:12.416247 2023] [proxy:error] [pid 18608] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.416538 2023] [proxy:error] [pid 19686] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.416603 2023] [proxy:error] [pid 19681] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.416775 2023] [proxy:error] [pid 19588] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.417022 2023] [proxy:error] [pid 19311] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.421864 2023] [proxy:error] [pid 18620] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.422051 2023] [proxy:error] [pid 19693] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.422199 2023] [proxy:error] [pid 19673] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.422222 2023] [proxy:error] [pid 18608] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.422230 2023] [proxy:error] [pid 19657] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.422259 2023] [proxy:error] [pid 18633] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.427513 2023] [proxy:error] [pid 19686] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.427549 2023] [proxy:error] [pid 19681] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.427645 2023] [proxy:error] [pid 19588] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.427774 2023] [proxy:error] [pid 19693] AH00940: HTTP: disabled connection for (localhost)
[Sun Aug 13 23:53:12.427800 2023] [proxy:error] [pid 18620] AH00940: HTTP: disabled connection for (localhost)

K.O.
  2580   09 Aug 2023   Konstantin Olchanski   Bug Fix   Stefan's improved ODB flush to disk
This is an important improvement and should have a post of its own. K.O.

> > > RFE filed:
> > > https://bitbucket.org/tmidas/midas/issues/367/odb-should-be-saved-to-disk-periodically
> > 
> > Implemented and closed: https://bitbucket.org/tmidas/midas/issues/367/odb-should-be-saved-to-disk-periodically
> > 
> > Stefan
> 
> Stefan's comments from the closed bug report:
> 
> Ok I implemented some periodic flushing. Here is what I did:
> 
> Created
> 
> /System/Flush/Flush period : TID_UINT32
> /System/Flush/Last flush : TID_UINT32
> 
> which control the flushing to disk. The default value for "Flush period" is 60 seconds or one minute.
> 
> All clients call db_flush_database() through their cm_yield() function.
> db_flush_database() checks the "Last flush" and only flushes the ODB when the period has expired. This test is done inside the ODB semaphore so that we don't get a race condition.
> If the period has expired, db_flush_database() calls ss_shm_flush().
> ss_shm_flush() tries to allocate a buffer the size of the shared memory. If the allocation is not successful (out of memory), ss_shm_flush() writes directly to the binary file as before.
> If the allocation is successful, ss_shm_flush() copies the shared memory to the buffer and passes this buffer to a dedicated thread which writes the buffer to the binary file. This causes ss_shm_flush() to return immediately and not block the calling program during the disk write operation.
> Added back the "if (destroy_flag) ss_shm_flush()" so that the ODB is flushed for sure before the shared memory gets deleted.
> This means that under normal circumstances, exiting programs like odbedit do NOT flush the ODB. This allows calling many "odbedit -c" in a row without the flush penalty. Nevertheless, the ODB then gets flushed by the other clients at the latest 60 seconds (or whatever the flush period is) after odbedit exits.
> 
> Please note that ODB flushing has two purposes:
> 
> When all programs exit, we need persistent storage for the ODB. In most experiments this only happens very seldom, maybe at the end of a beam time period.
> If the computer crashes, a recent version of the ODB is kept on disk to simplify recovery after the crash.
> 
> Since crashes are not so frequent (during production periods we have maybe one hardware failure every few years), flushing the ODB too often does not make sense and just consumes resources. Flushing also does not help with corrupted ODBs, since the binary image will get corrupted as well. So the only reason for periodic flushes is to ease recovery after a total crash. I put the default to 60 seconds, but if people are really paranoid they can decrease it to 10 seconds or so. Or increase it to 600 seconds if their system does not crash every week and disks are slow.
> 
> I made a dedicated branch feature/periodic_odb_flush so people can test the new functionality. If there are no complaints within the next few days, I will merge that into develop.
> 
> Stefan
  2578   09 Aug 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > RFE filed:
> > https://bitbucket.org/tmidas/midas/issues/367/odb-should-be-saved-to-disk-periodically
> 
> Implemented and closed: https://bitbucket.org/tmidas/midas/issues/367/odb-should-be-saved-to-disk-periodically
> 
> Stefan

Stefan's comments from the closed bug report:

Ok I implemented some periodic flushing. Here is what I did:

Created

/System/Flush/Flush period : TID_UINT32
/System/Flush/Last flush : TID_UINT32

which control the flushing to disk. The default value for “Flush period” is 60 seconds or one minute.

All clients call db_flush_database() through their cm_yield() function.
db_flush_database() checks the "Last flush" and only flushes the ODB when the period has expired. This test is 
done inside the ODB semaphore so that we don't get a race condition.
If the period has expired, db_flush_database() calls ss_shm_flush().
ss_shm_flush() tries to allocate a buffer the size of the shared memory. If the allocation is not successful (out 
of memory), ss_shm_flush() writes directly to the binary file as before.
If the allocation is successful, ss_shm_flush() copies the shared memory to the buffer and passes this buffer to a 
dedicated thread which writes the buffer to the binary file. This causes ss_shm_flush() to return immediately and 
not block the calling program during the disk write operation.
Added back the "if (destroy_flag) ss_shm_flush()" so that the ODB is flushed for sure before the shared memory 
gets deleted.
This means that under normal circumstances, exiting programs like odbedit do NOT flush the ODB. This allows 
calling many "odbedit -c" in a row without the flush penalty. Nevertheless, the ODB then gets flushed by the other 
clients at the latest 60 seconds (or whatever the flush period is) after odbedit exits.
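
A minimal sketch of the period check described above (names and types assumed for illustration, not the actual midas source):

#include <ctime>
#include <cstdint>

struct FlushState {
   uint32_t flush_period = 60; // /System/Flush/Flush period, in seconds
   uint32_t last_flush   = 0;  // /System/Flush/Last flush, unix time
};

// called from cm_yield(); the real test runs inside the ODB semaphore,
// so two clients cannot both decide to flush (no race condition)
bool flush_due(FlushState& s)
{
   uint32_t now = (uint32_t) std::time(nullptr);
   if (now - s.last_flush < s.flush_period)
      return false;            // period not expired, nothing to do
   s.last_flush = now;
   return true;                // caller now invokes ss_shm_flush()
}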

Please note that ODB flushing has two purposes:

When all programs exit, we need persistent storage for the ODB. In most experiments this only happens very 
seldom, maybe at the end of a beam time period.
If the computer crashes, a recent version of the ODB is kept on disk to simplify recovery after the crash.

Since crashes are not so frequent (during production periods we have maybe one hardware failure every few years), 
flushing the ODB too often does not make sense and just consumes resources. Flushing also does not help with 
corrupted ODBs, since the binary image will get corrupted as well. So the only reason for periodic flushes is to 
ease recovery after a total crash. I put the default to 60 seconds, but if people are really paranoid they can 
decrease it to 10 seconds or so. Or increase it to 600 seconds if their system does not crash every week and disks 
are slow.

I made a dedicated branch feature/periodic_odb_flush so people can test the new functionality. If there are no 
complaints within the next few days, I will merge that into develop.

Stefan
  2577   09 Aug 2023   Konstantin Olchanski   Bug Report   Error accessing history files
I confirm I see the same on the agmini system. Two problems: (a) the error message is wrong, 
it's a short read, not a read error (clue: the read() syscall does not return "no such 
file"). (b) mlogger is supposed to write history in record-size blocks and read it in the 
same record-size blocks. UNIX file semantics require that both reader and writer see read() 
and write() as atomic, even on NFS, so mhttpd should never see partially written history 
records. I can debug this on the agmini system. Probably should.
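
To make the distinction concrete, here is a minimal sketch of the two cases (illustrative only, not the actual history_schema.cxx code):

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <unistd.h>

// read one fixed-size history record and classify the outcome
ssize_t read_record(int fd, void* buf, size_t record_size)
{
   ssize_t rd = read(fd, buf, record_size);
   if (rd < 0) {
      // a true read error: errno is meaningful here
      fprintf(stderr, "read() error, errno %d (%s)\n", errno, strerror(errno));
   } else if ((size_t) rd < record_size) {
      // a short read: end of file or a partially written record;
      // errno is stale here and must not be reported
      fprintf(stderr, "short read: got %zd of %zu bytes\n", rd, record_size);
   }
   return rd;
}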

Problem (a) is fixed in commit bb423c8680cc67220312534403840442868f2b3b. If you update, you 
should see error messages about a "short read"; the read sizes it reports are very 
interesting, please put them in the elog here.

K.O.


> We sporadically (like once per few hours) have an error message when we access the 
> history plots through mhttpd:
> 
> 07:21:35.109 2023/08/03 [mhttpd,ERROR] 
> [history_schema.cxx:2345:FileHistory::read_data,ERROR] Cannot read 
> '/data2/history/mhf_1690890685_20230801_dc_hv.dat', read() errno 2 (No such file 
> or directory)
> 
> When I log in to the machine, I properly see the file and also can access it
> 
> [meg@megon02 history]$ ls -l mhf_1690890685_20230801_dc_hv.dat
> -rw-rw-r--. 1 meg meg 34176312 Aug  3 07:23 mhf_1690890685_20230801_dc_hv.dat
> 
> and I also can dump that file. 
> 
> When I try again with mhttpd, I properly see that file. 
> 
> Now in principle this is not a problem, but the error message is annoying, since this 
> is the only error we get in 24 hours. I attached a 24h log to see what I mean. If this 
> is an OS issue, I wonder if we should add code to retry the file access in case we get 
> that error.
> 
> Anybody seen a similar thing?
> 
> Best,
> Stefan
  2576   09 Aug 2023   Konstantin Olchanski   Forum   pull request for PostgreSQL support
> The compilation of midas was broken by the last modification. The reason is that 
>    Pgsql *fPgsql = NULL;
> was not protected by #ifdef HAVE_PGSQL

confirmed, my mistake, I forgot to test with "make cmake NO_PGSQL". your fix is correct, thanks.
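
For the record, the shape of the fix as described above (the member declaration is shown schematically):

#ifdef HAVE_PGSQL
   Pgsql *fPgsql = NULL;
#endif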

K.O.
  2575   04 Aug 2023   Konstantin Olchanski   Forum   Issues with Universe II Driver
> I can compile 32 bit midas. Unless I am misinterpreting the linking error, I don't 
> think I can use the driver as built.

I think you are right, Makefile from the Universe package does not build a -m32 version 
of libvme.so. I think I can fix that...

K.O.
  2574   04 Aug 2023   Caleb Marshall   Forum   Issues with Universe II Driver
I can compile 32 bit midas. Unless I am misinterpreting the linking error, I don't 
think I can use the driver as built.

While trying to compile vme_scan, most of the programs fail with:

/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../../lib/libvme.so when searching for -lvme
/usr/bin/ld: skipping incompatible /lib/../lib/libvme.so when searching for -lvme
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libvme.so when searching for -lvme
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../libvme.so when searching for -lvme
/usr/bin/ld: skipping incompatible //lib/libvme.so when searching for -lvme
/usr/bin/ld: skipping incompatible //usr/lib/libvme.so when searching for -lvme

with libvme.so being built by the universe-II driver. I am not sure I can get around 
this without messing with the driver. Is it possible to build a 32-bit version of 
that shared library without having to touch the actual kernel module?

-Caleb
  2573   03 Aug 2023   Konstantin Olchanski   Bug Report   excessive logging of http requests
> > > Our default configuration of apache httpd logs every request. MIDAS custom web pages can easily make a huge number of RPC calls creating a 
> > > huge log file and filling system disk to 100% capacity
> > perhaps use existing logrotate, add limit on file size (size) and limit of 2 old log files (rotate).
>  
> CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" env=!envnolog 
> 

TransferLog is not conditional and has to be commented out to stop logging every jsonrpc request.
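
Concretely, this means commenting out a line like the following in the httpd ssl config (the exact file and log name are distribution-dependent, shown here as an assumption):

# TransferLog logs/ssl_access_log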

K.O.
  2572   03 Aug 2023   Caleb Marshall   Forum   Issues with Universe II Driver
I am looking into compiling the 32 bit midas.

In the meantime, here is the kernel info:

3.10.0-1160.11.1.el7.x86_64 

Thank you for the help.
-Caleb
  2571   03 Aug 2023   Konstantin Olchanski   Forum   Issues with Universe II Driver
> Here is the output:
> 
> vmic_mmap: Mapped VME AM 0x0d addr 0x00000000 size 0x00ffffff at address 0x80a01000
> mvme_open:
> Bus handle              = 0x3
> DMA handle              = 0x158f5d0
> DMA area size           = 1048576 bytes
> DMA    physical address = 0x7f91db553000
> vmic_mmap: Mapped VME AM 0x2d addr 0x00000000 size 0x0000ffff at address 0x86ff0000
> vme addr: 00000000 
> addr: db543000 

I see the problem. A24 is mapped at 0x80xxxxxx, A16 is mapped at 0x86ffxxxx, but 
mvme_read computed address 0xdb543000, out of range of either mapped vme address. ouch.

One more thing to check: AFAIK, these universe-II codes were never used on a 64-bit CPU 
before, we only have 32-bit Pentium-3 and Pentium-4 machines with these chips. The 
tsi148 codes used to work both 32-bit and 64-bit, we used to have both flavours of 
CPUs, but now only have 64-bit.

What is your output for "uname -a"? does it report 32-bit or 64-bit kernel?

If you feel adventurous, you can build 32-bit midas (cd .../midas; make linux32), 
compile vmescan.o with "-m32", link vmescan.exe against .../midas/linux-m32/lib, and 
see if that works. Meanwhile, I can check if vmicvme.c is 64-bit clean. Checking if the 
kernel module is 64-bit clean would be more difficult...
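
For context, the classic way such code stops being 64-bit clean is a pointer squeezed through a 32-bit integer; a minimal illustration (not taken from vmicvme.c):

#include <cstdint>
#include <cstdio>

int main()
{
   char window[16];
   char* mapped = window;  // stands in for an mmap()ed VME window
   // on a 32-bit CPU this round-trip is lossless; on a 64-bit CPU
   // the high 32 bits of the address are lost
   unsigned int a32 = (unsigned int)(uintptr_t) mapped;
   char* back = (char*)(uintptr_t) a32;
   printf("original %p, after 32-bit round-trip %p\n", (void*) mapped, (void*) back);
   return 0;
}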

K.O.
  2570   03 Aug 2023   Caleb Marshall   Forum   Issues with Universe II Driver
Here is the output:

vmic_mmap: Mapped VME AM 0x0d addr 0x00000000 size 0x00ffffff at address 0x80a01000
mvme_open:
Bus handle              = 0x3
DMA handle              = 0x158f5d0
DMA area size           = 1048576 bytes
DMA    physical address = 0x7f91db553000
vmic_mmap: Mapped VME AM 0x2d addr 0x00000000 size 0x0000ffff at address 0x86ff0000
vme addr: 00000000 
addr: db543000 
  2569   02 Aug 2023   Stefan Ritt   Bug Report   Error accessing history files
We sporadically (like once per few hours) have an error message when we access the 
history plots through mhttpd:

07:21:35.109 2023/08/03 [mhttpd,ERROR] 
[history_schema.cxx:2345:FileHistory::read_data,ERROR] Cannot read 
'/data2/history/mhf_1690890685_20230801_dc_hv.dat', read() errno 2 (No such file 
or directory)

When I log in to the machine, I properly see the file and also can access it

[meg@megon02 history]$ ls -l mhf_1690890685_20230801_dc_hv.dat
-rw-rw-r--. 1 meg meg 34176312 Aug  3 07:23 mhf_1690890685_20230801_dc_hv.dat

and I also can dump that file. 

When I try again with mhttpd, I properly see that file. 

Now in principle this is not a problem, but the error message is annoying, since this 
is the only error we get in 24 hours. I attached a 24h log to see what I mean. If this 
is an OS issue, I wonder if we should add code to retry the file access in case we get 
that error.

Anybody seen a similar thing?

Best,
Stefan
Attachment 1: log.txt
07:22:54.488 2023/08/03 [Sequencer,INFO] Run #536882 started
07:22:50.710 2023/08/03 [Sequencer,INFO] Run #536881 stopped
07:21:35.109 2023/08/03 [mhttpd,ERROR] [history_schema.cxx:2345:FileHistory::read_data,ERROR] Cannot read '/data2/history/mhf_1690890685_20230801_dc_hv.dat', read() errno 2 (No such file or directory)
07:16:44.351 2023/08/03 [Sequencer,INFO] Run #536881 started
[... repeated Sequencer run start/stop messages for runs #536811 through #536881 ...]
23:55:12.073 2023/08/02 [Sequencer,INFO] Run #536811 started
23:55:08.493 2023/08/02 [Sequencer,INFO] Run #536810 stopped
23:53:35.294 2023/08/02 [mhttpd,ERROR] [history_schema.cxx:2345:FileHistory::read_data,ERROR] Cannot read '/data2/history/mhf_1690890685_20230801_dc_hv.dat', read() errno 2 (No such file or directory)
23:48:55.498 2023/08/02 [Sequencer,INFO] Run #536810 started
[... repeated Sequencer run start/stop messages down to Run #536766 ...]