  2482   27 Apr 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
my vote is to bump the ODB size limit to 1999*1000*1000 (not quite 2 GB). but this needs to be tested, especially save and restore of ODB, XML and JSON files, including how long it takes to save and load a 1.9 GB ODB. K.O.
  2488   28 Apr 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > Congratulations. created != "it works".
> 
> Two other things to consider:
> 
> 1) The ODB shared memory is dumped into a binary file (".ODB.SHM") after the last client finished and read if the first client starts, to get it persistent. 
> So this could slow down starting and stopping, but only the first client, so I guess it's not an issue.
>

typical disk writing speed is 100-1000 Mbytes/sec, so writing 1 GB .ODB.SHM will take 1-10 seconds. NFS over 1gige network is 100 Mbytes/sec, so 10 seconds to 
write .ODB.SHM. embedded ARM write speed to SD flash can be as low as 10 Mbytes/sec, so up to 100 seconds.

> 
> 2) Traditionally, the ODB gets dumped to the .mid file at the beginning and end of every run, so that one knows the exact configuration of the experiment
> for offline analysis. This can be turned off of course, but most experiments use it. If the ODB is dumped in any ASCII format, this can take quite long.
> Assume it takes 10 seconds at the beginning of each run, and we take a run every five minutes. Then we lose 48 mins of precious beam time every day.
> 

the new default is to save the ODB dump as JSON; (as of my last measurement) the JSON encoder is faster than the XML (and ODB?) encoder. by default the result is compressed with GZIP-1 (66 
Mbytes/sec was my old benchmark, should be remeasured on new DDR5 machines), and the compressed JSON is written to the .mid.gz file at disk speed (as above). alternatively, use LZ4 
compression: it runs roughly at memcpy() speed, compresses less, and is written to .mid.lz4 at disk speed.

if data storage is ZFS, the ZFS built-in LZ4 compression is now enabled by default, so writing an uncompressed .mid file (no compression of the ODB dump) should give 
roughly the same result as using MIDAS LZ4 compression and writing .mid.lz4.

bottom line, I need to remeasure gzip and lz4 compression speeds on new computers (DDR4 AMD 5000 series and DDR5 AMD 7000 series).
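
for reference, this kind of measurement is easy to reproduce standalone; below is a rough benchmark sketch using zlib and lz4 directly (the buffer size and contents are arbitrary placeholders, not a real ODB dump, and real data will compress differently):

// rough speed benchmark sketch: zlib level 1 (similar to GZIP-1) vs LZ4
// compile with: g++ -O2 bench.cxx -lz -llz4
#include <zlib.h>
#include <lz4.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
   const size_t N = 100*1024*1024;               // 100 Mbyte test buffer (placeholder data)
   std::vector<char> src(N, 'x');
   for (size_t i = 0; i < N; i += 7) src[i] = char(i);

   std::vector<char> zdst(compressBound(N));
   auto t0 = std::chrono::steady_clock::now();
   uLongf zlen = zdst.size();
   compress2((Bytef*)zdst.data(), &zlen, (const Bytef*)src.data(), N, 1);   // level 1
   auto t1 = std::chrono::steady_clock::now();

   std::vector<char> ldst(LZ4_compressBound((int)N));
   int llen = LZ4_compress_default(src.data(), ldst.data(), (int)N, (int)ldst.size());
   auto t2 = std::chrono::steady_clock::now();

   double sz = std::chrono::duration<double>(t1-t0).count();
   double sl = std::chrono::duration<double>(t2-t1).count();
   printf("zlib-1: %.0f Mbytes/sec (%lu bytes out), lz4: %.0f Mbytes/sec (%d bytes out)\n",
          N/1e6/sz, (unsigned long)zlen, N/1e6/sl, llen);
   return 0;
}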

K.O.
  2489   28 Apr 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > Is this maybe related to what Stefan said about the run start - so that odbedit needs some time to load the bigger ODB?
> 
> At the run start mlogger writes the ODB to the .mid file. This needs conversion (binary ODB -> XML ASCII) which can take time.
> This does NOT depend on the ODB size, but on the ODB *content*.
>

Yes and no. They must be storing more than 100 Mbytes of stuff in ODB, if they are asking to bump the ODB size from 100 Mbytes to 2 GBytes.

On the MIDAS side, though, we have to plan for the worst case: if the max ODB size is 1.9 GB and it is full of data,
and mlogger (and odbedit save and load) take 10-30 seconds, then at least all timeouts (watchdog timeout, RPC timeout, etc.)
must be increased accordingly.
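
as an illustration, the per-client watchdog timeout can be raised from the client code; a minimal sketch, assuming the standard cm_set_watchdog_params() call (the 60-second value is arbitrary, and the RPC timeout would need a similar adjustment):

#include "midas.h"

int main() {
   cm_connect_experiment("", "", "slow_odb_client", NULL);
   // raise this client's watchdog timeout so that a multi-second ODB
   // save/load does not get the client declared dead (value is arbitrary)
   cm_set_watchdog_params(TRUE, 60000 /* ms */);
   // ... long ODB operations ...
   cm_disconnect_experiment();
   return 0;
}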

K.O.
  2490   28 Apr 2023   Konstantin Olchanski   Forum   Problem with running midas odbxx frontends on a remote machine using the -h option
> As i said we can easily reproduce this with midas/examples/odbxx/odbxx_test.cpp  (with cm_connect_experiment changed to "localhost")
> [test,ERROR] [system.cxx:5104:recv_tcp2,ERROR] unexpected connection closure
> [test,ERROR] [system.cxx:5158:ss_recv_net_command,ERROR] error receiving network command header, see messages
> [test,ERROR] [midas.cxx:13900:rpc_call,ERROR] routine "db_copy_xml": error, ss_recv_net_command() status 411, program abort

ok, cool. looks like we crashed the mserver. either run the mserver attached to gdb or enable mserver core dumps, we need its stack trace,
the correct stack trace should be rooted in the handler for db_copy_xml.

but most likely odbxx is asking for more data than can be returned through the MIDAS RPC.

what is the ODB key passed to db_copy_xml() and how much data is in ODB at that key? (odbedit "du", right?).

K.O.
  2492   28 Apr 2023   Konstantin Olchanski   Forum   Problem with running midas odbxx frontends on a remote machine using the -h option
> > > As i said we can easily reproduce this with midas/examples/odbxx/odbxx_test.cpp
> > ok, cool. looks like we crashed the mserver.
> Ok. Maybe i have to make this more clear. ANY odbxx access of a remote odb reproduces this error for us on multiple machines. 
> It does not matter how much data odbxx is asking for.
> midas commit 9d2ef471 the code from above runs fine

so, a regression. ouch.

if core dumps are turned off, you will not "see" the mserver crash, because the main mserver is still running. it's the mserver forked to 
serve your RPC connection that crashes.

> int main() {
>    cm_connect_experiment("localhost", "Mu3e", "test", NULL);
>    midas::odb o = {
>       {"Int32 Key", 42}
>    };
>    o.connect("/Test/Settings");
>    cm_disconnect_experiment();
>    return 1;
> }

to debug this, after cm_connect_experiment() one has to put ::sleep(1000000000); (not that big, obviously),
then while it is sleeping do "ps -efw | grep mserver", which will show the mserver for the test program,
connect to it with gdb, wait for ::sleep() to finish and o.connect() to crash; with luck gdb will show
the crash stack trace in the mserver.
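
in other words, something like this (a sketch only; the sleep length is arbitrary and the includes are the usual ones from the odbxx examples):

#include <unistd.h>
#include "midas.h"
#include "odbxx.h"

int main() {
   cm_connect_experiment("localhost", "Mu3e", "test", NULL);
   ::sleep(60);   // window to find the forked mserver with "ps -efw | grep mserver" and attach gdb
   midas::odb o = {
      {"Int32 Key", 42}
   };
   o.connect("/Test/Settings");   // expected to crash the mserver side
   cm_disconnect_experiment();
   return 1;
}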

so easy to debug? this is why back in the 1970s clever people invented core dumps, only to have
even more clever people in the 2020s turn them off and generally make debugging more difficult (attaching
gdb to a running program is also disabled-by-default in some recent linuxes).

rant-off.

to check if core dumps work, do "killall -7 mserver". to enable core dumps on ubuntu, see here:
https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu

last known-working point is:

commit 9d2ef471c4e4a5a325413e972862424549fa1ed5
Author: Ben Smith <bsmith@triumf.ca>
Date:   Wed Jul 13 14:45:28 2022 -0700

    Allow odbxx to handle connecting to "/" (avoid trying to read subkeys as "//Equipment" etc.

K.O.
  2515   12 May 2023   Konstantin Olchanski   Info   New environment variable MIDAS_EXPNAME
> A new environment variable MIDAS_EXPNAME has been introduced [to be used together with MIDAS_DIR]

This fixes an important buglet. If an experiment uses MIDAS_DIR instead of exptab, at the time 
of connecting to ODB we do not know the experiment name and use the name "Default" to create the ODB 
shared memory, instead of the actual experiment name.

This creates an inconsistency: if some MIDAS programs in the same experiment use MIDAS_DIR while 
others use exptab (this would be unusual, but not impossible), they would connect to two 
different ODB shared memories, the former using the name "Default", the latter using the actual experiment name.

As an indication that something is not right, when stopping MIDAS programs there is an error 
message about failure to delete the ODB shared memory (because it was created using the name 
"Default" and deleting it using the correct experiment name fails).

Also it can cause co-mingling between two different experiments, depending on the type of shared 
memory used by MIDAS (see $MIDAS_DIR/.SHM_TYPE.TXT):

POSIX   - (usually not used) not affected (experiment name is not used)
POSIXv2 - (usually not used) affected (shm name is "$EXP_NAME_ODB")
POSIXv3 - used on MacOS - affected (shm name is "$UID_$EXP_NAME_ODB", so "$UID_Default_ODB" will collide)
POSIXv4 - used on Linux - not affected (shm name includes $MIDAS_DIR, which is different for different experiments)

K.O.
  2516   16 May 2023   Konstantin Olchanski   Bug Report   excessive logging of http requests
Our default configuration of apache httpd logs every request. MIDAS custom web pages can easily make a huge number of RPC calls, creating a 
huge log file and filling the system disk to 100% capacity.

this example has around 100 RPC requests per second. reasonable/unreasonable? the available hardware can handle it (web browser, network, 
httpd, mhttpd, etc), so we should try to get this to work. perhaps filter the apache httpd logs to exclude mjsonrpc requests? of course we 
can also ask the affected experiment why they make so many RPC calls - is there a bug?

[14/May/2023:03:49:01 -0700] 142.90.111.176 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 "POST /?mjsonrpc HTTP/1.1" 299
[14/May/2023:03:49:01 -0700] 142.90.111.176 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 "POST /?mjsonrpc HTTP/1.1" 299
[14/May/2023:03:49:01 -0700] 142.90.111.176 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 "POST /?mjsonrpc HTTP/1.1" 299
[14/May/2023:03:49:01 -0700] 142.90.111.176 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 "POST /?mjsonrpc HTTP/1.1" 299
[14/May/2023:03:49:01 -0700] 142.90.111.176 TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 "POST /?mjsonrpc HTTP/1.1" 299

K.O.
  2517   16 May 2023   Konstantin Olchanski   Bug Report   excessive logging of http requests
> Our default configuration of apache httpd logs every request. MIDAS custom web pages can easily make a huge number of RPC calls creating a 
> huge log file and filling system disk to 100% capacity

perhaps use the existing logrotate: add a limit on file size (size) and keep only 2 old log files (rotate).

/etc/logrotate.d/httpd

/var/log/httpd/*log { 
    size 100M 
    rotate 2 
    missingok 
    notifempty 
    sharedscripts 
    delaycompress 
    postrotate 
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true 
    endscript 
} 

K.O.
  2525   09 Jun 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > 1) The ODB shared memory is dumped into a binary file (".ODB.SHM") after the last client finished ...

correction: ODB shared memory is saved to .ODB.SHM each time a client stops; this happens in db_close_database().

I have just run into a problem with this in the DRAGON experiment. At the begin and end of run they run
a script that does a large number of "odbedit" calls to read stuff from ODB, and it was taking a very long time.
Each odbedit invocation was taking about 1 second: starting odbedit is quick, stopping odbedit takes about 1 second.

It turns out each invocation of odbedit saves .ODB.SHM; the ODB was 100 Mbytes in size, the home disk is an HDD (~100-200 Mbytes/sec writing speed), so yes, about 1 second to 
stop odbedit.

The solution was to reduce the ODB size from 100 Mbytes to 10 Mbytes; odbedit now runs quickly, and the begin and end of run scripts run quickly. Problem solved.

K.O.

P.S. no, I am not the dragon experiment, no, I did not write those scripts, no, I will not rewrite them, persons who wrote them are long gone, no, the persons running 
dragon today will not be rewriting them.
  2526   09 Jun 2023   Konstantin Olchanski   Info   added IPv6 support for mserver and MIDAS RPC
as of commit 71fb5e82f3e9a1501b4dc2430f9193ee5d176c82, MIDAS RPC and the mserver 
listen for connections both on IPv4 and IPv6. mserver clients and MIDAS RPC 
clients can connect to MIDAS using both IPv4 and IPv6. In the default 
configuration ("/Expt/Security/Enable non-localhost RPC" set to "n"), IPv4 
localhost is used, as before. Support for IPv6 is a by-product of switching 
from the obsolete non-thread-safe gethostbyname() and getaddrbyname() to the modern
getaddrinfo() and getnameinfo(). This fixes bug 357, observed crash of mhttpd 
inside gethostbyname(). K.O.
  2528   12 Jun 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > correction: ODB shared memory is saved to .ODB.SHM each time a client stops, this is db_close_database().
> 
> The original design of the midas shared memory (back in the 1990's) was that the ODB shared memory file gets
> only saved into the .ODB.SHM when the *last* client exits. This ensures to keep the ODB persistent when the
> shared memory gets deleted. I vaguely remember I put something in like:
> 
> db_close_database()
> ...
>   destroy_flag = (pheader->num_clients == 0);
> 
>   if (destroy_flag)
>      ss_shm_flush(pheader->name, pdb->shm_adr, pdb->shm_size, pdb->shm_handle);

I remember the same, but I tracked it down in git to the very first commit, and there is no if() there:
odb is saved to .ODB.SHM on every client shutdown, not just the last client. I guess we both misremembered.

What's more, ss_shm_flush() is done while holding the ODB semaphore, so all other midas programs that try to access
odb at the same time (including the mserver) will stall until write() and close() return. at least we do not fsync(),
and there is no waiting until data is committed to physical media.

$ git annotate 3bb04af4d^ src/odb.c
...
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	875)  destroy_flag = (pheader->num_clients == 0);
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	876)
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	877)  /* flush shared memory to disk */
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	878)  ss_flush_shm(pheader->name, pheader, sizeof(DATABASE_HEADER)+2*pheader->data_size);
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	879)
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	880)  /* unmap shared memory, delete it if we are the last */
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	881)  ss_close_shm(pheader->name, pheader,
ef8320177	(Stefan Ritt	1998-10-08 13:46:02 +0000	882)               _database[hDB-1].shm_handle, destroy_flag);
...

K.O.
  2535   13 Jun 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > I remember the same, but I tracked it down in git to the very first commit, and there is no if() there,
> > odb is saved to .ODB.SHM on every client shutdown, not just the last client. I guess we both misremembered.

small problem. build an experiment, start taking data, observe how ODB is never saved to disk because the "last client" never stops. as a bonus, crash 
the computer, observe how all changes to ODB are now lost. if mlogger is configured to save odb.json at the end of run, and to write ODB dumps at 
the begin and end of every data file, you can recover some of the lost data.

for better effect, ODB should be dumped to disk at periodic intervals. but the current implementation writes odb to disk while holding the ODB 
semaphore, which means all ODB access stops for the duration; specifically, there will be gaps in the history because mlogger cannot read history 
data from ODB.

a better implementation could take the ODB lock, make a copy of the ODB shared memory, release the ODB lock, then complete the write to disk without holding the 
lock. protection is needed against 100 midas programs trying to do this all at the same time. computers with 0.5 GB RAM (many ARM FPGA SoCs) will be 
limited to ~100 Mbyte ODB. plus one has to deal with memory allocation failures when taking a copy of a 2GB ODB.

in theory, the mmap() shared memory (already implemented in midas) does this automatically, but we lose control
over disk writes; we see some OSes write odb to disk "too often" and at the wrong times, i.e. while we are in the middle
of creating or deleting something. the current sequence of open(), atomic write() and close() ensures .ODB.SHM always
contains a valid odb (minus loss of OS and disk caches to a crash or power loss).

K.O.
  2536   13 Jun 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> > BTW, how do I resize the ODB.

ODB cannot be resized "online". Everything has to stop, save the content to odb.json, get rid of the old .ODB.SHM, ensure the ODB shared memory is destroyed (SysV or POSIX shared memory), 
create a new ODB with the new size, load odb.json. Feel free to punch this into chatgpt > odbresize.cxx, commit, test, push.

> I remember we discussed this some time ago, and concluded that odbedit needs a resize flag.

ODB cannot be resized online. The ODB API has ODB clients holding ODB handles, which are pointers (offsets) into ODB shared memory.

> Has this even been done?
> I guess this is still not done and the issue is still open: https://bitbucket.org/tmidas/midas/issues/329/need-odbresize
> I guess if we touch this maybe the problem with the wrong size should be also fixed: https://bitbucket.org/tmidas/midas/issues/328/odbinit-s-1024mb-creates-odb-with-wrong

please contribute 14 distraction-free days to my patreon. thanks in advance!

K.O.
  2538   13 Jun 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> 
> > small problem. build an experiment, start taking data, observe how ODB is never saved to disk because the "last client" never stops. as bonus, crash 
> > the computer, observe how all changes to ODB are now lost. if mlogger is configured to save odb.json at the end of run, and to write ODB dumps at 
> > begin and end of every data file, you can recover some of the lost 
> 
> The new behavior is not much worse than before. Assume 10 programs running happily for days, computer crashes, all ODB changes lost. 
> So indeed a periodic flush without holding the lock might be best. Use a semaphore to prevent all programs flushing at the same time, or put
> the flush only in the logger after an end of run.

are you sure? when/how often does "last midas program finishes" happen? it does not happen on a system crash, not on power loss, not on "shutdown -r now" 
(I am pretty sure). In the experiments you run, how often do you shut down all programs (and check that you did not forget one somehow)?

sanity check. dragon experiment, very active, .ODB.SHM timestamp is 1 second old. not-very-active agmini, today is June 13th, timestamp of .ODB.SHM is June 
2nd. inactive TACTIC, timestamp of .ODB.SHM is May 16th.

so yes, not great, but in the new scheme, ODB.SHM timestamps would probably be from 2021 or 2020.

my vote is to undo this change, it is dangerous because it can result in odb never being saved to .ODB.SHM at all.

K.O.
  2541   15 Jun 2023   Konstantin Olchanski   Suggestion   Maximum ODB size
> 
> > are you sure? when/how often does "last midas program finishes" happen? it does not happen on a system crash, not on power loss, not on "shutdown -r now" 
> > (I am pretty sure). In the experiments you run, how often do you shut down all programs (and check that you did not forget one somehow)?
> 
> Indeed this is almost never the case, maybe once per months. On the other hand, we have a complete crash of the os maybe once a year. Most of the time the programs 
> run continuously (we do not need odbedit), so our timestamp is typically one or two days old, so not good either.
> 
> > my vote is to undo this change, it is dangerous because it causes odb to be saved to ODB.SHM never.
> 
> My vote is to flush the odb either periodically or after each run.
> 

So we are in agreement.

RFE filed:
https://bitbucket.org/tmidas/midas/issues/367/odb-should-be-saved-to-disk-periodically

Dangerous change reverted:
60e4c44ad66346b89ba057391acf7a02890049be

K.O.

bash-3.2$ git diff
diff --git a/src/odb.cxx b/src/odb.cxx
index 0d3b88c2..d104ff28 100644
--- a/src/odb.cxx
+++ b/src/odb.cxx
@@ -2199,7 +2199,14 @@ INT db_close_database(HNDLE hDB)
       destroy_flag = (pheader->num_clients == 0);
 
       /* flush shared memory to disk */
-      if (destroy_flag)
+
+      /* if we save ODB to disk only after last client finishes, we will never save ODB to disk
+         in most experiments - none of them ever completely stop MIDAS in normal operation.
+         as result, all changes to ODB contents will be lost on system crash, power loss
+         or normal reboot. see https://daq00.triumf.ca/elog-midas/Midas/2539
+         K.O. June 2023. */
+
+      if (1 || destroy_flag)
          ss_shm_flush(pheader->name, pdb->shm_adr, pdb->shm_size, pdb->shm_handle);
 
       strlcpy(xname, pheader->name, sizeof(xname));

K.O.
  2556   18 Jul 2023   Konstantin Olchanski   Forum   pull request for PostgreSQL support
> is there any news regarding this pull request ? 
> (https://bitbucket.org/tmidas/midas/pull-requests/30)

apologies for taking a very long time to review the proposed changes.

the main problem with this pull request remains: it tangles together too many changes to the code, and I cannot simply 
say "this is okey", merge and commit it.

an example of an unrelated change is the diff of mlogger.cxx, the change of function in: "db_get_value(hDB, 0, "/Logger/Multithread 
transitions" ... )". there are also unrelated changes to whitespace sprinkled around.

can you review your diffs again and try to remove as many unrelated and unnecessary changes as you can?

I could do this for you, and merge my version, but next time you merge base midas, you will have a collision.

another unrelated change of function is the introduction of something called "downsampling". what is the purpose of this? How is it 
different from requesting binned data? Is it just a kludge to reduce the data size? Before we merge it, can you post a 
description/discussion to this forum? (as a separate topic, separate from the discussion of the PostgreSQL merge).

the changes to add PostgreSQL so far look reasonable:
- CMakeLists, is always painful but if you do the same as MySQL, should be okey, we always end up rejigging this several 
times before it works everywhere.
- history.h, ok, minus changes for adding the "downsample" feature
- mlogger.cxx, changes are too tangled with the "downsample" feature, cannot review
- SetDownsample() API is defective, should have separate Get() and Set() functions (a possible split is sketched after this list)
- history_common.cxx, please do not add downsampling code to history providers that do not/will not support it.
- history_odbc.cxx, please do not change it. it does not support downsampling and never will.
- history.cxx, ditto
- mjsonrpc.cxx, the history API is changed, we must know: is new JS compatible with old mhttpd? is old JS compatible with 
new mhttpd? (mixed versions are very common in practice). if there is incompatibility, can you recode it to be 
compatible?
- history_schema.cxx: the bitbucket diff is a dog's breakfast, cannot review. I will have to check out your branch and diff 
by hand.
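
a possible shape for the split accessors, purely as a sketch (the exact signatures are mine, not from the pull request; MidasHistoryInterface is the existing base class in history.h):

// sketch only: separate setter and getter instead of a single SetDownsample()
class MidasHistoryInterface {
public:
   virtual int SetDownsample(int factor) = 0;   // 0 = downsampling disabled
   virtual int GetDownsample() const = 0;       // query the currently configured factor
   // ... the rest of the history interface ...
};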

changes to mhistory.js appear to be extensive and some explanation is needed for what is changed, what bugs/problems 
are fixed, what new features are added.

to move forward, can you generate a pull request that only adds pgsql to history_schema.cxx, history_common.cxx and 
mlogger.cxx and does not add any other functions or features and does not change any whitespace?

K.O.


> 
> If you agree to merge I can resolve conflicts that now 
> (after two months) are listed...
> 
> Regards,
> Gennaro
> 
> > 
> > Hi,
> > I have updated the PR with a new one that includes TimescaleDB support and some
> > changes to mhistory.js to support downsampling queries...
> > 
> > Cheers,
> > Gennaro
> > 
> > > > some minutes ago I published a PR for PostgreSQL support I developed
> > > > at INFN-Napoli for Darkside experiment...
> > > > 
> > > > I don't know if you receive a notification about this PR and in doubt
> > > > I wrote this message...
> > > 
> > > Hi, Gennaro, thank you for the very useful contribution. I saw the previous version 
> > > of your pull request and everything looked quite good. But that pull request was 
> > > for an older version of midas and it would not have applied cleanly to the current 
> > > version. I will take a look at your updated pull request. In theory it should only 
> > > add the Postgres class and modify a few other places in history_schema.cxx and have 
> > > no changes to anything else. (if you need those changes, it should be a separate 
> > > pull request).
> > > 
> > > Also I am curious what benefits and drawbacks of Postgres vs mysql/mariadb you have 
> > > observed for storing and using midas history data.
> > > 
> > > K.O.
  2557   18 Jul 2023   Konstantin Olchanski   Bug Report   access to filesystem through mhttpd
> (e.g. http://midas.host:8080/etc/passwd)

not again! I complained about this before, and I added a fix, but it must be broken again.

getting a copy of /etc/passwd is reasonably benign, but getting a copy of 
/home/$USER/.ssh/id_rsa, id_rsa.pub, knownhosts and authorized_keys is a disaster.

(running mhttpd behind a web proxy does not solve the problem; the number of attackers is merely 
reduced to the people who know the proxy password and to local users).
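
for the record, the usual defense is to resolve the requested path and check that it stays inside the document root before serving anything; a minimal sketch of the idea (this is not the actual mhttpd code):

#include <limits.h>
#include <stdlib.h>
#include <string.h>

/* sketch: return true only if "requested" resolves to a path inside "docroot" */
static bool path_is_inside_docroot(const char *docroot, const char *requested)
{
   char real_root[PATH_MAX], real_req[PATH_MAX];
   if (!realpath(docroot, real_root))
      return false;
   if (!realpath(requested, real_req))
      return false;                 /* nonexistent or unresolvable path: reject */
   size_t n = strlen(real_root);
   return strncmp(real_req, real_root, n) == 0 &&
          (real_req[n] == '/' || real_req[n] == '\0');
}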

K.O.
  2559   21 Jul 2023   Konstantin Olchanski   Forum   pull request for PostgreSQL support
> > is there any news regarding this pull request ? 
> > (https://bitbucket.org/tmidas/midas/pull-requests/30)
> 
> apologies for taking a very long time to review the proposed changes.
> 

I merged the PgSql bits by hand - the automatic tools make a dog's breakfast from the history_schema.cxx diffs. Ouch.

history_schema.cxx merged pretty much cleanly, but I have one question about CreateSqlColumn() with sql_strict set to "true". Can you say 
more about why this is needed? Should this also be made the default for MySQL? As best I can tell, the default values are only needed if we write 
to SQL but forget to provide values that should not be NULL? But our code never does this? Or is this for reading from SQL, where NULL values 
are replaced with the default values? I do not have time to look into this right now; I hope you can clarify it for me.

Also notice that fDownsample is set to zero and cannot be changed. I recommend we set it through the MakeMidasHistoryPgsql() factory method.

Please pull, merge, retest, update the pull request, check that there are no unrelated changes (changes in mlogger.cxx are a direct red flag!) 
and we should be able to merge the rest of your stuff pronto.

K.O.

commit e85bb6d37c85f02fc4895cae340ba71ab36de906 (HEAD -> develop, origin/develop, origin/HEAD)
Author: Konstantin Olchanski <olchansk@triumf.ca>
Date:   Fri Jul 21 09:45:08 2023 -0700

    merge PQSQL history in history_schema.cxx

commit f254ebd60a23c6ee2d4870f3b6b5e8e95a8f1f09
Author: Konstantin Olchanski <olchansk@triumf.ca>
Date:   Fri Jul 21 09:19:07 2023 -0700

    add PGSQL Makefile bits

commit aa5a35ba221c6f87ae7a811236881499e3d8dcf7
Author: Konstantin Olchanski <olchansk@triumf.ca>
Date:   Fri Jul 21 08:51:23 2023 -0700

    merge PGSQL support from https://bitbucket.org/gtortone/midas/branch/feature/timescaledb_support except for history_schema.cxx
  2567   02 Aug 2023   Konstantin Olchanski   Forum   Issues with Universe II Driver
I maintain the tsi148 and the universe-II drivers. I confirm -KO6 is my latest 
version, last updated for 32-bit Debian-11, and we still use it at TRIUMF.

It is good news that the vme_universe kernel module built, loaded and reported 
correct stuff to dmesg.

It is not clear why mvme_read_value() crashed. We need to know the values of 
vme_addr and addr; can you add printf()s for them using format "%08x" and try 
again?
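
for example, something like this inside mvme_read_value() in vmicvme.c, just before the line that crashes (the vmic_mapcheck() call and the variable types are assumed from your gdb output; the real code may differ slightly):

/* inside mvme_read_value(), drivers/vme/vmic/vmicvme.c, near line 352 */
mvme_addr_t addr = vmic_mapcheck(mvme, vme_addr);        /* as seen in the backtrace */
printf("mvme_read_value: vme_addr=0x%08x addr=0x%08x\n",
       (unsigned int) vme_addr, (unsigned int) addr);
dst = *((WORD *) addr);                                  /* line 352, where the SIGSEGV happens */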

K.O.

Caleb Marshall wrote:

> Hello,
>
> At our lab we are currently in the process of migrating more of our systems over to Midas. However, all of our working systems
> are dependent on SBCs with the Tsi-148 chips, of which we only have a handful. In order to have some backups and spares for
> testing, we have been attempting to get Midas working with some borrowed SBCs (Concurrent Technologies VX 40x/04x) with
> Universe-II chips. The SBC is running CentOS 7. I have tried to follow the instructions posted at
> https://daq00.triumf.ca/DaqWiki/index.php/VME-CPU#V7648_and_V7750_BIOS_Settings. The universe-II kernel module appears to load
> correctly, dmesg gives:
>
> [   32.384826] VME: Board is system controller
> [   32.384875] VME: Driver compiled for SMP system
> [   32.384877] VME: Installed VME Universe module version: 3.6.KO6
>
> However, running vmescan.exe fails with a segfault. Running with gdb shows:
>
> vmic_mmap: Mapped VME AM 0x0d addr 0x00000000 size 0x00ffffff at address 0x80a01000
> mvme_open:
> Bus handle              = 0x7
> DMA handle              = 0x6045d0
> DMA area size           = 1048576 bytes
> DMA    physical address = 0x7ffff7eea000
> vmic_mmap: Mapped VME AM 0x2d addr 0x00000000 size 0x0000ffff at address 0x86ff0000
>
> Program received signal SIGSEGV, Segmentation fault.
> mvme_read_value (mvme=0x604010, vme_addr=<optimized out>)
>     at /home/jam/midas/packages/midas/drivers/vme/vmic/vmicvme.c:352
> 352         dst  = *((WORD *)addr);
>
> With the pointer addr originating from a call to vmic_mapcheck within the mvme_read_value function in the vmicvme.c file.
> Help with where to go from here would be appreciated.
>
> -Caleb
  2568   02 Aug 2023   Konstantin Olchanski   Bug Report   excessive logging of http requests
> > Our default configuration of apache httpd logs every request. MIDAS custom web pages can easily make a huge number of RPC calls creating a 
> > huge log file and filling system disk to 100% capacity
> perhaps use existing logrotate, add limit on file size (size) and limit of 2 old log files (rotate).

logrotate was ineffective.

the following apache httpd config seems to disable logging of mjsonrpc requests. note that we cannot filter on the "mjsonrpc" string because 
Request_URI excludes the query string (ouch!).

#SetEnvIf Request_URI "^POST /?mjsonrpc.*" nolog 
SetEnvIf Request_Method "POST" envpost 
SetEnvIf Request_URI "^\/$" envuri 
SetEnvIfExpr "-T reqenv('envpost') && -T reqenv('envuri')" envnolog 
 
CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" env=!envnolog 

K.O.