  3051   07 Jun 2025   Mark Grimes   Bug Report   Memory leak in mhttpd binary RPC code

Hi,

We applied an intermediate fix for this locally and it seems to have fixed our issue.  The attached plot shows the percentage memory use on our machine with 128 GB of memory, as a rough proxy for mhttpd memory use.  After applying our fix, mhttpd seems to be happy using ~7% of the memory after being up for 2.5 days.

Our fix to mjson was:

diff --git a/mjson.cxx b/mjson.cxx
index 17ee268..2443510 100644
--- a/mjson.cxx
+++ b/mjson.cxx
@@ -654,8 +654,7 @@ MJsonNode::~MJsonNode() // dtor
       delete subnodes[i];
    subnodes.clear();
 
-   if (arraybuffer_size > 0) {
-      assert(arraybuffer_ptr != NULL);
+   if (arraybuffer_ptr != NULL) {
       free(arraybuffer_ptr);
       arraybuffer_size = 0;
       arraybuffer_ptr = NULL;

We also applied the following in midas for good measure, although I don't think it contributed to the leak we were seeing:

diff --git a/src/mjsonrpc.cxx b/src/mjsonrpc.cxx
index 2201d228..38f0b99b 100644
--- a/src/mjsonrpc.cxx
+++ b/src/mjsonrpc.cxx
@@ -3454,6 +3454,7 @@ static MJsonNode* brpc(const MJsonNode* params)
    status = cm_connect_client(name.c_str(), &hconn);
 
    if (status != RPC_SUCCESS) {
+      free(buf);
       return mjsonrpc_make_result("status", MJsonNode::MakeInt(status));
    }

I hope this is useful to someone.  As previously mentioned, we make heavy use of binary RPC, so maybe other experiments don't run into the same problem.

Thanks,

Mark.

  3050   04 Jun 2025   Konstantin Olchanski   Bug Report   Memory leak in mhttpd binary RPC code
Noted. I will look at this asap. K.O.

> Hi,
> During an evening of running we noticed that memory usage of mhttpd grew to close to 100 GB.  We think we've traced this to the following issue when making RPC calls.
>
>   • The brpc method allocates memory for the response at https://bitbucket.org/tmidas/midas/src/67db8627b9ae381e5e28800dfc4c350c5bd05e3f/src/mjsonrpc.cxx#lines-3449.
>   • It then makes the call at https://bitbucket.org/tmidas/midas/src/67db8627b9ae381e5e28800dfc4c350c5bd05e3f/src/mjsonrpc.cxx#lines-3460, which may set `buf_length` to zero if the response was empty.
>   • It then uses `MJsonNode::MakeArrayBuffer` to pass ownership of the memory to an `MJsonNode`, providing `buf_length` as the size.
>   • When the `MJsonNode` is destructed at https://bitbucket.org/tmidas/mjson/src/9d01b3f72722bbf7bcec32ae218fcc0825cc9e7f/mjson.cxx#lines-657, it only calls `free` on the buffer if the size is greater than zero.
>
> Hence, mhttpd will leak at least 1024 bytes for every binary RPC call that returns an empty response.
> I tried to submit a pull request to fix this but I don't have permission to push to https://bitbucket.org/tmidas/mjson.git.  Could somebody take a look?
>
> Thanks,
>
> Mark.
  3049   04 Jun 2025   Mark Grimes   Bug Report   Memory leak in mhttpd binary RPC code
Hi,
During an evening of running we noticed that memory usage of mhttpd grew to close to 100 GB. We think we've traced this to the following issue when making RPC calls.

  • The brpc method allocates memory for the response at src/mjsonrpc.cxx#lines-3449.
  • It then makes the call at src/mjsonrpc.cxx#lines-3460, which may set `buf_length` to zero if the response was empty.
  • It then uses `MJsonNode::MakeArrayBuffer` to pass ownership of the memory to an `MJsonNode`, providing `buf_length` as the size.
  • When the `MJsonNode` is destructed at mjson.cxx#lines-657, it only calls `free` on the buffer if the size is greater than zero.

Hence, mhttpd will leak at least 1024 bytes for every binary RPC call that returns an empty response.
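
For illustration, a minimal sketch of the leak pattern (hypothetical code, assuming MJsonNode::MakeArrayBuffer takes the buffer pointer and size as described above; this is not the actual mhttpd source):

#include "mjson.h"
#include <cstdlib>

void sketch_of_leak()
{
   size_t buf_length = 1024;
   char *buf = (char *)malloc(buf_length);   // response buffer allocated up front

   // ... the RPC call runs and sets buf_length = 0 for an empty response ...
   buf_length = 0;

   // ownership of buf passes to the node, with buf_length as the size
   MJsonNode *node = MJsonNode::MakeArrayBuffer(buf, buf_length);

   // before the fix, ~MJsonNode() called free() only when the size was > 0,
   // so this delete leaked the 1024-byte allocation
   delete node;
}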
I tried to submit a pull request to fix this but I don't have permission to push to https://bitbucket.org/tmidas/mjson.git. Could somebody take a look?

Thanks,

Mark.
  3048   03 Jun 2025   Thomas Lindner   Info   MIDAS workshop (online) Sept 22-23, 2025
Dear all,

We have setup an indico page for the MIDAS workshop on Sept 22-23.  The page is here

https://indico.psi.ch/event/17580/overview

As I mentioned, we are keen to hear reports from any users or developers; we want to hear  
how MIDAS is working for you and what improvements you would like to see.  If you or your 
experiment would like to give a talk about your MIDAS experiences then please submit an 
abstract through the indico page.  

Also, feel free to register for the workshop (no fees).  Registration is not 
mandatory, but it would be useful for us to have an idea of how many people will connect.

Thanks,
Thomas


> Dear MIDAS enthusiasts,
> 
> We are planning a fifth MIDAS workshop, following on from previous successful 
> workshops in 2015, 2017, 2019 and 2023.  The goals of the workshop include:
> 
> - Getting updates from MIDAS developers on new features, bug fixes and planned 
> changes.
> - Getting reports from MIDAS users on how they are using MIDAS and what problems 
> they are facing.
> - Making plans for future MIDAS changes and improvements
> 
> We are planning to have an online workshop on Sept 22-23, 2025 (it will coincide 
> with a visit of Stefan to TRIUMF).  We are tentatively planning to have a four 
> hour session on each day, with the sessions timed for morning in Vancouver and 
> afternoon/evening in Europe.  Sorry, the sessions are likely to again not be well 
> timed for our colleagues in Asia.  
> 
> We will provide exact times and more details closer to the date.  But I hope 
> people can mark the dates in their calendars; we are keen to hear from as much of 
> the MIDAS community as possible.  
> 
> Best Regards,
> Thomas Lindner
  3047   27 May 2025   Pavel Murat   Suggestion   handling of 2+ line-long messages
Dear MIDAS experts, 

currently, the MIDAS messaging system is optimized for one-line messages, so the content 
of messages with two or more lines shows up in the message log in reverse order, 
with the first line at the bottom of the message. 

I wonder if printing the message content in reverse order, starting from the last line, 
would make sense? That wouldn't affect one-line messages, but could make longer 
messages more useful. 
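
For concreteness, a hypothetical two-line message (using the standard cm_msg() call with __FILE__/__LINE__, as used throughout MIDAS; the routine name is made up):

#include "midas.h"

void report()
{
   // with the current ordering, the second line appears above the first
   // in the message log
   cm_msg(MINFO, __FILE__, __LINE__, "report", "calibration failed:\nchannel 3 pedestal out of range");
}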

--thanks, regards, Pasha
  3046   26 May 2025   Stefan Ritt   Forum   Reading two devices in parallel
> Dear experts,
>       in the CYGNO experiment, we read out CMOS cameras for optical readout of GEM-TPCs. So far, we have only developed the readout for a single camera. In the future, we will have multiple cameras to read out (up to 18 in the next phase of the experiment), and we are investigating how to optimize the readout by exploiting parallelization.
> 
> One idea is to start parallel threads within a single equipment. Alternatively, one could associate a different equipment with each camera and run an Event Builder. Perhaps there are other solutions that did not come to mind. Which one would you regard as the most effective and elegant?
> 
> Thank you very much,
>            Francesco

In principle both will work. It's kind of a matter of taste. In the multi-threaded approach you have a single frontend to start and stop, while in the second case you have to start 18 individual frontends and make sure that they are all running. 

For the multi-threaded frontend you have to ensure proper synchronization between the threads (like a common run start/stop), and in the end you also have to do some event building, sending all 18 streams into a single buffer. As you know, multi-threaded programming can be a bit of an art, using mutexes or semaphores, but it can be more flexible than the event builder, which is a given piece of software.
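
As a rough illustration of the multi-threaded approach (plain C++, not the MIDAS frontend API; all names here are made up):

#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { int camera_id; std::vector<char> data; };

std::mutex g_mutex;            // protects g_frames
std::queue<Frame> g_frames;    // shared queue feeding the event builder

void camera_readout(int camera_id)
{
   Frame f{camera_id, {}};     // data would come from the camera hardware
   std::lock_guard<std::mutex> lock(g_mutex);
   g_frames.push(std::move(f));
}

int main()
{
   std::vector<std::thread> readers;
   for (int i = 0; i < 18; i++)            // one thread per camera
      readers.emplace_back(camera_readout, i);
   for (auto& t : readers)
      t.join();
   // ... a consumer would pop g_frames and assemble a single event here ...
}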

Best,
Stefan
  3045   26 May 2025   Francesco Renga   Forum   Reading two devices in parallel
Dear experts,
      in the CYGNO experiment, we read out CMOS cameras for optical readout of GEM-TPCs. So far, we have only developed the readout for a single camera. In the future, we will have multiple cameras to read out (up to 18 in the next phase of the experiment), and we are investigating how to optimize the readout by exploiting parallelization.

One idea is to start parallel threads within a single equipment. Alternatively, one could associate a different equipment with each camera and run an Event Builder. Perhaps there are other solutions that did not come to mind. Which one would you regard as the most effective and elegant?

Thank you very much,
           Francesco
  3044   25 May 2025   Pavel Murat   Bug Report   subdirectory ordering in ODB browser ?
Dear MIDAS experts, 

I'm running into a minor but annoying issue with the subdirectory name ordering in the ODB browser. 
I have a straw-man hash map which includes ODB subdirectories named "000", "010", ... "300", 
and I have yet to succeed in having them displayed in a "natural" order: the subdirectories with names 
starting with "0" always show up at the bottom of the list - see the attached .png file. 

Neither interactive re-ordering nor manual ordering of the items in the input .json file helps. 

I have also attached a .json file which can be loaded with odbedit to reproduce the issue. 

Although I'm using a relatively recent - ~20 days old - commit, 'db1819ac', is it possible 
that this issue has already been sorted out?

-- many thanks, regards, Pasha  
  3043   24 May 2025   Pavel Murat   Info   ROOT scripting for MIDAS seems to work pretty much out of the box
Dear All, 

I'm pretty sure many know this already; however, I found this feature by accident 
and want to share it with those who don't know about it yet - it seems very useful. 

- it looks like one can use ROOT scripting with rootcling, call from the 
  interactive ROOT prompt any function defined in midas.h, and access ODB, seemingly 
  WITHOUT DOING anything special 

- more surprisingly, that also works for odbxx, with one minor exception in handling 
  the 64-bit types - the proof is in the attachment. The script test_odbxx.C loaded 
  interactively is Stefan's

 https://bitbucket.org/tmidas/midas/src/develop/examples/odbxx/odbxx_test.cxx 

with one minor change - the line 
 
   o["Int64 Key"] = -1LL;

is replaced with

   int64_t x = -1LL;
   o["Int64 Key"] = x;

- apparently the interpreter has its limitations. 

My rootlogon.C file doesn't load any libraries; it only defines the appropriate 
include paths. So it seems that everything works pretty much out of the box. 
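
For reference, a hypothetical rootlogon.C along these lines (the paths are made-up placeholders for wherever MIDAS is installed):

{
   // no libraries loaded here, only include paths for the MIDAS headers
   gInterpreter->AddIncludePath("/home/user/packages/midas/include");
   gInterpreter->AddIncludePath("/home/user/packages/midas/mjson");
}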

One issue has surfaced, however. All of that worked despite my experiment 
having its name="test_025", while the example specifies experiment="test". 
Is it possible that only the first 4 characters are being compared? 

-- regards, Pasha
  3042   19 May 2025   Jonas A. Krieger   Suggestion   manalyzer root output file with custom filename including run number
Hi all,

Would it be possible to extend manalyzer to support custom .root file names that include the run number? 

As far as I understand, the current behavior is as follows:
The default filename is ./root_output_files/output%05d.root, which can be customized via the following two command-line arguments.

-Doutputdirectory: Specify output root file directory
-Ooutputfile.root: Specify output root file filename

If an output file name is specified with -O, -D is ignored, so the full path should be provided to -O. 

I am aiming to write files where the filename contains sufficient information to be unique (e.g., experiment, year, and run number). However, if I specify it with -O, this would require restarting manalyzer after every run; a scenario that I would like to avoid if possible.

Please find a suggestion of how manalyzer could be extended to introduce this functionality through an additional command line argument at
https://bitbucket.org/krieger_j/manalyzer/commits/24f25bc8fe3f066ac1dc576349eabf04d174deec

The above code would allow the following call syntax: './manalyzer.exe -O/data/experiment1_%06d.root --OutputNumbered'
But note that, as is, it would fail if a user specifies an incompatible format such as -Ooutput%s.root.

So a safer, but less flexible, option might be to instead have the user provide only a prefix, and then attach %05d.root in the code, as sketched below.
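
For instance, a minimal sketch of that prefix variant (a hypothetical helper, not part of manalyzer):

#include <cstdio>
#include <string>

// build "<prefix>00042.root" from a user-supplied prefix and the run number
std::string output_filename(const std::string& prefix, int run_number)
{
   char suffix[32];
   snprintf(suffix, sizeof(suffix), "%05d.root", run_number);
   return prefix + suffix;
}

// e.g. output_filename("/data/experiment1_", 42) -> "/data/experiment1_00042.root"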

Thank you for considering these suggestions!
  3041   16 May 2025   Konstantin Olchanski   Bug Report   history_schema.cxx fails to build
> > we have a CI setup which fails since 06.05.2025 to build the history_schema.cxx.
> > There was a major change in this code in the commits fe7f6a6 and 159d8d3.
> 
> Missing from this report is critical information: HAVE_PGSQL is set.
> 
> I will have to check why it is not set in my development account.
> 

The following is needed to build MySQL and PgSQL support in MIDAS;
they were missing on my development machine. MySQL support was enabled
by accident because kde-bloat packages pull in the MySQL (not the MariaDB)
client and server. Fixed now, added to standard list of Ubuntu packages:
https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu#install_missing_packages

apt -y install mariadb-client libmariadb-dev ### mysql client for MIDAS
apt -y install postgresql-common libpq-dev ### postgresql client for MIDAS

>
> I will have to check why it is not set in our bitbucket build.
> 

Added MySQL and PgSQL to bitbucket Ubuntu-24 build (sqlite was already enabled).

>
> Thank you for reporting this problem.
> 

Fix committed. Sorry about this problem.

K.O.
  3040   16 May 2025   Konstantin Olchanski   Bug Report   history_schema.cxx fails to build
> we have a CI setup which fails since 06.05.2025 to build the history_schema.cxx.
> There was a major change in this code in the commits fe7f6a6 and 159d8d3.

Missing from this report is critical information: HAVE_PGSQL is set.

I will have to check why it is not set in my development account.

I will have to check why it is not set in our bitbucket build.

Thank you for reporting this problem.

K.O.
  3039   16 May 2025   Marius Koeppel   Bug Report   history_schema.cxx fails to build
Hi all,

we have a CI setup which has been failing to build history_schema.cxx since 06.05.2025. There was a major change in this code in commits fe7f6a6 and 159d8d3.

image: rootproject/root:latest

pipelines:
  default:
    - step:
        name: 'Build and test'
        runs-on:
          - self.hosted
          - linux
        script:
          - apt-get update
          - DEBIAN_FRONTEND=noninteractive apt-get -y install python3-all python3-pip python3-pytest-dependency python3-pytest
          - DEBIAN_FRONTEND=noninteractive apt-get -y install gcc g++ cmake git python3-all libssl-dev libz-dev libcurl4-gnutls-dev sqlite3 libsqlite3-dev libboost-all-dev linux-headers-generic
          - gcc -v
          - cmake --version
          - git clone https://marius_koeppel@bitbucket.org/tmidas/midas.git
          - cd midas
          - git submodule update --init --recursive
          - mkdir build
          - cd build
          - cmake ..
          - make -j4 install


Error is:

/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:5991:10: error: ‘class HsSqlSchema’ has no member named ‘table_name’; did you mean ‘fTableName’?
 5991 |       s->table_name = xtable_name;
      |          ^~~~~~~~~~
      |          fTableName
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx: In member function ‘virtual int PgsqlHistory::read_column_names(HsSchemaVector*, const char*, const char*)’:
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:6034:14: error: ‘class HsSqlSchema’ has no member named ‘table_name’; did you mean ‘fTableName’?
 6034 |       if (s->table_name != table_name)
      |              ^~~~~~~~~~
      |              fTableName
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:6065:16: error: ‘struct HsSchemaEntry’ has no member named ‘fNumBytes’
 6065 |             se.fNumBytes = 0;
      |                ^~~~~~~~~
/opt/atlassian/pipelines/agent/build/midas/src/history_schema.cxx:6140:30: error: ‘__gnu_cxx::__alloc_traits<std::allocator<HsSchemaEntry>, HsSchemaEntry>::value_type’ {aka ‘struct HsSchemaEntry’} has no member named ‘fNumBytes’
 6140 |             s->fVariables[j].fNumBytes = tid_size;
      |                              ^~~~~~~~~
At global scope:
cc1plus: note: unrecognized command-line option ‘-Wno-vla-cxx-extension’ may have been intended to silence earlier diagnostics
make[2]: *** [CMakeFiles/objlib.dir/build.make:384: CMakeFiles/objlib.dir/src/history_schema.cxx.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:404: CMakeFiles/objlib.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
  3038   05 May 2025   Stefan Ritt   Info   db_delete_key(TRUE)
I would actually handle this the way symbolic links are handled under Linux: if you delete a symbolic link, the link gets 
deleted and NOT the file the link is pointing to.

So I conclude that the "follow links" behavior is a misconception and should be removed.

Stefan
  3037   05 May 2025   Stefan Ritt   Bug Report   abort and core dump in cm_disconnect_experiment()
I would be in favor of not curing the symptoms but fixing the cause of the problem. I guess you put the watchdog disable into mhist, right? Usually mhist is called locally, so no mserver should be 
involved. If not, I would prefer to propagate the watchdog disable to the mserver side as well, if that has not been done already. Actually, I would never disable the watchdog, but rather set it to a reasonable 
maximal value, like a few minutes or so. In that case, the client still gets removed if it crashes for some reason.
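
A minimal sketch of that suggestion (assuming the usual cm_set_watchdog_params() call from midas.h):

#include "midas.h"

void configure_watchdog()
{
   // keep the watchdog enabled, but give it a generous 5-minute timeout
   // (in milliseconds) instead of disabling it entirely
   cm_set_watchdog_params(TRUE, 5 * 60 * 1000);
}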

My five cents,
Stefan 
  3036   05 May 2025   Konstantin Olchanski   Bug Report   abort and core dump in cm_disconnect_experiment()
I noticed that some programs like mhist, if they take too long, abort and core-dump at the very end. This is because they forgot to 
set/disable the watchdog timeout, and they get removed from ODB and from the SYSMSG event buffer.

mhist is easy to fix: just add the missing call to disable the watchdog. But I also see a similar crash in the mserver, which of course requires 
the watchdog.

In either case, the crash is in cm_disconnect_experiment() where we know we are shutting down and we know there is no useful information in the 
core dump.

I think I will fix it by adding a flag to bm_close_buffer() to bypass/avoid the crash from "we are already removed from this buffer".

Stack trace from mhist:

[mhist,ERROR] [midas.cxx:5977:bm_validate_client_index,ERROR] My client index 6 in buffer 'SYSMSG' is invalid: client name '', pid 0 should be my 
pid 3113263
[mhist,ERROR] [midas.cxx:5980:bm_validate_client_index,ERROR] Maybe this client was removed by a timeout. See midas.log. Cannot continue, 
aborting...
bm_validate_client_index: My client index 6 in buffer 'SYSMSG' is invalid: client name '', pid 0 should be my pid 3113263
bm_validate_client_index: Maybe this client was removed by a timeout. See midas.log. Cannot continue, aborting...

Program received signal SIGABRT, Aborted.
Download failed: Invalid argument.  Continuing without source file ./nptl/./nptl/pthread_kill.c.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:44
warning: 44	./nptl/pthread_kill.c: No such file or directory
(gdb) bt
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=<optimized out>, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3  0x00007ffff71df27e in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4  0x00007ffff71c28ff in __GI_abort () at ./stdlib/abort.c:79
#5  0x00005555555768b4 in bm_validate_client_index_locked (pbuf_guard=...) at /home/olchansk/git/midas/src/midas.cxx:5993
#6  0x000055555557ed7a in bm_get_my_client_locked (pbuf_guard=...) at /home/olchansk/git/midas/src/midas.cxx:6000
#7  bm_close_buffer (buffer_handle=1) at /home/olchansk/git/midas/src/midas.cxx:7162
#8  0x000055555557f101 in cm_msg_close_buffer () at /home/olchansk/git/midas/src/midas.cxx:490
#9  0x000055555558506b in cm_disconnect_experiment () at /home/olchansk/git/midas/src/midas.cxx:2904
#10 0x000055555556d2ad in main (argc=<optimized out>, argv=<optimized out>) at /home/olchansk/git/midas/progs/mhist.cxx:882
(gdb) 

Stack trace from mserver:

#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=138048230684480) at ./nptl/pthread_kill.c:44
44	./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=138048230684480) at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=6, threadid=138048230684480) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=138048230684480, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3  0x00007d8ddbc4e476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4  0x00007d8ddbc347f3 in __GI_abort () at ./stdlib/abort.c:79
#5  0x000059beb439dab0 in bm_validate_client_index_locked (pbuf_guard=...) at /home/dsdaqdev/packages_common/midas/src/midas.cxx:5993
#6  0x000059beb43a859c in bm_get_my_client_locked (pbuf_guard=...) at /home/dsdaqdev/packages_common/midas/src/midas.cxx:6000
#7  bm_close_buffer (buffer_handle=<optimized out>) at /home/dsdaqdev/packages_common/midas/src/midas.cxx:7162
#8  0x000059beb43a89af in bm_close_all_buffers () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:7256
#9  bm_close_all_buffers () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:7243
#10 0x000059beb43afa20 in cm_disconnect_experiment () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:2905
#11 0x000059beb43afdd8 in rpc_check_channels () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:16317
#12 0x000059beb43b0cf5 in rpc_server_loop () at /home/dsdaqdev/packages_common/midas/src/midas.cxx:15858
#13 0x000059beb4390982 in main (argc=9, argv=0x7ffc07e5bed8) at /home/dsdaqdev/packages_common/midas/progs/mserver.cxx:387

K.O.
  3035   05 May 2025   Konstantin Olchanski   Info   db_delete_key(TRUE)
I was working on an odb corruption crash inside db_delete_key() and I noticed 
that I did not test db_delete_key() with follow_links set to TRUE. Then I noticed 
that nobody anywhere seems to use db_delete_key() with follow_links set to TRUE. 

Instead of testing it, can I just remove it?

This feature has existed since day 1 (1st commit) and it does something unexpected 
compared to the filesystem "/bin/rm": as best I can tell, it removes the link 
*and* whatever the link points to. For people familiar with "/bin/rm", this is 
somewhat unexpected, and by my thinking, if nobody ever added such a feature to 
"/bin/rm", it is probably not considered generally useful or desirable. (I would 
think it dangerous: it removes not 1 but 2 files, and the 2nd file could be in some 
other directory far away from where we are.)

By this thinking, I should remove "follow_links" (actually just make it do nothing, 
to reduce the disturbance to other source code). db_delete_key() should work 
similarly to /bin/rm, aka the unlink() syscall.
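
For illustration, the /bin/rm-like semantics would look like this from the caller's side (a hedged sketch; the ODB path is made up):

#include "midas.h"

void delete_link_only(HNDLE hDB)
{
   HNDLE hKey;
   // db_find_link() returns the handle of the link itself, not of its target
   if (db_find_link(hDB, 0, "/Alias/MyLink", &hKey) == DB_SUCCESS)
      db_delete_key(hDB, hKey, FALSE);   // follow_links = FALSE: remove only the link
}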

K.O.
  3034   05 May 2025   Konstantin Olchanski   Bug Fix   Bug fix in SQL history
A bug was introduced to the SQL history in 2022 that made renaming of variable names not work. This is now fixed.

break commit:
54bbc9ed5d65d8409e8c9fe60b024e99c9f34a85
fix commit:
159d8d3912c8c92da7d6d674321c8a26b7ba68d4

P.S.

This problem was caused by an unfortunate design of the C++ class system. If I want to add more data to an existing 
class, I write this:

class old_class {
   int i, j, k;
};

class bigger_class : public old_class {
   int additional_variable;
};

But if I have this:

struct x { int i, j; };

class y {
protected:
   std::vector<x> array_of_x;
};

and I want to add "k" to "x", C++ has no way to do this. The history code has this workaround:

class bigger_y : public y {
   std::vector<int> array_of_k;

   int foo(int n);
};

int bigger_y::foo(int n) {
   printf("%d %d %d\n", array_of_x[n].i, array_of_x[n].j, array_of_k[n]);
   return 0;
}

The problem is that it is not obvious that "array_of_x" and "array_of_k" are connected, 
and they can easily get out of sync (if elements are added or removed). This is the 
bug that happened in the history code. I now added assert(array_of_x.size()==array_of_k.size()) 
to offer at least some protection going forward.
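
(For comparison, one hypothetical alternative that avoids the parallel vectors entirely, at the cost of touching more code: wrap the element and the extra field together, so they cannot get out of sync.)

struct x_with_k {
   x base;   // the original element
   int k;    // the added field lives right next to it
};

class composed_y {
   std::vector<x_with_k> array_of_x;   // one vector, always consistent
};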

P.S. As a final solution, I think I want to completely separate the file history and SQL history code; 
they have more things different than in common.

K.O.
  3033   30 Apr 2025   Stefan Ritt   Bug Report   ODBXX : ODB links in the path names ?
Indeed this was missing from the very beginning. I added it, please report back if it's not working.

Stefan
  3032   30 Apr 2025   Pavel Murat   Info   New ODB++ API
It is a very convenient interface! Does it support ODB links in the path names? -- thanks, regards, Pasha