Entry  28 Aug 2008, Konstantin Olchanski, Info, triumf/t2k midas updates 
The following changes to midas, produced from the TRIUMF T2K project, have been
committed to svn:
1) cm_shutdown() will now SIGKILL clients that cannot be stopped via normal
means. Previously cm_shutdown() would print a message to the effect of "please
kill this client yourself manually". The user action in this case (assuming they
did not issue cm_shutdown() by mistake) was to find the client pid using "ps",
"kill -KILL" it, then run "odbedit clean". cm_shutdown() now performs all of this
automatically.
2) rpc_send_event() did not correctly detect loss of connection to the remote
mserver (i.e. in case it was killed by cm_shutdown() above). Correct error
handling is now in place and the remote frontend should shut down gracefully if
the mserver connection is lost. (However, I observe that some of my remote
frontends fail to exit unless I call "exit(1);" from my frontend_exit() function;
see the sketch at the end of this list.)
3) mhttpd bug fixed: when editing odb entries, the "cancel" button did not work
correctly.
4) lazylogger "script" backup type is now fully tested and documented. Example
scripts for writing to dcache are available by request.
5) mlogger and mhttpd changes for writing history data to an sql database are
mostly completed and will be committed after some more debugging. (If you are
interested in details, please contact me directly).
6) (committed some time ago) Makefile changes for cross-compiling midas are now
in: "make linux32", "make linux64", "make crosscompile".
K.O.
Entry  29 Aug 2008, Konstantin Olchanski, Info, history_odbc: store MIDAS history in ODBC/MySQL database 
The code for storing midas history in an odbc sql database has been committed.
Changes:
include/history_odbc.h, src/history_odbc.cxx --- implementation
src/mlogger.c --- call the history_odbc functions
utils/mh2sql.cxx --- import existing midas history files (*.hst) into an odbc
sql database.

This new code is enabled by the HAVE_ODBC blocks in the Makefile. If compilation
bombs, please let me know; as a workaround, comment out all instances of
HAVE_ODBC in your Makefile.

Limitations:
- mhttpd support for reading history data from odbc sql database is missing
- many sql functions are implemented in a very minimalistic form (i.e. when
defining a history event, we blindly ask sql to create the tables, even if they
already exist - this works, but spams the midas log with sql errors).
- error handling is incomplete: after any sql error, the odbc connection is closed.
- only MySQL (and ascii output) are supported: we use mysql-specific data types
as they match midas types exactly. Code to support PgSQL is present and it used
to work, but is commented out. (At TRIUMF/T2K, we intend to use MySQL exclusively).
- ODBC ascii interface is used, instead of the potentially more efficient binary
interface.

To enable:
- create a MySQL database,
- create $HOME/.odbc.ini (see attached example)
- set ODB "/History/PerVariableHistory" to "1" - the new code is intended to be
used with per-variable history. Per-equipment (traditional) history would work,
but will result in suboptimal layout of SQL tables.
- set ODB "/Logger/ODBC_DSN" to the DSN defined in .odc.ini.
- set ODB "/Logger/ODBC_Debug" to non-zero to enable debugging output from the
new code.
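
Taken together, the relevant ODB settings would look roughly like this (a sketch;
the DSN name "t2k_history" is only an example and must match the entry in your
$HOME/.odbc.ini):

/History/PerVariableHistory    1
/Logger/ODBC_DSN               t2k_history
/Logger/ODBC_Debug             1     (set back to 0 once everything works)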

To use the "ascii output" mode:
Included is code to write "ascii" sql output into a text file, instead of using
an actual SQL database. To enable it, set "ODBC_DSN" to
"/path/to/some/text/file" and all SQL output will be written to this file. No
actual SQL database required. This mode exists mostly for debugging the SQL syntax.

Despite these limitations, the committed code is fully functional - we are presently
using it to record history data from slow controls of T2K detector tests
(voltages, currents, temperatures).

Comments and suggestions on naming and on the mapping from odb structures to SQL
tables are very much welcome.

K.O.
Entry  17 Sep 2008, Stefan Ritt, Info, New flag for auto restart 
A new ODB flag has been introduced. When the logger is configured for automatic 
stop and restart (/Logger/Auto restart = y), the restart delay was hard-wired 
to 20 sec., which might be too long or short for some experiments. Therefore a 
new parameter "/Logger/Auto restart delay" has been introduced which can be 
used to accommodate different delays. A non-zero delay is necessary for 
experiments where some lengthy activities occur during the stop of a run, like 
an analyzer writing many histograms to disk.
Entry  18 Sep 2008, Stefan Ritt, Info, Potential problems in multi-threaded slow control front-end 
We recently had some problems at our experiment which I would like to share
with the community. However, this affects only experiments which run a slow
control front-end in multi-threaded mode.

The problem is related to the fact that the midas API is not thread safe, so a
device driver or bus driver from the slow control system may not call any ODB
function. We found several drivers (mainly psi_separator.c, psi_beamline.c etc.)
which call the midas API function cm_msg() inside their read/write functions to
report errors. While this is ok for the init section (which is executed in the
main frontend thread), it is not ok for the read/write functions inside the
driver. If it is done anyhow, it can happen that the main thread locks the ODB
(via db_lock_database()) and the slow control thread interrupts that call and
locks the ODB again. In rare cases this can cause a stale lock on the ODB. This
blocks all other programs from accessing the ODB and the experiment dies loudly.
It is hard to diagnose, since error messages cannot be produced any more, and
remote programs (not affected by the lock) just show an rpc timeout.

I have now fixed all drivers in our experiment, which solved the problem for us,
but I urge other people to double check their device drivers as well.
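
To illustrate the pattern to look for, here is a sketch (with made-up names, not
code from any of the actual drivers) of a "read" function that is called from the
slow control thread:

INT my_device_get(MY_DEVICE_INFO *info, INT channel, float *pvalue)
{
   int status = my_device_read_hw(info, channel, pvalue); /* hypothetical hardware access */

   if (status != SUCCESS)
      /* BAD: cm_msg() is part of the (non thread safe) midas API and is called
         here from the slow control thread - exactly the pattern that can leave
         a stale lock on the ODB */
      cm_msg(MERROR, "my_device_get", "read error on channel %d", channel);

   return status;
}

The safe alternative is to only record the error inside the driver (for example a
flag or counter in the info structure) and let the main frontend thread report it
with cm_msg().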

In case of problems, there is a thread ID check in 
db_lock_database()/db_unlock_database() which can be activated by supplying 

-DCHECK_THREAD_ID

on the compile command line. If these functions are then called from different
threads, the program aborts with an assertion failure, which can then be
debugged.

There is also a stack history system implemented with the new functions
ss_stack_xxxx. Using this system, one can check which functions called
db_lock_database() *before* an error occurs; this is how I identified the
offending drivers. Maybe this system can also be used in other debugging
scenarios.
Entry  19 Sep 2008, Stefan Ritt, Info, Lazylogger logging changed 
I modified the logging behavior of lazylogger. Originally, it was writing
messages (run copied, removed, ...) both into midas.log and
lazy_log_update.log. Since we have many files, this kind of clutters up the
log files. I think it is a good idea to have a separate file (which I changed
now to "lazy.log" instead of "lazy_log_update.log", which I guess was a bug),
so I put the logging into the main file under a conditional compile:

#ifdef WRITE_MIDAS_LOG
   cm_msg(MINFO, "lazy_log_update", str);
#endif

so it can be turned on again by adding -DWRITE_MIDAS_LOG to the compile line. 
If other experiments have different needs, one could make the logging behavior 
controllable through the ODB. In that case, I would suggest a single parameter 
"Logging file" which can be either "midas.log" for the normal logging or 
"lazy.log" for logging into the extra file. I guess having the messages twice 
on the system is not needed by any experiment.

- Stefan
Entry  03 Oct 2008, Konstantin Olchanski, Info, Implement non-default mserver tcp port numbers. 
midas revision 4342 implements non-default tcp port numbers for the mserver.

To use, run "mserver -p 7070" and say "setenv MIDAS_SERVER_HOST
host.example.com:7070".

This is useful when multiple experiments share the same computer, but one does
not want to set up a global /etc/exptab (non-root users cannot change it) or one
does not want to run the mserver from xinetd (i.e. all experiments run different
versions of midas and cannot use the same common mserver executable).

Changed files:
src/mserver.c
src/midas.c
doxfiles/utilities.dox
doxfiles/appendixD.dox

Revision 4342.

K.O.
Entry  13 Oct 2008, Stefan Ritt, Info, mhttpd multi-experiment support removed 
Previously, one mhttpd server could serve several experiments at the same time.
However, this sometimes caused problems and was hard to maintain. Starting from
SVN revision 4348, I removed the multi-experiment support, which I believe results
in a much cleaner implementation. So if several experiments are defined on a
computer, each one needs a separate mhttpd process listening on a different
port. The experiment name can now be supplied on the command line to mhttpd
like for any other midas program. I have tested this so far at two experiments 
at PSI, but this does not cover all possibilities. What I did not try was 
experiments with web passwords and odb passwords. If there is any problem after 
upgrading to 4348, please report.
Entry  13 Oct 2008, Konstantin Olchanski, Info, MIDAS drivers for Tundra tsi148 pci-vme bridge 
The latest midas mvmestd.h driver for the Tundra tsi148 pci-vme bridge, as used
on GEFANUC VME processors, has been committed, revision 4349.

This midas driver requires the "gefvme" Linux kernel driver supplied by GEFANUC
as part of their Linux BSP. (Note that version "v7865-sdk-linux-R01.00" from
GEFANUC is mostly non-functional).

At TRIUMF we have the V7865 VME processors and use the kernel driver
v7865-sdk-linux-R01.00-KO6. This driver supports these functions:

1) memory mapped access to full VME A16 and A24 address spaces and window-mapped
access to VME A32 address space. (original gefvme driver does not do
memory-mapped access)
2) DMA directly from vme to user memory, with support for multi-segment chained
transfers (original gefvme driver lacks chained transfers)
3) DMA from user memory to vme should work but is untested
4) no support for interrupts (original gefvme driver does not support interrupts).
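
For context, this is roughly how a frontend talks to the VME bus through the
mvmestd.h interface that this driver implements (a sketch only; the A24 address
0x100000 is made up and the usual mvmestd constants are assumed):

#include "mvmestd.h"

int read_my_module(char *buffer, int n_bytes)
{
   MVME_INTERFACE *vme;

   mvme_open(&vme, 0);                        /* open the first VME interface */
   mvme_set_am(vme, MVME_AM_A24);             /* A24 address space            */
   mvme_set_dmode(vme, MVME_DMODE_D32);       /* 32-bit data width            */
   mvme_read(vme, buffer, 0x100000, n_bytes); /* block read, may use DMA      */
   mvme_close(vme);

   return n_bytes;
}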

If you are interested in using the TRIUMF driver, please contact me directly.

If you already purchased the GEFANUC BSP, I think you can use my drivers
immediately, without objection from GEFANUC.

Otherwise, I will have to do some research into the gefvme code license: since
all of the code appears to have GPL headers and identical code exists on the
internet, I expect to find that my gefvme driver can be freely distributed under
the GPL. But until then, and until it is cleared with TRIUMF management, I
cannot make my gefvme driver available for free download.

K.O.
Entry  17 Oct 2008, Konstantin Olchanski, Info, mlogger async transitions, etc 
As we were looking into problems with starting and stopping runs in one of our
daq systems, we found that mlogger does something different from mhttpd and
odbedit. Starting and stopping runs from mhttpd and odbedit works
correctly, but runs restarted by the file size limit in mlogger would often have
problems.

It turns out that mlogger calls cm_transition() with the ASYNC flag, while
mhttpd and odbedit always use SYNC.

As best I can tell, the ASYNC flag tells cm_transition() to fire off the
end-run rpc calls to all clients all at once, without waiting for reply from the
previous client before calling the next one. This effectively defeats the
transition sequence numbers - higher-numbered clients are told to end-run before
the lower-numbered clients have finished their end-run processing.
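
At the API level the difference is just the async_flag argument (a sketch, with a
zero run number and the debug flag switched off):

   char err[256];

   /* mhttpd and odbedit: wait for each client to finish its end-run processing
      before calling the next one, honouring the transition sequence numbers */
   cm_transition(TR_STOP, 0, err, sizeof(err), SYNC, FALSE);

   /* mlogger: fire off the end-run rpc calls without waiting for the previous
      client to finish */
   cm_transition(TR_STOP, 0, err, sizeof(err), ASYNC, FALSE);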

Most of the time, transition sequence numbers do not matter - all frontends can
stop at the same time, only mlogger has to be the very last, and for transitions
initiated by the mlogger itself, this sequencing is preserved.

It turns out that for our system, correct sequencing of individual frontends is
important, for example, the frontend controlling the trigger system has to stop
first. As we are using correctly adjusted transition sequence numbers, the right
sequence is always done when runs are started/stopped from mhttpd and from
odbedit, but not for runs started/stopped by the mlogger.

So by changing mlogger to always do SYNC transitions, we fixed our sequencing
problem - now runs always start and stop correctly.

But then we ran into a deadlock between the mlogger and the event builder:

1) mlogger wants to stop the run
2a) mlogger stops reading the SYSTEM buffer
2b) mlogger starts cm_transition(SYNC)
3) rpc call to trigger frontend, trigger is blocked (no new events are
generated, but existing data is still flowing through the system)
4) other frontends are stopped (data still flowing)
5) data still flowing through the system, into the event builder, into the
SYSTEM buffer
6) SYSTEM buffer becomes 100% full (mlogger is not reading it, it is busy inside
cm_transition()), event builder is waiting for free space inside bm_send_event()
7) mlogger issues end-run rpc call to event builder
8) deadlock: mlogger is waiting for a reply from the event builder, the event
builder is waiting for free space in the SYSTEM buffer (not processing rpc
calls), mlogger is supposed to empty the SYSTEM buffer, but it is waiting for an
rpc reply instead.

In our particular case, the deadlock was easy to avoid by making the SYSTEM
buffer big enough to accommodate all in-flight data, but the problem remains in
the general case. I suspect mlogger uses ASYNC transitions exactly to avoid
this type of deadlock (mlogger has used ASYNC transitions since svn revision 2,
the beginning of time).

Personally, I am not happy about the inconsistency of run sequencing between
mlogger and mhttpd/odbedit (hmm... should also check mfe.c, it also stops runs
based on event count limits, etc). I think it would be better if all programs
did exactly the same thing when starting/stopping runs. When mlogger does
something different, we get surprising, unexpected behaviour, best avoided.

One possible solution could be to add an odb variable "/logger/async
transitions", set to "false" by default - to be consistent with other programs.
Systems that benefit from the old ASYNC behaviour and do not care about exact
sequencing can set this flag to "true".

K.O.
    Reply  18 Oct 2008, Stefan Ritt, Info, mlogger async transitions, etc 
> I suspect mlogger uses ASYNC transitions exactly to avoid
> this type of deadlock (mlogger has used ASYNC transitions since svn revision 2,
> the beginning of time).

That's exactly the case. If you had asked me, I would have told you immediately,
but it is also good that you re-confirmed the deadlock behavior with the SYNC
flag. I hadn't checked this for the last ten years or so.

Making the buffers bigger is only a partial solution: if the disk gets slow for
some reason, any buffer will eventually fill up and you get the deadlock.

The only real solution is to put the logic into a separate thread. That thread
does all the RPC communication with the clients, while the main logger thread logs
data as usual in parallel. The problem is that the RPC layer is not yet completely
tested to be thread safe. I put in some mutexes, and you correctly realized that
these are system wide, but you want a local mutex just for the logger process. You
also need some basic communication between the "run stop thread" and the "logger
main thread". Maybe Pierre remembers that once there was the problem that the
logger did not know when all events had "come down the pipe" and it could close
the file. He added some delay which helped most of the time. But if we had some
communication from the "run stop thread" telling the main thread that all programs
except the logger have stopped the run, then the logger would only have to empty
the local system buffer and would know 100% that everything is done.

In the MEG experiment we have the same problem. We need a certain sequence
(basically because we have 9 front-ends and one event builder, which has to be
called after the front-ends). We quickly realized that the logger cannot stop the
run, so we wrote a little tool "RunSubmit", which is a run sequencer with a
scripting facility. You write an XML file telling RunSubmit to start 10 runs, each
with 5000 events. RunSubmit then watches the run statistics and stops the run.
Since it runs outside the logger process, there is no deadlock. Unfortunately
RunSubmit was written by one of our students and contains some MEG specific code,
otherwise it could be committed to the distribution.

So I feel that a separate thread for run stop (and maybe even start) would be a 
good thing, but I'm not sure when I will have time to address this issue.

- Stefan
Entry  18 Oct 2008, Konstantin Olchanski, Info, make linux32 & co 
The Makefile targets for crosscompiling MIDAS are now documented in the MIDAS
Doxygen documentation:

make linux32 & make clean32
make linux64 & make clean64
make crosscompile
make dox

This has to do with which flavour of MIDAS is built by default: 32-bit or 64-bit.

This is how this works now.

Default flavour is determined by ROOT. If ROOTSYS points to 32-bit ROOT, then
32-bit MIDAS is built, if 64-bit ROOT, then 64-bit MIDAS. This works well after
the ROOT team added the correct "-m32" and "-m64" flags to "root-config --cflags".

If for some reason, we also need a non-default flavour of MIDAS, for example
when the main daq computer runs 64-bit MIDAS, but one frontend has to run on a
"32-bit only" VME processor, you say "make linux32". This creates the
"linux-m32/{lib,bin}" tree that you then reference in the Makefile of your
special frontend (i.e. instead of "-L$MIDASSYS/linux/lib" say
"-L$MIDASSYS/linux-m32/lib"). "make linux64" works the same way.

These non-default flavours of MIDAS are compiled with most special features
disabled: no ROOT, no MYSQL, etc.

When building "make linux32", you may also see errors caused by missing 32-bit
libraries - many 64-bit Linux distributions do not install the full 32-bit
development environment by default - so some header files and libraries may be
reported as missing. These not-installed-by-default 32-bit packages are usually
easy to install using commands like "yum install libxxx-devel.i386".

K.O.
Entry  22 Oct 2008, Konstantin Olchanski, Info, mscb timeouts and retries 
A new set of functions was added to mscb.h to adjust mscb timeouts and retries to better match specific 
applications:

+   int EXPRT mscb_get_max_retry();
+   int EXPRT mscb_set_max_retry(int max_retry);
+   int EXPRT mscb_get_usb_timeout();
+   int EXPRT mscb_set_usb_timeout(int timeout);
+   int EXPRT mscb_get_eth_max_retry();
+   int EXPRT mscb_set_eth_max_retry(int eth_max_retry);

There are 3 settings:

1) mscb_max_retry: most (all?) mscb operations, like mscb_read(), retry failed mscb transactions up to 
10 times. The corresponding set and get functions allow tuning this retry limit.

2) mscb_usb_timeout: the driver for the USB-MSCB adapter uses a timeout of 6 seconds. 
mscb_set_usb_timeout() permits changing this value.

3) mscb_eth_max_retry: the driver for the Ethernet-MSCB adapter has to deal with UDP packet loss. If 
the adapter does not respond to a UDP command, the UDP command is sent again, with a bigger 
timeout (timeout = 100 * (retry+1), in ms), this is repeated up to 10 times. mscb_set_eth_max_retry() 
permits adjusting this number of retries.

This is how it works for the usb interface:

int mscb_read(...)
   for (retry=0; retry<mscb_max_retry; retry++)
       mscb_exch()
            musb_write(..., mscb_usb_timeout)
            musb_read(..., mscb_usb_timeout)     

This is how it works for the ethernet interface:

int mscb_read(...)
   for (retry=0; retry<mscb_max_retry; retry++)
       mscb_exch()
            for (eth_retry=0; eth_retry<mscb_eth_max_retry; eth_retry++)
                 send_udp_command()
                 wait_for_udp_response(timeout = 100 * (eth_retry+1))

This is how the new functions are intended to be used:
   ...
   int old = mscb_set_max_retry(2);
   ... do stuff ...
   mscb_set_max_retry(old); // restore default value

svn revision 4356.
K.O.
    Reply  28 Oct 2008, Stefan Ritt, Info, mscb timeouts and retries 
> A new set of functions was added to mscb.h to adjust mscb timeouts and retries to better match specific 
> applications:
> 
> +   int EXPRT mscb_get_max_retry();
> +   int EXPRT mscb_set_max_retry(int max_retry);
> +   int EXPRT mscb_get_usb_timeout();
> +   int EXPRT mscb_set_usb_timeout(int timeout);
> +   int EXPRT mscb_get_eth_max_retry();
> +   int EXPRT mscb_set_eth_max_retry(int eth_max_retry);

In the spirit of this, a variable retry scheme has been implemented in the mscbdev.c device driver. At the 
MEG experiment, we have one mscb device which is pretty slow, while the others are fast. Therefore it is 
necessary to have a per-device max retry count which can be different for different submasters. I therefore 
moved the max_eth_retry variable into the mscb_fd structure and adjusted a few functions accordingly. I 
did not bother with the other timeouts and retries, since I don't need them for the moment, but it would be 
nice if they were handled in the same way. Then I added code to mscbdev.c to read the retry variable 
from the ODB under /Equipment/<name>/Settings/Device/<Name>/Retries. The default is 10, but it can be 
changed and becomes valid after the program has been restarted.
Entry  06 Nov 2008, Konstantin Olchanski, Info, midas elog outage 
Around Wednesday noon, there was a power outage at triumf (loss of UPS power in the triumf 
computing center) and after rebooting ladd00, https/ssl access stopped working with a complaint 
about a mismatch between the server name and the ssl certificate name. This configuration used to 
work, so one of the system updates must have broken it. The problem is now fixed and access to the 
midas elog is restored.
K.O.
Entry  20 Nov 2008, Jimmy Ngai, Info, Recommended platform for running MIDAS 
Dear All,

Are there any recommended platforms for running MIDAS? Has anyone encountered 
problems when running MIDAS on Scientific Linux?

Thanks.

Jimmy
    Reply  20 Nov 2008, Stefan Ritt, Info, Recommended platform for running MIDAS 
> Dear All,
> 
> Are there any recommended platforms for running MIDAS? Has anyone encountered 
> problems when running MIDAS on Scientific Linux?
> 
> Thanks.
> 
> Jimmy

I run MIDAS on Scientific Linux 5.1 without any problems.
Entry  26 Nov 2008, Jimmy Ngai, Info, Send email alert in alarm system 
Dear All,

We have a temperature/humidity sensor in MIDAS now and will add a liquid level 
sensor to MIDAS soon. We want the operators to get alerted ASAP when the 
laboratory environment or the liquid level reaches some critical level. Can 
MIDAS send email alerts or SMS alerts to cell phones when the alarms are 
triggered? If yes, how can I configure it?

Many thanks!

Best Regards,
Jimmy
    Reply  26 Nov 2008, Stefan Ritt, Info, Send email alert in alarm system 
> We have a temperature/humidity sensor in MIDAS now and will add a liquid level 
> sensor to MIDAS soon. We want the operators to get alerted ASAP when the 
> laboratory environment or the liquid level reaches some critical level. Can 
> MIDAS send email alerts or SMS alerts to cell phones when the alarms are 
> triggered? If yes, how can I configure it?

Sure, that's possible; that's why MIDAS contains an alarm system. To use it,
define an ODB alarm on your liquid level, like

/Alarms/Alarms/Liquid Level
Active	                 y
Triggered	         0 (0x0)
Type	                 3 (0x3)
Check interval	        60 (0x3C)
Checked last	1227690148 (0x492D10A4)
Time triggered first	(empty)
Time triggered last	(empty)
Condition	        /Equipment/Environment/Variables/Input[0] < 10
Alarm Class	        Level Alarm
Alarm Message	        Liquid Level is only %s

The Condition of course might be different in your case; just select the correct 
variable from your equipment. In this case, the alarm triggers an alarm of class 
"Level Alarm". Now you define this alarm class:

/Alarms/Classes/Level Alarm
Write system message	y
Write Elog message	n
System message interval	600 (0x258)
System message last	0 (0x0)
Execute command	        /home/midas/level_alarm '%s'
Execute interval	1800 (0x708)
Execute last	        0 (0x0)
Stop run	        n
Display BGColor	        red
Display FGColor	        black

The key here is to call a script "level_alarm", which can send emails. Use 
something like:

#!/bin/csh
echo $1 | mail -s "Level Alarm" your.name@domain.edu
odbedit -c 'msg 2 level_alarm "Alarm was sent to your.name@domain.edu"'

The second command just generates a midas system message for confirmation. Most 
cell phones (depending on the provider) have an email address; if you send an 
email there, it gets translated into an SMS message.

The script file above can of course be more complicated. We use a perl script 
which parses an address list, so everyone can register by adding his/her email 
address to that list. The script also collects some other slow control variables 
(like pressure, temperature) and combines them into the SMS message.

For very sensitive systems, having an alarm via SMS is not everything, since the 
alarm system itself could be down (computer crash or whatever). In this case we 
use 'negative alarms', or whatever you might call them: the system sends an SMS 
with the current levels etc. every 30 minutes. If that SMS is missing for some 
time, it is an indication that something in the midas system is wrong and one can 
go there and investigate.
Entry  27 Nov 2008, Konstantin Olchanski, Info, lazylogger updated 
lazylogger was updated to improve handling of the list of runs still on disk
(odb /Lazy/xxx/List).

Previously, each and every run was listed in the List arrays. With modern
Terabyte-sized data disks, many many days worth of runs tend to remain on disk
and these List arrays were getting too big, inflating the size of ODB dumps
written by mlogger into the output data file and slowing down starting and
stopping of runs considerably.

Now, the runs are listed as ranges of "first run" - "last run" (see example below).

This significantly reduces the size of the "List" arrays and makes lazylogger
usable for the ALPHA experiment at CERN and for T2K/ND280 prototype DAQ at
TRIUMF (writing to Castor and Dcache respectively, using the newly added
"Script" method).

The new List format is fully compatible with the old format and you can update
and run the new lazylogger without changing anything in ODB. New runs will be
added to the List arrays in the new format and data in the old format will
eventually go away as old runs are removed from disk.

svn revision 4394.
K.O.

Example: the listing below reads like this:
range from 7100 to 7154
range from 7157 to 7161 (7155-7156 are missing)
range from 7163 to 7168 (7162 is missing)
runs 7170, 7173, 7176
range from 7179 to 7182
and so forth.

ODB /Lazy/Dcache/List
007100
[0] 7100 (0x1BBC)
[1] -7154 (0xFFFFE40E)
[2] 7157 (0x1BF5)
[3] -7161 (0xFFFFE407)
[4] 7163 (0x1BFB)
[5] -7168 (0xFFFFE400)
[6] 7170 (0x1C02)
[7] 7173 (0x1C05)
[8] 7176 (0x1C08)
[9] 7179 (0x1C0B)
[10] -7182 (0xFFFFE3F2)
[11] 7184 (0x1C10)
[12] 7188 (0x1C14)
[13] -7199 (0xFFFFE3E1)
007200
[0] 7200 (0x1C20)
[1] -7225 (0xFFFFE3C7)
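
For illustration only, here is a short sketch (not part of lazylogger) of how such
a packed list decodes - a positive entry starts a run or range, and a following
negative entry -N closes the range at run N:

#include <stdio.h>

void print_run_list(const int *list, int n)
{
   int i;
   for (i = 0; i < n; i++) {
      if (i + 1 < n && list[i + 1] < 0) {
         printf("runs %d-%d\n", list[i], -list[i + 1]);
         i++;                          /* skip the end-of-range marker */
      } else
         printf("run %d\n", list[i]);
   }
}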
    Reply  27 Nov 2008, Konstantin Olchanski, Info, Fixed mlogger crash, was Per-variable history implementation in the mlogger 
> revision 4142+4143 are minor fixes, refactoring (switch the code to use helper
> functions) and implementation of history for structured banks

The implementation of "history for structured banks" had a bug - tags inside
structured banks were counted incorrectly, leading to memory overwrites and an
mlogger crash in open_history().

This problem is now fixed (plus assert() checks were added to crash out if an
overwrite of the tags[] array is detected).

svn revision 4398.
K.O.