  802   15 Jun 2012   Konstantin Olchanski   Bug Report   bk_delete uses memcpy instead of memmove
> In midas.c, the bk_delete function removes a bank by decrementing the total
> event size and then copying the remaining banks into the location of the first
> using memcpy from string.h.

Replaced some memcpy() with memmove(), including bk_delete().
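
For the record, why memmove() matters here: bk_delete() shifts the remaining banks down over the deleted bank, so source and destination overlap, which is undefined behavior for memcpy(). A minimal sketch of the pattern (names invented for illustration):

/* Sketch: deleting one bank from a packed event buffer.
   The regions overlap, so memmove() is mandatory. */
#include <string.h>

void delete_bank(char *buf, size_t total, size_t offset, size_t size)
{
   memmove(buf + offset,           /* dst: start of the deleted bank */
           buf + offset + size,    /* src: first byte after it       */
           total - offset - size); /* everything to its right        */
}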

svn rev 5293
K.O.
  803   15 Jun 2012   Konstantin Olchanski   Bug Report   _net_send_buffer realloc
> 2) cm_disconnect_experiment() calls free(_net_send_buffer) but does not set its
> value to NULL.

Set the pointer to NULL after free() in these files (the idiom is sketched after the list):

M       odb.c
M       sequencer.cxx
M       mlogger.cxx
M       mhttpd.cxx
M       midas.c
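
The idiom, sketched with _net_send_buffer as the example:

#include <stdlib.h>

static char *_net_send_buffer = NULL;

void free_net_send_buffer()
{
   free(_net_send_buffer);    /* free(NULL) is a no-op, so no guard needed */
   _net_send_buffer = NULL;   /* later free()/realloc() calls now see a
                                 clean pointer instead of a dangling one */
}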

svn rev 5294
K.O.
  808   21 Jun 2012   Stefan Ritt   Bug Report   Cannot start/stop run through mhttpd
> I agree. Somehow mhttpd cannot run mtransition. I am not super happy with this dependence on user $PATH settings and the inability to capture error messages
> from attempts to start mtransition. I am now thinking in the direction of running the mtransition code by forking. But remember that mlogger and the event builder also
> have to use mtransition to stop runs (otherwise they can deadlock). So an mhttpd-only solution is not good enough...

The way to go is to make cm_transition() multi-threaded, with one thread for each client to be contacted. This way the transition can proceed in parallel when there are many frontend computers, for example, which will speed up 
transitions significantly. In addition, cm_transition() should execute a callback whenever a client succeeds or fails, so as to give immediate feedback to the user. I am thinking of something like implementing WebSockets in mhttpd for that (http://en.wikipedia.org/wiki/WebSocket).
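
Roughly like this (a sketch of the idea with plain POSIX threads; tr_client_t, contact_client() and tr_callback() are invented names, not the MIDAS API):

#include <pthread.h>
#include <stdio.h>

typedef struct { const char *name; int status; } tr_client_t;

/* stand-in for the per-client transition RPC */
static int contact_client(tr_client_t *cl) { printf("contacting %s\n", cl->name); return 1; }

/* callback giving immediate feedback as each client finishes */
static void tr_callback(tr_client_t *cl) { printf("%s done, status %d\n", cl->name, cl->status); }

static void *transition_thread(void *arg)
{
   tr_client_t *cl = (tr_client_t *) arg;
   cl->status = contact_client(cl);   /* runs in parallel with the other clients */
   tr_callback(cl);
   return NULL;
}

int main()
{
   tr_client_t cl[2] = { {"frontend01", 0}, {"mlogger", 0} };
   pthread_t t[2];
   for (int i = 0; i < 2; i++)
      pthread_create(&t[i], NULL, transition_thread, &cl[i]);
   for (int i = 0; i < 2; i++)
      pthread_join(t[i], NULL);
   return 0;
}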

I have had this in mind for many years, but have not had time to implement it yet. Maybe on my next visit to TRIUMF?

Stefan
  819   04 Jul 2012   Konstantin Olchanski   Bug Report   Crash after recursive use of rpc_execute()
I am looking at a MIDAS kaboom when running out of space on the data disk - everything was freezing 
up, even the VME frontend crashed sometimes.

The freeze was traced to ROOT use in mlogger - it turns out that ROOT installs its own handlers for many signals, 
including SIGSEGV - but instead of crashing the program as God intended, the ROOT SEGV handler just hangs, 
and the rest of MIDAS hangs with it. One solution is to always build mlogger without ROOT support - 
does anybody still use this feature? Another is to somehow reset the signal handlers back to their defaults.

With the freeze fixed, I now see a crash (seg fault) inside mlogger, in the newly introduced memmove() call 
inside the MIDAS RPC code in rpc_execute(). memmove() replaced memcpy() in the same place, and I am 
surprised we did not see this crash with memcpy().

The crash is caused by crazy arguments passed to memmove() - it looks like corrupted RPC argument data.

Then I realized that I am seeing a recursive call to rpc_execute(): rpc_execute() calls tr_stop(), which calls cm_yield(), 
which calls ss_suspend(), which calls rpc_execute() again. The second rpc_execute() completes successfully but leaves 
corrupted data behind for the original rpc_execute(), which happily crashes. At the moment of the crash, the recursive 
call to rpc_execute() is no longer visible on the stack.

Note that rpc_execute() cannot be called recursively - it is not re-entrant, as it uses a global buffer for RPC 
argument processing (the global tls_buffer structure).
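
The hazard in miniature (a self-contained sketch, not the real code):

#include <string.h>
#include <stdio.h>

static char tls_buffer[1024];   /* stand-in for the shared RPC argument buffer */
static int depth = 0;

static void rpc_execute_sketch(const char *request)
{
   strcpy(tls_buffer, request);      /* this call's arguments...           */
   if (depth++ == 0)
      rpc_execute_sketch("inner");   /* ...clobbered by the recursive call */
   printf("executing '%s'\n", tls_buffer);  /* outer call now sees "inner" */
}

int main() { rpc_execute_sketch("outer"); return 0; }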

Here is the mlogger stack trace:

#0  0x00000032a8032885 in raise () from /lib64/libc.so.6
#1  0x00000032a8034065 in abort () from /lib64/libc.so.6
#2  0x00000032a802b9fe in __assert_fail_base () from /lib64/libc.so.6
#3  0x00000032a802bac0 in __assert_fail () from /lib64/libc.so.6
#4  0x000000000041d3e6 in rpc_execute (sock=14, buffer=0x7ffff73fc010 "\340.", convert_flags=0) at src/midas.c:11478
#5  0x0000000000429e41 in rpc_server_receive (idx=1, sock=<value optimized out>, check=<value optimized out>) at src/midas.c:12955
#6  0x0000000000433fcd in ss_suspend (millisec=0, msg=0) at src/system.c:3927
#7  0x0000000000429b12 in cm_yield (millisec=100) at src/midas.c:4268
#8  0x00000000004137c0 in close_channels (run_number=118, p_tape_flag=0x7fffffffcd34) at src/mlogger.cxx:3705
#9  0x000000000041390e in tr_stop (run_number=118, error=<value optimized out>) at src/mlogger.cxx:4148
#10 0x000000000041cd42 in rpc_execute (sock=12, buffer=0x7ffff73fc010 "\340.", convert_flags=0) at src/midas.c:11626
#11 0x0000000000429e41 in rpc_server_receive (idx=0, sock=<value optimized out>, check=<value optimized out>) at src/midas.c:12955
#12 0x0000000000433fcd in ss_suspend (millisec=0, msg=0) at src/system.c:3927
#13 0x0000000000429b12 in cm_yield (millisec=1000) at src/midas.c:4268
#14 0x0000000000416c50 in main (argc=<value optimized out>, argv=<value optimized out>) at src/mlogger.cxx:4431


K.O.
  820   04 Jul 2012   Konstantin Olchanski   Bug Report   Crash after recursive use of rpc_execute()
>  ... I see a recursive call to rpc_execute(): rpc_execute() calls tr_stop() calls cm_yield() calls 
> ss_suspend() calls rpc_execute()
> ... rpc_execute() cannot be called recursively - it is not re-entrant as it uses a global buffer

It turns out that rpc_server_receive() also needs protection against recursive calls - it, too, uses
a global buffer to receive network data.

My solution is to protect rpc_server_receive() against recursive calls by detecting recursion and returning SS_SUCCESS (to ss_suspend()).

I was worried that this would cause a tight loop inside ss_suspend(), but in practice it looks like ss_suspend() only tries
to call us about once per second. I am happy with this solution. Here is the diff:


@@ -12813,7 +12815,7 @@
 
 
 /********************************************************************/
-INT rpc_server_receive(INT idx, int sock, BOOL check)
+INT rpc_server_receive1(INT idx, int sock, BOOL check)
 /********************************************************************\
 
   Routine: rpc_server_receive
@@ -13047,7 +13049,28 @@
    return status;
 }
 
+/********************************************************************/
+INT rpc_server_receive(INT idx, int sock, BOOL check)
+{
+  static int level = 0;
+  int status;
 
+  // Provide protection against recursive calls to rpc_server_receive() and rpc_execute()
+  // via rpc_execute() calls tr_stop() calls cm_yield() calls ss_suspend() calls rpc_execute()
+
+  if (level != 0) {
+    //printf("*** enter rpc_server_receive level %d, idx %d sock %d %d -- protection against recursive use!\n", level, idx, sock, check);
+    return SS_SUCCESS;
+  }
+
+  level++;
+  //printf(">>> enter rpc_server_receive level %d, idx %d sock %d %d\n", level, idx, sock, check);
+  status = rpc_server_receive1(idx, sock, check);
+  //printf("<<< exit rpc_server_receive level %d, idx %d sock %d %d, status %d\n", level, idx, sock, check, status);
+  level--;
+  return status;
+}
+
 /********************************************************************/
 INT rpc_server_shutdown(void)
 /********************************************************************\


ladd02:trinat~/packages/midas>svn info src/midas.c
Path: src/midas.c
Name: midas.c
URL: svn+ssh://svn@savannah.psi.ch/repos/meg/midas/trunk/src/midas.c
Repository Root: svn+ssh://svn@savannah.psi.ch/repos/meg/midas
Repository UUID: 050218f5-8902-0410-8d0e-8a15d521e4f2
Revision: 5297
Node Kind: file
Schedule: normal
Last Changed Author: olchanski
Last Changed Rev: 5294
Last Changed Date: 2012-06-15 10:45:35 -0700 (Fri, 15 Jun 2012)
Text Last Updated: 2012-06-29 17:05:14 -0700 (Fri, 29 Jun 2012)
Checksum: 8d7907bd60723e401a3fceba7cd2ba29

K.O.
  821   13 Jul 2012   Stefan Ritt   Bug Report   Crash after recursive use of rpc_execute()
> Then I realized that I see a recursive call to rpc_execute(): rpc_execute() calls tr_stop() calls cm_yield() calls 
> ss_suspend() calls rpc_execute(). The second rpc_execute successfully completes, but leave corrupted 
> data for the original rpc_execute(), which happily crashes. At the moment of the crash, recursive call to 
> rpc_execute() is no longer visible.

This is really strange. I did not protect rpc_execute() against recursive calls since this should not happen. rpc_server_receive() is linked to rpc_call() on the client side, so there cannot be 
several simultaneous rpc_call() invocations, since that is where I do the recursion checking (and multi-thread checking) via a mutex. See line 10142 in midas.c. So there CANNOT be recursive calls to rpc_execute(), because 
there cannot be recursive calls to rpc_server_receive(). But apparently there are, according to your stack trace.

So even if your patch works fine, I would like to know where the recursive calls to rpc_server_receive() come from. Since we have one subprocess of mserver for each client, there should only 
be one client connected to each mserver process, and the client is protected via the mutex in rpc_call(). Can you please debug this? I would like to understand what is going on there. Maybe 
there is a deeper underlying problem, which we had better solve now, otherwise it might fall back on us in the future.

For debugging, you have to see what commands rpc_call() sends and what rpc_server_receive() gets, maybe by writing them into a common file together with a time stamp.

SR
  827   16 Aug 2012   Cheng-Ju Lin   Bug Report   launching roody kills the analyzer
Hi All,

I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
All the packages compiled OK. The example code in $MIDASSYS/examples/experiment also runs OK 
provided that I don't launch roody. If I try to launch roody, then it immediately crashes the analyzer with 
the following trace:

#6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154
#7 0x0000003219a1e13a in TThread::Function(void*) () from /usr/lib64/root/libThread.so.5.28
#8 0x0000003dd1207851 in start_thread () from /lib64/libpthread.so.0
#9 0x0000003dd0ee76dd in clone () from /lib64/libc.so.6

The line src/mana.c:5154 points to the following:

TObject *obj;
if (strncmp(request + 10, "Any", 3) == 0)
   obj = folder->FindObjectAny(request + 14);
else
   obj = folder->FindObject(request + 11);    // LINE 5154


Any suggestions on what may be going on here?  Thanks.


Cheng-Ju
  829   17 Aug 2012   Konstantin Olchanski   Bug Report   launching roody kills the analyzer
> I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
>
> #6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154

You are connecting to mana, the old midas analyzer. The code for connecting to it is still present in roody,
but I cannot support the matching server code in mana.c - it is 2 revolutions behind the current state of
the ROOT object server (look in ROOTANA - the NetDirectory stuff and the latest is the XmlServer stuff).

I can offer 2 solutions - switch from mana.c to a ROOTANA based analyzer or graft the XmlServer code
into your analyzer (it is very simple - you need to create an XmlServer object and tell it which ROOT
containers you want to make visible to ROODY).
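
Something along these lines - note that this is from memory, so the method names Start() and Export() are assumptions; check XmlServer.h in the ROOTANA package for the actual API:

// Sketch only: consult XmlServer.h in ROOTANA for the real interface.
#include "XmlServer.h"     // from the ROOTANA package
#include "TDirectory.h"

void setup_xml_server()
{
   XmlServer *srv = new XmlServer();
   srv->Start(9091);                   // port that ROODY will connect to
   srv->Export(gDirectory, "histos");  // expose this ROOT container to ROODY
}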

I guess you can also debug the old midas server code inside mana.c...

K.O.
  830   17 Aug 2012   Cheng-Ju Lin   Bug Report   launching roody kills the analyzer
Hi Konstantin,

Many thanks for your feedback. I was able to keep the analyzer from exiting when launching roody by making some changes in the roody code. 
This at least allows me to keep moving forward. I will look into your suggestion of converting to a ROOTANA-based analyzer as well.

Regards,

Cheng-Ju


> > I've installed midas (Rev:5294) on SLC6.3 (64bit), along with recent trunk versions of rootana and roody. 
> >
> > #6 root_server_thread (arg=0x7f54fc001150) at src/mana.c:5154
> 
> You are connecting to mana, the old midas analyzer. The code for connecting to it is still present in roody,
> but I cannot support the matching server code in mana.c - it is 2 revolutions behind the current state of
> the ROOT object server (look in ROOTANA - the NetDirectory stuff and the latest is the XmlServer stuff).
> 
> I can offer 2 solutions - switch from mana.c to a ROOTANA based analyzer or graft the XmlServer code
> into your analyzer (it is very simple - you need to create an XmlServer object and tell it which ROOT
> containers you want to make visible to ROODY).
> 
> I guess you can also debug the old midas server code inside mana.c...
> 
> K.O.
  834   06 Sep 2012   shaun   Bug Report   "cannot find recent history file"
Hi, when attempting to access a history window the following message is repeated
over and over in the MIDAS message log:

Thu Sep 6 11:37:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:38:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:38:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:39:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file
Thu Sep 6 11:39:16 2012 [mhttpd,ERROR] [history.c:886:hs_count_events,ERROR]
cannot find recent history file

It appears to be related to attempting to display a history graph that includes
some time periods that have no recorded history data. When I zoom in so that the
whole graph has data the error message goes away.

The graph displays fine either way, so this error message seems useless. Is
there a way to suppress it?

Thanks
Shaun
  837   26 Sep 2012   Konstantin Olchanski   Bug Report   launching roody kills the analyzer
> > 
> > I guess you can also debug the old midas server code inside mana.c...
> > 

I ended up doing this. (After receiving some discussion by email).

I remembered that this is an old problem with the old midasServer network
protocol in mana.c - if mana.c is compiled 32-bit, it sends 32-bit pointers; if compiled 64-bit,
it sends 64-bit pointers. On the receiving end (in roody), the ROOT TMessage object does not
provide any easy way to tell the two apart (i.e. the object length is reported as 12 or 16 bytes for the two cases).

To make things more interesting, the midasServer code in ROOTANA always sends 32-bit "pointers",
(which are not pointers but 32-bit integer cookies).

I use the ROOTANA midasServer to test ROODY (I have no working mana.c analyzers available),
and ROODY expects to receive 32-bit "pointers", so the two are consistent.

But if I compile my midasServer to send/receive 64-bit "pointers" (cookies), I reproduce this crash. What I can reproduce I can "fix".

If I change the code in ROODY to receive and return 64-bit "pointers" (cookies), both the 32-bit and 64-bit midasServer seem to work okay.
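
For reference, the trick behind the "cookies": instead of shipping a raw pointer (whose size depends on how the server was compiled), the server sends a fixed-width integer token and keeps the pointer lookup on its own side. A sketch (names invented):

// Pointer-cookie sketch: the wire always carries a 32-bit token,
// whether the server is a 32-bit or a 64-bit build.
#include <stdint.h>
#include <cstddef>
#include <map>

static std::map<uint32_t, void*> gCookieTable;
static uint32_t gNextCookie = 1;

uint32_t make_cookie(void *obj)      // register an object, send the cookie
{
   uint32_t c = gNextCookie++;
   gCookieTable[c] = obj;
   return c;
}

void *lookup_cookie(uint32_t c)      // resolve a cookie coming back from ROODY
{
   std::map<uint32_t, void*>::iterator it = gCookieTable.find(c);
   return it == gCookieTable.end() ? NULL : it->second;
}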

This is committed as roody svn rev 248. (https://ladd00.triumf.ca/svn/roody/trunk)

It is the same fix as suggested by Cheng-Ju Stephen Lin [cjslin@lbl.gov].

I hope this helps (or breaks the ROODY midasServer connection for everybody. I hope not).

K.O.
  841   12 Dec 2012   Shaun Mead   Bug Report   ss_thread_kill() kills entire program
Hi, I'm having some trouble getting ss_thread_kill() to work properly. It seems
to kill the entire program instead of just the thread. Here is a test program to show the error:

_________________________________
#include <stdio.h>
#include <stdlib.h>
#include "midas.h"
#include "msystem.h"

INT f(void *param)
{
  for (int x = 0; x < 100; x++)
    sleep(1);
  return 0;
}

int main()
{
  printf("creating thread\n");
  midas_thread_t thr = ss_thread_create(f, NULL);
  sleep(2);
  printf("killing thread\n");
  ss_thread_kill(thr);
  printf("success\n");
  return 0;
}
_________________________________

Makefile:
_________________________________
FLAGS=-g -Wall -DLINUX -DOS_LINUX -I/home/deap/packages/midas/include 
LIBS=-L/home/deap/packages/midas/linux-m64/lib -lmidas -lpthread -lrt -lutil

main.exe: main.cpp 
	g++ $(FLAGS) -o $@ $^ $(LIBS)

_________________________________

Output when run:

_________________________________

[deap@deap04 multithread]$ ./main.exe 
creating thread
killing thread
Killed
[deap@deap04 multithread]$ 
_________________________________

The last "Killed" indicated the whole program got killed, when it should 
actually just kill the thread and then 
print "success".

I noticed the function in system.c uses pthread_kill(). Some google searches
suggest that it may be better to use pthread_cancel() (e.g.
http://stackoverflow.com/questions/3438536/when-to-use-pthread-cancel-and-not-pthread-kill).
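
For comparison, here is the same test written with plain pthreads and pthread_cancel() instead of the ss_thread wrappers (sleep() is a cancellation point, so the thread dies promptly):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *f(void *param)
{
   for (int x = 0; x < 100; x++)
      sleep(1);
   return NULL;
}

int main()
{
   pthread_t thr;
   printf("creating thread\n");
   pthread_create(&thr, NULL, f, NULL);
   sleep(2);
   printf("cancelling thread\n");
   pthread_cancel(thr);       /* kills only the thread, not the process */
   pthread_join(thr, NULL);   /* reap it; its return value is PTHREAD_CANCELED */
   printf("success\n");
   return 0;
}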


Shaun
  842   13 Dec 2012   Stefan Ritt   Bug Report   ss_thread_kill() kills entire program
The Linux thread functionality was introduced by Konstantin, so he might have a better idea about that.

What I usually do is a graceful thread shutdown just via a flag, like this:

#include <stdio.h>
#include <unistd.h>
#include "midas.h"
#include "msystem.h"

volatile int stop_thread = 0;   /* volatile: written by main, read by the thread */

INT f(void *param)
{
   for (int x = 0; x < 100; x++) {
      sleep(1);
      if (stop_thread) {
         // clean up things here...
         return 0;
      }
   }
   return 0;
}

int main()
{
   printf("creating thread\n");
   midas_thread_t thr = ss_thread_create(f, NULL);
   sleep(2);
   printf("killing thread\n");
   stop_thread = 1;    /* ask the thread to shut itself down */
   sleep(2);
   printf("success\n");
   return 0;
}


This way I have a chance to clean up things in the thread, which otherwise I would not be able to do.
  843   13 Dec 2012   Konstantin Olchanski   Bug Report   ss_thread_kill() kills entire program
> Hi, I'm having some trouble getting ss_thread_kill() to work properly. It seems 
> to kill the entire program instead of just the thread.

You cannot kill a thread. It's not a well defined operation. Most OSes do have the 
technical possibility to kill threads, but if you use them, you will not like the 
results. For a taste of small trouble, if a thread is holding a lock and you kill 
it, whose job is it to release the lock?

The best you can do is to ask the thread to gracefully shutdown itself. (I.e. by 
using global variable flags).

P.S. I did not implement the ss_thread stuff, I do not know what ss_thread_kill() 
does, but I recommend that you do not use it.

P.P.S. Programming using threads is complicated, I recommend that you read at least 
some literature on the topic before using threads. At the least you must understand 
the common pitfalls and mistakes. At the least, you must know about deadlocks, 
livelocks, race conditions and semaphore priority inversions.

K.O.
  844   13 Dec 2012   Shaun Mead   Bug Report   ss_thread_kill() kills entire program
> > Hi, I'm having some trouble getting ss_thread_kill() to work properly. It seems 
> > to kill the entire program instead of just the thread.
> 
> You cannot kill a thread. It's not a well defined operation. Most OSes do have the 
> technical possibility to kill threads, but if you use them, you will not like the 
> results. For a taste of small trouble, if a thread is holding a lock and you kill 
> it, whose job is it to release the lock?
> 
> The best you can do is to ask the thread to gracefully shutdown itself. (I.e. by 
> using global variable flags).
> 
> P.S. I did not implement the ss_thread stuff, I do not know what ss_thread_kill() 
> does, but I recommend that you do not use it.
> 
> P.P.S. Programming using threads is complicated, I recommend that you read at least 
> some literature on the topic before using threads. At the least you must understand 
> the common pitfalls and mistakes. At the least, you must know about deadlocks, 
> livelocks, race conditions and semaphore priority inversions.
> 
> K.O.

Yes, but unfortunately what I was attempting to do was use a library function that I
can't alter. It sometimes gets stuck and I wanted a way to kill it. Anyway, I ended up
not doing this at all in C++; I was able to do what I needed in Python.

Shaun
  845   14 Dec 2012   Robert Casperson   Bug Report   MIDAS does not function correctly on F17
When building MIDAS on Fedora 17 64-bit, the default zlib 1.2.5 shared library
is linked to.  When recording data, the "/Logger/Channels/*/Statistics/Bytes
written" value does not get set correctly beyond the first few seconds of the
run.  Occasionally, it appears to not get set at all, and mlogger aborts the run.

Installing zlib 1.2.3 in static form to /usr/local/lib (the default location),
and changing the NEED_ZLIB section of the MIDAS Makefile to the following seems
to function as a workaround:

ifdef NEED_ZLIB
CFLAGS   += -DHAVE_ZLIB
LIBS     += /usr/local/lib/libz.a
endif

Several Fedora 17 libraries expect zlib 1.2.5 specifically, so it seems safest
to not replace the default zlib shared library.

Some extra details are that the VME CPU is an XVB602, and the most recent GE-IP
drivers are being used for VME communication.  Fedora 17 was chosen to avoid a
bug with the VGA output in Fedora 13-16.
  850   20 Dec 2012   Stefan Ritt   Bug Report   MIDAS does not function correctly on F17
It is not so easy to get out of zlib how many bytes have actually been written. I used an undocumented function, 
which breaks down on 64-bit systems.

I have now rewritten the code in mlogger.cxx to use lseek() to "measure" the actual size of the output file and set the values 
correctly. I tried it on a few systems but am not 100% sure it works everywhere. Can you please double check?
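
The idea, roughly (a sketch assuming the channel writes through a plain file descriptor, which is not necessarily exactly how mlogger.cxx does it):

#include <unistd.h>
#include <sys/types.h>

/* ask the OS for the true file size instead of trusting zlib's counters;
   with _FILE_OFFSET_BITS=64 this is safe on large files, too */
double measure_bytes_written(int fd)
{
   off_t pos = lseek(fd, 0, SEEK_CUR);   /* current offset == bytes written */
   return pos < 0 ? 0 : (double) pos;
}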

The fix is in SVN revision 5347.

/Stefan
  852   09 Jan 2013   wenliang li   Bug Report   Outputting ADC and TDC data into ROOT tree with the MIDAS SVN Revision: 5347
Dear Midas Experts

I am Wenliang Li, a graduate student from the University of Regina. Our group has
encountered some difficulty outputting ADC and TDC data into a ROOT tree with
MIDAS SVN Revision: 5347.

Our Linux Distribution: Scientific Linux release 6.0 (Carbon)
ROOT Version:           ROOT 5.28
gcc version:            g++ (GCC) 4.4.4 20100726 (Red Hat 4.4.4-13)
kernel version:         2.6.32-279.19.1.el6.i686


I am using the given example $MIDASSYS/examples/experiment to generate some
data, and the issue is that the analyzer refuses to turn on the ADC0 and TDC0
bank switches.

If the ADC and TDC banks are switched off, the analyzer will successfully output
the histograms but not the ROOT tree, and the Trigger and Scaler ROOT trees are
completely empty.

With the same example experiment: $MIDASSYS/examples/experiment, this issue does
not occur on MIDAS SVN Revision: 4309.


The following error messages appear in the analyzer window if the ADC and TDC
bank switches are set to 1:

*************************
Connect to experiment ...OK
Root server listening on port 9090...
Loading previous online histos from /home/billlee/experiment/test_exp/last.root
Running analyzer online. Stop with "!"
Error in <TTree::Branch>: The pointer specified for ADC0 is not of a class known
to ROOT and (null) is not a known class
ROOT TTree rebooked
Error in <TTree::Branch>: The pointer specified for ADC0 is not of a class known
to ROOT and (null) is not a known class
Error in <TTree::Branch>: The pointer specified for TDC0 is not of a class known
to ROOT and (null) is not a known class
ROOT TTree rebooked
***********************
***************************



If I analyze the data with the TDC and ADC bank switches set to 1:
$ analyzer -i runXXXXX.mid -o runXXXXX.root

I get the following error messages:


************************************************************************
************************************************************************


Root server listening on port 9090...
Running analyzer offline. Stop with "!"
Error in <TTree::Branch>: The pointer specified for ADC0 is not of a class known
to ROOT and (null) is not a known class
Error in <TTree::Branch>: The pointer specified for TDC0 is not of a class known
to ROOT and (null) is not a known class
Set run number 1 in ODB
Load ODB from run 1...OK

 *** Break *** segmentation violation



===========================================================
There was a crash.
This is the entire stack trace of all threads:
===========================================================

Thread 2 (Thread 0x7f46c6853700 (LWP 10808)):
#0  0x0000003b63a0e84d in accept () from /lib64/libpthread.so.0
#1  0x0000003b64e370f4 in TUnixSystem::AcceptConnection(int) () from /usr/lib64/root/libCore.so.5.28
#2  0x0000003b6647849c in TServerSocket::Accept(unsigned char) () from /usr/lib64/root/libNet.so.5.28
#3  0x000000000040c50e in root_socket_server (arg=<value optimized out>) at src/mana.c:5275
#4  0x00007f46c8dc513a in TThread::Function(void*) () from /usr/lib64/root/libThread.so.5.28
#5  0x0000003b63a07851 in start_thread () from /lib64/libpthread.so.0
#6  0x0000003b62ee811d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f46c8b94720 (LWP 10800)):
#0  0x0000003b62eabfdd in waitpid () from /lib64/libc.so.6
#1  0x0000003b62e3e899 in do_system () from /lib64/libc.so.6
#2  0x0000003b62e3ebd0 in system () from /lib64/libc.so.6
#3  0x0000003b64e3da31 in TUnixSystem::StackTrace() () from /usr/lib64/root/libCore.so.5.28
#4  0x0000003b64e3d3f3 in TUnixSystem::DispatchSignals(ESignals) () from /usr/lib64/root/libCore.so.5.28
#5  <signal handler called>
#6  0x000000000041245f in TIter (file=<value optimized out>, pevent=0x7f46c5281010, par=0x665180) at /usr/include/root/TCollection.h:148
#7  write_event_ttree (file=<value optimized out>, pevent=0x7f46c5281010, par=0x665180) at src/mana.c:2872
#8  0x0000000000412a4c in process_event (par=0x665180, pevent=0x7f46c5281010) at src/mana.c:3195
#9  0x0000000000412e42 in analyze_run (run_number=1, input_file_name=0x7fff4d738340 "run00001.mid", output_file_name=<value optimized out>) at src/mana.c:4178
#10 0x0000000000413372 in loop_runs_offline () at src/mana.c:4366
#11 0x0000000000413ba5 in main (argc=<value optimized out>, argv=<value optimized out>) at src/mana.c:5579
===========================================================


The lines below might hint at the cause of the crash.
If they do not help you then please submit a bug report at
http://root.cern.ch/bugs. Please post the ENTIRE stack trace
from above as an attachment in addition to anything else
that might help us fixing this issue.
===========================================================
#6  0x000000000041245f in TIter (file=<value optimized out>, pevent=0x7f46c5281010, par=0x665180) at /usr/include/root/TCollection.h:148
#7  write_event_ttree (file=<value optimized out>, pevent=0x7f46c5281010, par=0x665180) at src/mana.c:2872
#8  0x0000000000412a4c in process_event (par=0x665180, pevent=0x7f46c5281010) at src/mana.c:3195
#9  0x0000000000412e42 in analyze_run (run_number=1, input_file_name=0x7fff4d738340 "run00001.mid", output_file_name=<value optimized out>) at src/mana.c:4178
#10 0x0000000000413372 in loop_runs_offline () at src/mana.c:4366
#11 0x0000000000413ba5 in main (argc=<value optimized out>, argv=<value optimized out>) at src/mana.c:5579
===========================================================


[midas.c:1973:,ERROR] cm_disconnect_experiment not called at end of program

**********************************************************************************************
**********************************************************************************************







I wonder if there were any API or syntax changes between MIDAS revisions 4309 and
5347, and whether there is a simple working example that can output a ROOT tree
with the newest version of MIDAS.
 
In the end, I would like to thank TRIUMF and PSI for their continuous effort in
developing MIDAS; it is a pleasure to work with.

Many thanks
Bill 
  853   09 Jan 2013   Stefan Ritt   Bug Report   Outputting ADC and TDC data into ROOT tree with the MIDAS SVN Revision: 5347
Dear Bill,

the Midas analyzer "mana.c" is currently not maintained. At PSI we use the ROME framework (which might be too complicated for a 
small experiment) and at TRIUMF the ROOTANA framework is used:

http://ladd00.triumf.ca/~olchansk/rootana/

You might be better off switching to that one.

Best regards,
Stefan
  895   26 Jul 2013   Konstantin Olchanski   Bug Report   abort on buffer overflow in odb.c::merge_records()
The odb.c function merge_records() has a fixed-size array of 10000 bytes to handle the data, and it 
aborts with an assert() if passed data bigger than that. It is called from db_create_record(), which 
already allocates a data buffer of the correct size for its operations.
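
The obvious fix pattern, sketched (record_size stands for whatever size db_create_record() has already computed):

#include <stdlib.h>

static int merge_records_sketch(size_t record_size)
{
   char *data = (char *) malloc(record_size);  /* sized to fit, no fixed limit */
   if (data == NULL)
      return 0;                                /* DB_NO_MEMORY in MIDAS terms  */
   /* ... merge the record into data ... */
   free(data);
   return 1;
}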
K.O.