ID   Date   Author   Topic   Subject
  931   12 Nov 2013   Stefan Ritt   Forum   Installation problem
The warnings about the set but unused variables are real. John O'Donnell proposed:

==========

somewhere along the way I found an include file to help remove this kind of message.  Try something like:

#include "use.h"
int foo () { return 3; }
int main () {
 { USED int i=foo(); }
 return 0;
}

with -Wall, and you will see the unused messages are gone.

==========

I would rather go and remove the unused variables to clean up the code a bit. Unfortunately my gcc version does 
not yet bark at these. So once I get a newer version and have plenty of spare time (....) I will consider removing all 
these variables.

/Stefan
  932   13 Nov 2013   Konstantin Olchanski   Forum   Installation problem
> > I run into problems while trying to install Midas on Slackware 14.0.
> 
> Thank you for reporting this. We do not have any slackware computers so we usually cannot see these messages.
> 
> 
> src/midas.c: In function 'cm_transition2':
> src/midas.c:3769:74: warning: variable 'error' set but not used [-Wunused-but-set-variable]
> 

got around to looking at compile messages on ubuntu: in addition to "variable 'error' set but not used" we have these:

warning: ignoring return value of 'ssize_t write(int, const void*, size_t)'
warning: ignoring return value of 'ssize_t read(int, void*, size_t)'
warning: ignoring return value of 'int setuid(__uid_t)'
and a few more of a similar kind

K.O.
  933   13 Nov 2013   Stefan Ritt   Forum   Installation problem
> got around to looking at compile messages on ubuntu: in addition to "variable 'error' set but not used" we have these:
> 
> warning: ignoring return value of 'ssize_t write(int, const void*, size_t)'
> warning: ignoring return value of 'ssize_t read(int, void*, size_t)'
> warning: ignoring return value of 'int setuid(__uid_t)'
> and a few more of a similar kind

Arghh, now it is getting even more picky. I can understand the "variable xyz set but not used" warnings and I'm willing to remove all those variables. But checking the 
return value of every function? Well, if the disk gets full, our code will currently ignore this silently for write(), so maybe it's not a bad idea to add a few checks. Also 
for read() there could be problems where an explicit cm_msg() in case of an error would help.
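
For illustration, such a check could look like this (just a sketch; in the actual midas.c the routine name, wording and error reporting would of course differ, e.g. cm_msg() instead of fprintf()):

#include <unistd.h>
#include <stdio.h>
#include <errno.h>
#include <string.h>

/* sketch: check the write() return value instead of ignoring it */
static int checked_write(int fd, const void *buf, size_t size)
{
   ssize_t n = write(fd, buf, size);
   if (n != (ssize_t) size) {
      /* in midas.c this would rather be a cm_msg(MERROR, ...) call */
      fprintf(stderr, "write() returned %ld instead of %ld, errno %d (%s)\n",
              (long) n, (long) size, errno, strerror(errno));
      return 0;
   }
   return 1;
}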
  934   14 Nov 2013   Razvan Stefan Gornea   Forum   Installation problem

Hi, thanks a lot for the response! Yes, searching packages and listing their contents in Slackware is pretty similar to your illustration. Slackware seems to use iODBC, in which case it would link with -liodbc, I guess.

root@lheppc83:~# slackpkg file-search sql.h

Looking for sql.h in package list. Please wait... DONE

The list below shows the packages that contains "sql\.h" file.

[ installed ] - libiodbc-3.52.7-x86_64-2

You can search specific packages using "slackpkg search package".

root@lheppc83:~# cat /var/log/packages/libiodbc-3.52.7-x86_64-2
PACKAGE NAME:     libiodbc-3.52.7-x86_64-2
COMPRESSED PACKAGE SIZE:     255.0K
UNCOMPRESSED PACKAGE SIZE:     1.0M
PACKAGE LOCATION: /var/log/mount/slackware64/l/libiodbc-3.52.7-x86_64-2.txz
PACKAGE DESCRIPTION:
libiodbc: libiodbc (Independent Open DataBase Connectivity)
libiodbc:
libiodbc: iODBC is the acronym for Independent Open DataBase Connectivity,
libiodbc: an Open Source platform independent implementation of both the ODBC
libiodbc: and X/Open specifications.  It allows for developing solutions
libiodbc: that are language, platform and database independent.
libiodbc:
libiodbc:
libiodbc:
libiodbc: Homepage: http://iodbc.org/
libiodbc:
FILE LIST:
./
usr/
usr/share/
usr/share/libiodbc/
usr/share/libiodbc/samples/
usr/share/libiodbc/samples/iodbctest.c
usr/share/libiodbc/samples/Makefile
usr/man/
usr/man/man1/
usr/man/man1/iodbc-config.1.gz
usr/man/man1/iodbctestw.1.gz
usr/man/man1/iodbctest.1.gz
usr/man/man1/iodbcadm-gtk.1.gz
usr/bin/
usr/bin/iodbctest
usr/bin/iodbcadm-gtk
usr/bin/iodbctestw
usr/bin/iodbc-config
usr/include/
usr/include/iodbcinst.h
usr/include/sqlext.h
usr/include/iodbcunix.h
usr/include/isqltypes.h
usr/include/sql.h
usr/include/iodbcext.h
usr/include/isql.h
usr/include/odbcinst.h
usr/include/isqlext.h
usr/include/sqlucode.h
usr/include/sqltypes.h
usr/lib64/
usr/lib64/libiodbc.la
usr/lib64/libdrvproxy.so.2.1.19
usr/lib64/libiodbcinst.la
usr/lib64/libiodbcadm.so.2.1.19
usr/lib64/libiodbcinst.so.2.1.19
usr/lib64/libiodbcadm.la
usr/lib64/pkgconfig/
usr/lib64/pkgconfig/libiodbc.pc
usr/lib64/libiodbc.so.2.1.19
usr/lib64/libdrvproxy.la
usr/doc/
usr/doc/libiodbc-3.52.7/
usr/doc/libiodbc-3.52.7/ChangeLog
usr/doc/libiodbc-3.52.7/README
usr/doc/libiodbc-3.52.7/COPYING
usr/doc/libiodbc-3.52.7/AUTHORS
usr/doc/libiodbc-3.52.7/INSTALL
install/
install/doinst.sh
install/slack-desc

  935   14 Nov 2013   Konstantin Olchanski   Forum   Installation problem
# slackpkg file-search sql.h
[ installed ] - libiodbc-3.52.7-x86_64-2
...
# slackpkg search package
...
# cat /var/log/packages/libiodbc-3.52.7-x86_64-2
            usr/include/sql.h
...
            usr/lib64/libiodbc.so.2.1.19
...

Thanks, I am saving the slackpkg commands for future reference. Looks like the immediate problem is 
with the library name: libiodbc instead of libodbc. But the header file sql.h is the same.

I am not sure it is worth making a generic solution for this: on MacOS, all ODBC functions are now 
obsoleted and due to be removed, and since we are standardized on MySQL anyway, I think I will rewrite the SQL 
history driver to use the MySQL interface directly. Then all this extra ODBC layering will go away.

K.O.
  936   14 Nov 2013   Konstantin Olchanski   Forum   Installation problem
> #include "use.h"
>  { USED int i=foo(); }

Sounds nifty, but google does not find use.h.
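
Presumably it is just a thin wrapper around the gcc "unused" attribute, i.e. something like this (my guess, not the actual file):

/* use.h - guessed reconstruction, assuming gcc/clang attribute syntax */
#ifndef USE_H
#define USE_H
#if defined(__GNUC__)
#define USED __attribute__((unused))   /* tell the compiler the variable is intentionally unused */
#else
#define USED                           /* no-op for other compilers */
#endif
#endif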

As for unused variables, some can be removed, others not so much; there is code like this in there:

int i = blah...
#if 0
if (i == 42) printf("wow, we got a 42!\n");
#endif
and
if (0) printf("debug: i=%d\n", i);

(The difference: if you remove "i" or otherwise break the disabled debug code, the "#if 0" version will only complain the next time you re-enable that debugging code, while the 
"if (0)" version will complain right away.)

Some of this disabled debug code I would rather not remove - I have added, removed, added again and removed again so much debug scaffolding, all in the same 
places, that I cannot be bothered with removing it anymore. I "#if 0" it and it stays there until I need it the next time. But of course now gcc complains about it.

K.O.
  956   11 Feb 2014   Randolf Pohl   Forum   Huge events (>10MB) every second or so
I'm looking into using MIDAS for an experiment that creates one large event
(20MB or more) every second.

Q1: It looks like I should use EQ_FRAGMENTED. Has this feature been in use
recently? Is it known to work/not work?

More specifically, the computer should initiate 1 second of data taking, start to
suck the data out of the electronics (which may take a while), change some
experimental parameters, and start over. 

Q2: What's the best way to do this? EQ_PERIODIC? 
I cannot guarantee that the time required to read the hardware has an upper bound.
In a standalone-prog I would simply use a big loop and let the machine execute
it as fast as it can: 1.1s, 1.5s, 1.1s, 1.3s, 2.5s, ..... depending on the HW
deadtimes.
Will this work with EQ_PERIODIC?

(Sorry for these maybe stupid questions, but I have so far only used MIDAS for
externally generated events, with <32kB event size).


Thanks a lot,

Randolf
  957   11 Feb 2014   Stefan Ritt   Forum   Huge events (>10MB) every second or so
> I'm looking into using MIDAS for an experiment that creates one large event
> (20MB or more) every second.
> 
> Q1: It looks like I should use EQ_FRAGMENTED. Has this feature been in use
> recently? Is it known to work/not work?
> 
> More specifically, the computer should initiate a 1 second data taking, start to
> suck the data out of the electronics (which may take a while), change some
> experimental parameters, and start over. 
> 
> Q2: What's the best way to do this? EQ_PERIODIC? 
> I cannot guarantee that the time required to read the hardware has an upper bound.
> In a standalone-prog I would simply use a big loop and let the machine execute
> it as fast as it can: 1.1s, 1.5s, 1.1s, 1.3s, 2.5s, ..... depending on the HW
> deadtimes.
> Will this work with EQ_PERIODIC?
> 
> (Sorry for these maybe stupid questions, but I have so far only used MIDAS for
> externally generated events, with <32kB event size).
> 
> 
> Thanks a lot,
> 
> Randolf

Hi Randolf,

EQ_FRAGMENTED is kind of historical, from the days when computers had a few MB of memory and you had to play special tricks to get large data buffers through. Today I 
would just use EQ_PERIODIC and increase the MIDAS maximum event size to your needs. For details look here:

https://midas.triumf.ca/MidasWiki/index.php/Event_Buffer

The front-end scheduler is asynchronous, which means that your readout is called when the given period (1 second) has elapsed. If the readout takes longer 
than 1 s, the scheduler will (hopefully) call your readout again immediately after the event has been sent, so you automatically get your maximal data rate. At MEG we 
use 2 MB events at 10 Hz, so a 20 MB/sec data rate should not be a problem on decent computers.
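
A minimal equipment entry for that would look something like this (a sketch only; equipment name, buffer and readout routine are placeholders, the field layout is the same as in the standard examples):

EQUIPMENT equipment[] = {
   {"BigData",                      /* equipment name (placeholder) */
    {1, 0,                          /* event ID, trigger mask */
     "SYSTEM",                      /* event buffer */
     EQ_PERIODIC,                   /* equipment type */
     0,                             /* event source (unused for periodic events) */
     "MIDAS",                       /* format */
     TRUE,                          /* enabled */
     RO_RUNNING,                    /* read only when running */
     1000,                          /* readout period, ms */
     0,                             /* event limit */
     0,                             /* number of sub events */
     0,                             /* don't log history */
     "", "", "",},
    read_big_event,                 /* readout routine (placeholder) */
   },

   {""}
};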

Best,
Stefan
  962   18 Feb 2014   Konstantin Olchanski   Forum   Huge events (>10MB) every second or so
> I'm looking into using MIDAS for an experiment that creates one large event
> (20MB or more) every second.

Hi there - a 20 Mbyte event at 1/sec is not so large these days (well, depending on your hardware).

Using typical 1-2 year old PC hardware, 20 M/sec to local disk should work right away. Sending data from a 
remote frontend (through the mserver), or writing to a remote disk (NFS, etc.), will of course require a GigE 
network connection.

By default, MIDAS is configured for using about 1-2 Mbyte events, so for your case, you will need to:

- increase the event size limits in your frontend,
- increase /Experiment/MAX_EVENT_SIZE in ODB
- increase the size of the SYSTEM event buffer (/Experiment/Buffer sizes/SYSTEM in ODB)

I generally recommend sizing the SYSTEM event buffer to hold a few seconds' worth of data (to 
accommodate any delays in writing to local disk - competing reads, internal delays of the disks, etc.).

So for 20 M/s, the SYSTEM buffer size should be about 40-60 Mbytes.

For your case, you also want to buffer 3-5-10 events, so the SYSTEM buffer size would be between 100 and 
200 MBytes.
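
With odbedit, and using the 20 Mbyte / 100 Mbyte numbers from above, the last two settings would be applied roughly like this (a sketch; the new buffer size only takes effect once the SYSTEM buffer is re-created, i.e. after all clients have been stopped):

# odbedit
odbedit> set "/Experiment/MAX_EVENT_SIZE" 20971520
odbedit> set "/Experiment/Buffer sizes/SYSTEM" 104857600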

Assuming you have between 8-16-32 GBytes of RAM, this should not be a problem.

On the other hand, if you are running on a low-power ("green") ARM system with 1 Gbyte of RAM and a 
1 GHz CPU, you should still be able to handle the data rate of 20 Mbytes/sec, as long as your network and 
storage can handle it - I see GigE ethernet running at about 30-40 Mbytes/sec, so you should be okay,
but local storage to SD flash is only about 10 Mbytes/sec - too slow. You can try a USB-attached HDD or SSD; 
this should run at up to 30-40 Mbytes/sec. I would expect no problems with this rate from MIDAS, as long 
as you can fit into your 1 GByte of RAM - obviously your SYSTEM buffer will have to be a little smaller than 
on a full-featured PC.

More information on MIDAS event size limits is here (as already reported by Stefan)
https://midas.triumf.ca/MidasWiki/index.php/Event_Buffer

Let us know how it works out for you.

K.O.
  968   23 Feb 2014   William Page   Forum   db_check_record() for verifying structure of ODB subtree
Hi,

I have been trying to use db_check_record() in order to verify that a subtree in the ODB has the correct 
variables, variable order, and overall size. I'm going off the documentation 
(https://midas.psi.ch/htmldoc/group__odbfunctionc.html) and am using a string to compare against the ODB 
structure.  Since the string format is not specified for db_check_record(), I'm formatting my string 
according to the db_create_record() example.
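
Concretely, the call looks something like this (a sketch with made-up key names; the record string uses the same "key = TYPE : value" format as the db_create_record() example):

#include "midas.h"

/* sketch only: subtree path and key names are made up for illustration */
INT check_my_settings(void)
{
   static const char *settings_str =
      "Enabled = BOOL : 1\n"
      "Threshold = INT : 100\n"
      "Comment = STRING : [32] none\n";
   HNDLE hDB;

   cm_get_experiment_database(&hDB, NULL);
   /* I would expect a mismatch to be reported if e.g. Threshold is misspelled or missing */
   return db_check_record(hDB, 0, "/Equipment/Test/Settings", settings_str, FALSE);
}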

Instead of db_check_record() checking the entire ODB subtree against all the variables represented in the 
string, I'm finding that only the first variable is checked.  The later variables in the string can be 
misspelled, out of order, or nonexistent, and db_check_record() will still return 1.

Am I using db_check_record incorrectly?  

Thank you for any help with this issue.


I also believe that some of the documentation for db_check_record is outdated.  For example, init_string 
is referenced in the documentation but isn't part of the function definition.
  975   01 Mar 2014   Randolf Pohl   Forum   Huge events (>10MB) every second or so
Works, and here is how I did it. The attached example is based on the standard MIDAS
example in "src/midas/examples/experiment". 

My somewhat unsorted notes; I haven't really tweaked the numbers. But it WORKS.

(1) mlogger writes "last.xml" (hard-coded!), which takes an awful amount of time
    as it writes the complete ODB containing the 10 MB bank!
    Just comment out
       // odb_save("last.xml");
    in mlogger.cxx, in the function
    INT tr_start(INT run_number, char *error)
    (around line 3870 in mlogger rev. 5377; the .cxx file is included)

(2) frontend.c:
     * the most important declarations are

/* BIG_DATA_BYTES is the data in 1 bank
   BIG_EVENT_SIZE is the event size. It's a bit larger than the bank size
                  because MIDAS needs to add some header bytes, I think
 */

#define BIG_DATA_BYTES  (10*1024*1024)   // 10 MB
#define BIG_EVENT_SIZE  (BIG_DATA_BYTES + 100)

/* maximum event size produced by this frontend */
INT max_event_size = BIG_EVENT_SIZE;

/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * BIG_EVENT_SIZE;

/* buffer size to hold 10 events */
INT event_buffer_size = 10 * BIG_EVENT_SIZE;


     * bk_init() can hold at most 32 kByte events! Use bk_init32() instead (see the sketch below).

     * complete frontend.c is attached
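
     * for reference, the bank creation then looks roughly like this (a sketch only;
       the attached frontend.c is the authoritative version, bank name and dummy
       fill are made up):

INT read_big_event(char *pevent, INT off)
{
   DWORD *pdata;
   DWORD i;

   bk_init32(pevent);                      /* 32-bit bank headers, needed for banks > 32 kB */
   bk_create(pevent, "BIG0", TID_DWORD, (void **)&pdata);
   for (i = 0; i < BIG_DATA_BYTES / sizeof(DWORD); i++)
      pdata[i] = i;                        /* dummy fill, stands in for the real readout */
   pdata += BIG_DATA_BYTES / sizeof(DWORD);
   bk_close(pevent, pdata);

   return bk_size(pevent);
}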

(3) in an xterm do
    # . setup.sh
    # odbedit -s 41943040
          (the first invocation of odbedit must create a large enough ODB,
           otherwise you'll get "odb full" errors)
(4) odbedit> load big.odb      
    (attached). Essentials are:

    /Experiment/MAX_EVENT_SIZE = 20971520
    /Experiment/Buffer sizes/SYSTEM = 41943040   <- at least 2 events!

    To avoid excessive latencies when starting/stopping a run, do
    /Logger/ODB Dump = no 
    /Logger/Channels/0/Settings/ODB Dump = no 
     
    and create an Equipment Tree to make the mlogger happy

(5) a few more xterms, always ". setup.sh":
    # mlogger_patched (see (1))
    # ./frontend  (attached)

(6) in the odbedit from (4), say "start". You should fill your disk rather quickly.
Attachment 1: big_event.tgz
  976   01 Mar 2014   Stefan Ritt   Forum   Huge events (>10MB) every second or so
> Works, and here is how I did it. The attached example is based on the standard MIDAS
> example in "src/midas/examples/experiment". 

If you have such huge events, it does not make sense to put them into the ODB. The ODB size needs to be increased (as 
you realized correctly) and the run stop takes a long time if you write last.xml. So just remove the RO_ODB flag in the 
frontend program and you won't have these problems.
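
I.e. in the EQUIPMENT definition of the frontend, the "read on" flags would change along these lines (just a sketch, the actual flag combination in your frontend may differ):

   RO_RUNNING | RO_ODB,     /* before: each event is also copied into the ODB */
   RO_RUNNING,              /* after: events go only to the buffer and the logger */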

/Stefan
  978   11 Mar 2014   Andreas Suter   Forum   mlogger problem
I stumbled over a problem which I cannot pin point and would appreciate suggestions.

I set up an experiment, and all of a sudden I noticed the following behaviour.

I can start any number of frontends without any problems as long as mlogger is NOT running.
I can also start mlogger without any problems. However, as soon as I have started the mlogger, I cannot start anything else any more (including odbedit). I get the following assertion failure:
16:07:06 [Logger,INFO] Program Logger on host lem00 started
[local:nemu:S]/>q
[nemu@lem00 2014]$ odbedit -e nemu
odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
Aborted
This happens even if I stop all frontends, start only the mlogger and afterwards try to start odbedit.

I tried to see whether this is a generic problem using a test experiment, but there I cannot reproduce it. It seems that there is either something wrong with the ODB, or something wrong with hotlinks, ..., I don't know.

I would appreciate suggestions on how to pinpoint the issue.
  979   11 Mar 2014   Stefan Ritt   Forum   mlogger problem

Andreas Suter wrote:
I stumbled over a problem which I cannot pin point and would appreciate suggestions.

I set up an experiment, and all of a sudden I noticed the following behaviour.

I can start any number of frontends without any problems as long as mlogger is NOT running.
I can also start mlogger without any problems. However, as soon as I started the mlogger, I cannot start anything else any more (including odbedit). I get the following assertion:
16:07:06 [Logger,INFO] Program Logger on host lem00 started
[local:nemu:S]/>q
[nemu@lem00 2014]$ odbedit -e nemu
odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
Aborted
This is even happening if I stop all frontends, start only the mlogger and afterwards try to start odbedit.

I tried to see if this is a generic feature on a test experiment, but there I cannot reproduce it. It seems that there is either something wrong with the ODB, something wrong with hotlinks, ..., I don't know.

I would appreciate suggestions on how to pinpoint the issue.


K.O. put that in: https://bitbucket.org/tmidas/midas/commits/9d7b7c83b275a2bd3c846c4f265ff7f5d53f3426

He should have a look at it.

Have you tried to rebuild your ODB from scratch? (Save it in XML, then delete .ODB.SHM, then load it again from XML.)

/Stefan
  980   11 Mar 2014   Andreas Suter   Forum   mlogger problem

Stefan Ritt wrote:

Andreas Suter wrote:
I stumbled over a problem which I cannot pin point and would appreciate suggestions.

I set up an experiment, and all of a sudden I noticed the following behaviour.

I can start any number of frontends without any problems as long as mlogger is NOT running.
I can also start mlogger without any problems. However, as soon as I started the mlogger, I cannot start anything else any more (including odbedit). I get the following assertion:
16:07:06 [Logger,INFO] Program Logger on host lem00 started
[local:nemu:S]/>q
[nemu@lem00 2014]$ odbedit -e nemu
odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
Aborted
This is even happening if I stop all frontends, start only the mlogger and afterwards try to start odbedit.

I tried to see if this is a generic feature on a test experiment, but there I cannot reproduce it. It seems that there is either something wrong with the ODB, something wrong with hotlinks, ..., I don't know.

I would appreciate suggestions on how to pinpoint the issue.


K.O. put that in: https://bitbucket.org/tmidas/midas/commits/9d7b7c83b275a2bd3c846c4f265ff7f5d53f3426

He should have a look at it.

Have you tried to rebuild your ODB from scratch? (Save in XML, then delete .ODB.SHM, then load again form XML)?

/Stefan

Yes, I could recover the ODB by falling back to a previous dump. Still, I would like to know the exact meaning of the above assertion. It might help in understanding the likely causes which lead to it.

/Andreas
  984   14 Mar 2014   Konstantin Olchanski   Forum   mlogger problem
> I stumbled over a problem which I cannot pin point and would appreciate suggestions.
> 
> [nemu@lem00 2014]$ odbedit -e nemu
> odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
> Aborted

I think this is a real bug in MIDAS - I will have to take a look to figure out where this is coming from. At the 
least, if I cannot replace the assert with some corrective action, I may replace it with an error message.

I am glad you could recover by reloading odb.

K.O.
  985   17 Mar 2014   Zhi Li   Forum   [need help] simple example frontend for CAEN VX1721
Dear guys,

I’m Zhi Li from China, and I’m now working on my graduation project, which is now
basically stuck at the stage of preparing the frontend for my FADC (CAEN
VX1721) using Midas.

The current set-up includes a VME crate, a CAEN V2718 (Optical Bridge and
Controller) and a CAEN VX1721 (8-channel, 8-bit, 500 MS/s waveform digitizer). The hardware
set-up has been finished and I can capture the analog waveform using the CAEN
software (wavedump). 

Could anyone please tell me what the basic things to do are for using MIDAS?
I’ve installed MIDAS on a PC and it works well for CAMAC, but do I need any extra
hardware module for using a VME crate? Also, how do I download the
Universe-II VME driver?

Thanks,
Li
  987   17 Mar 2014   Pierre-Andre Amaudruz   Forum   [need help] simple example frontend for CAEN VX1721
Hi Li,

You mention that you've got wavedump working. That suggests that you have an A3818 
interface; can you confirm that?

If so, you can make a Midas frontend using the CAEN libraries to access your VX1721. I can provide you with a frontend example used for the V1720 or V1740. The 
modifications for the VX1721 shouldn't be too hard as most of the CAEN digitizers 
are fortunately based on a similar configuration mechanism.
If you have a Midas CAMAC frontend, the trick would be to replace the CAMAC calls by 
the appropriate CAENComm_xxx() for the equivalent functionality.
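
Roughly like this (from memory, please check CAENComm.h for the exact prototypes and enum names; link number, VME base address and register offset are just placeholders):

#include <stdint.h>
#include "CAENComm.h"

int read_register_example(void)
{
   int handle;
   uint32_t data = 0;

   /* open the board through the optical link, then do plain register accesses */
   if (CAENComm_OpenDevice(CAENComm_OpticalLink, 0, 0, 0x32100000, &handle) != CAENComm_Success)
      return -1;
   CAENComm_Read32(handle, 0x8140, &data);    /* read one board register */
   CAENComm_Write32(handle, 0x8140, data);    /* ...and write one, as the CAMAC-replacement example */
   CAENComm_CloseDevice(handle);
   return (int) data;
}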

Can you remind me what hardware you have in your lab for acquisition?
CAMAC controller, VME controller, etc.?

Cheers, PAA

> Dear guys,
> 
> I’m Zhi Li from China, and I’m now working on my graduation project, which now
> basically gets stuck in the part of preparing the frontend for my FADC (CAEN
> VX1721) using Midas.
> 
> Now the current set-up includes a VME crate, a CAEN v2718 (Optical Bridge and
> Controller) and a CAEN VX1721(8ch 8bit 500MS/s Waveform digitizer). The hardware
> set-up has been finished and I could capture the analog waveform using CAEN
> software(wavedump). 
> 
> Could anyone please tell me what are the basic things to do for using MIDAS?
> I’ve installed MIDAS in PC and it works well for CAMAC, but do I need any extra
> hardware module on using VME crate? Also, how to download
> Universe-II VME driver?
> 
> Thanks,
> Li
  989   17 Mar 2014   Zhi Li   Forum   [need help] simple example frontend for CAEN VX1721
Hi Pierre,

Thanks for your instructions. Before I run the wavedump software, I need to load a driver for the A2818, so I think I have an A2818 interface.

I would be grateful to have a look at the frontend example used for the V1720 (closer to the V1721, I suppose); would you be so kind as to offer me the Makefile as well? I
really want to have a compilable/executable DAQ frontend for VME modules, and to understand better how to link to the CAEN library in the Makefile.

About the hardware currently used in the VME crate (connected via the A2818): there is a VME controller (V2718, CONET VME bridge) and an FADC (VX1721 waveform digitizer). I'm now preparing
this DAQ system to compare the relative quantum efficiency, timing resolution and 1 p.e. distribution of photomultipliers, and also to measure the decay time of cosmic muons and the
electron spectrum. I would humbly like to know your opinion on whether I need additional hardware for these measurements.

Thanks,
Li

> Hi Li,
> 
> You mention that you've got the wavedump working. It suggests that you have a A3818 
> interface, can you confirm that?
> 
> If so, you can make a Midas frontend using the CAEN libraries to access your VX1721. I can provide you with a frontend example used for the V1720 or V1740. The 
> modifications for the VX1721 shouldn't be too hard as most of the CAEN digitizers 
> are fortunately based on a similar configuration mechanism.
> If you have a Midas CAMAC frontend, the trick would be to replace the CAMAC calls by 
> the appropriate CAENComm_xxx() for the equivalent functionality.
> 
> Can you remind me what hardware do you have in your lab for acquisition?
> CAMAC controller, VME controller etc.
> 
> Cheers, PAA
> 
> > Dear guys,
> > 
> > I’m Zhi Li from China, and I’m now working on my graduation project, which now
> > basically gets stuck in the part of preparing the frontend for my FADC (CAEN
> > VX1721) using Midas.
> > 
> > Now the current set-up includes a VME crate, a CAEN v2718 (Optical Bridge and
> > Controller) and a CAEN VX1721(8ch 8bit 500MS/s Waveform digitizer). The hardware
> > set-up has been finished and I could capture the analog waveform using CAEN
> > software(wavedump). 
> > 
> > Could anyone please tell me what are the basic things to do for using MIDAS?
> > I’ve installed MIDAS in PC and it works well for CAMAC, but do I need any extra
> > hardware module on using VME crate? Also, how to download
> > Universe-II VME driver?
> > 
> > Thanks,
> > Li
  990   15 Apr 2014   Wes Gohn   Forum   C++11 error
I am having some trouble creating a frontend that interacts with some libraries that use C++11. 

The flag I added to my MIDAS Makefile to get the C++11 part of the code to work is -std=c++0x. This 
causes an error in the equipment description in the frontend code.

The error I get is:

frontend.cpp:149: error: narrowing conversion of ‘-0x00000000000000001’ from ‘int’ to ‘WORD’ inside { }

This corresponds to the following in the MIDAS frontend code.

EQUIPMENT equipment[] = { 
  {
   "MWPC",                           /* equipment name */
   {1, TRIGGER_ALL,                  /* event ID, trigger mask */
     "BUF2",                        /* event buffer */
     EQ_POLLED | EQ_EB,              /* equipment type */
     LAM_SOURCE(0, 0xFFFFFF),        /* event source crate 0, all stations */
     "MIDAS",                        /* format */
     TRUE,                           /* enabled */
     RO_RUNNING,                     /* read only when running */
     1,                              /* poll for 1ms */
     0,                              /* stop run after this event limit */
     0,                              /* number of sub events */
     0,                              /* don't log history */
     "", "", "",},
    read_trigger_event,              /* readout routine */
  },

   {""}
};  <- this is line 149
#ifdef __cplusplus
}
#endif

Do you know a way to make this compatible with C++11?
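
I guess the -1 being narrowed is TRIGGER_ALL, which C++11 refuses to implicitly narrow to a WORD inside a braced initializer. Would an explicit cast like this be the right fix, or is there a better way?

   {1, (WORD) TRIGGER_ALL,           /* event ID, trigger mask, with an explicit cast */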

Thanks!