Entry  15 Dec 2014, Amy Roberts, Forum, lock ODB variables within sequencer? 
Hello,

I'm wondering if it would be possible to add the ability to lock ODB variables as 
a sequencer command.

The "Lock when running" directory in the ODB /Experiment tree seems to apply only 
during a run - I'd like a way to lock a variable outside a run.

Is this possible within the sequencer?  Or have I overlooked existing 
functionality?

Thanks!

Amy
Entry  13 Nov 2014, Tim Gorringe, Forum, using single frontend with multiple "EQ_POLLED" equipments to generate different data streams  

We have a MIDAS frontend that provides both the readout of raw events 
and the processing of raw events into several distinct derived datasets. 
For one type of derived dataset there is a derived event for 
each raw event. For other types of derived datasets there's a
derived event for every N raw events. We'd like to have the different 
derived event types sent to different buffers / shared memory segments 
and stored in different midas files.

I was thinking of defining a separate equipment for each type of 
derived data. Each equipment would have a different buffer name so
the data would go to different buffers and thereafter to different 
midas files. I was also thinking of defining each equipment as
a "polled event" but with a unique "source ID". I believe the user 
poll_event() function is passed the "source ID" of the equipment
type and therefore could return success/fail based on whether or
not the particular derived event with that source ID is available 
for readout. Each equipment for each derived dataset would have 
a unique readout routine to create and fill the midas databanks 
for that derived event type.
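
For what it's worth, a minimal sketch of such an equipment list and a source-ID-based
poll_event() might look like the following (the event IDs, buffer names and the
*_event_available()/read_*_event() helpers are placeholders, not code from an actual frontend):

#define RAW_SOURCE     1
#define DERIVED_SOURCE 2

EQUIPMENT equipment[] = {

  {"Raw",                       // equipment name
    {1, 0,                      // event ID, trigger mask
    "BUF_RAW",                  // event buffer
    EQ_POLLED,                  // equipment type
    RAW_SOURCE,                 // event source, used here as a source ID
    "MIDAS",                    // data format
    TRUE,                       // equipment enabled
    RO_RUNNING,                 // read only when running
    500,                        // poll for 500 ms
    0,                          // stop run after this event limit
    0,                          // number of sub events
    0,                          // don't log history
    "", "", "",
    },
    read_raw_event,             // readout routine
  },

  {"Derived",                   // same structure, different buffer and source ID
    {2, 0,
    "BUF_DERIVED",
    EQ_POLLED,
    DERIVED_SOURCE,
    "MIDAS", TRUE, RO_RUNNING,
    500, 0, 0, 0,
    "", "", "",
    },
    read_derived_event,
  },

  {""}
};

// poll_event() receives the "event source" of the equipment being polled,
// so it can dispatch on the source ID
INT poll_event(INT source, INT count, BOOL test)
{
  INT i;
  BOOL ready;

  for (i = 0; i < count; i++) {
    ready = (source == RAW_SOURCE) ? raw_event_available()
                                   : derived_event_available();
    if (ready && !test)
      return TRUE;
  }

  return FALSE;
}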

The above scheme is similar to the midas documentation example
of a frontend with a trigger equipment and a scaler equipment.
However, the scaler / trigger example uses two different event
types - EQ_POLLED for trigger and EQ_PERIODIC for scaler. I'd like
to use several EQ_POLLED equipments that are distinguished by
their source IDs.

Is this a sensible scheme for making different data streams of
different derived event types from a single frontend? Has anyone
tried something similar? 
    Reply  13 Nov 2014, Pierre-Andre Amaudruz, Forum, using single frontend with multiple "EQ_POLLED" equipments to generate different data streams  
Hi Tim,

Multiple polled equipments are possible, but you may have to balance the polling 
time based on the expected trigger rate of each equipment and the 
acquisition/processing time each equipment needs.

But instead of using the event buffer destination for the dataset selection, you 
could use the trigger mask and the event ID modified at the user code level from 
a single equipment.

Using macros such as TRIGGER_MASK(pevent) and EVENT_ID(pevent) you can modify 
their assignment on the fly. All events go through the SYSTEM buffer as usual.
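
As a rough illustration (the bank name "DRV0" and the is_summary_event() test are
made up for this sketch, not part of any existing frontend), the readout routine of
a single equipment could retag each event so the logger channels can later select
on the trigger mask:

INT read_derived_event(char *pevent, INT off)
{
   WORD *pdata;

   bk_init(pevent);
   bk_create(pevent, "DRV0", TID_WORD, (void **)&pdata);
   *pdata++ = 0;                       // placeholder payload
   bk_close(pevent, pdata);

   // retag the event so the logger channels can filter on it
   if (is_summary_event())
      TRIGGER_MASK(pevent) = 0x0002;   // bit 1: summary stream
   else
      TRIGGER_MASK(pevent) = 0x0001;   // bit 0: per-event stream

   return bk_size(pevent);
}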

You then use the data logger's multiple-channel capability to steer the data into 
different files. 
Each logger channel requires a definition of the type of event that you want to 
record; EventID and TriggerMask can in this case be used to select a particular 
type of event.
I used this option and, if I recall correctly, the trigger mask is the one you 
want to base your selection upon. This gives you up to 16 channels (bitwise). 
The event ID should remain -1 in the logger channel, but it is still valid 
information coming from the FEs.

Cheers, PAA


> 
> 
> We have a MIDAS frontend that provides both the readout of raw events 
> and the processing of raw events into several distinct derived datasets. 
> For one type of derived dataset there is a derived event for 
> each raw event. For other types of derived datasets there's a
> derived event for every N raw events. We'd like to have the different 
> derived event types sent to different buffers / shared memory segments 
> and stored in different midas files.
> 
> I was thinking of defining a separate equipment for each type of 
> derived data. Each equipment would have a different buffer name so
> the data would go to different buffers and thereafter to different 
> midas files. I was also thinking of defining each equipment as
> a "polled event" but with a unique "source ID". I believe the user 
> poll_event() function is passed the "source ID" of the equipment
> type and therefore could return success/fail based on whether or
> not the particular derived event with that source ID is available 
> for readout. Each equipment for each derived dataset would have 
> a unique readout routine to create and fill the midas databanks 
> for that derived event type.
> 
> The above scheme is similar to the midas documentation example
> of a frontend with a trigger equipment and a scaler equipment
> However, the scaler / trigger example uses two different event
> types - EQ_POLLED for trigger and EQ_PERIODIC for scaler. I'd like
> to use several EQ_POLLED equipments that are distinguished by
> their source ID's
> 
> Is this a sensible scheme for make different data streams of
> different derived event types from a single frontend? Has anyone
> tried something similar? 
Entry  12 Nov 2014, Robert Pattie, Forum, struct mismatch logger_channels.pdf
Hi all,
  I've started receiving the following error that I can't track down.  Does
anyone have a suggestion for where to start looking for the cause of this?

[Analyzer,ERROR] [odb.c:9460:db_open_record,ERROR] struct size mismatch for "/"
(expected size: 576, size in ODB: 0)

This error prevents me from running two runs in a row.  I have to close the DAQ
and restart to take multiple runs.  Also it prevents me from running the analyzer
in offline mode. 

I also noticed that several of the ODB directories no longer have the same html
format when viewed through the browser.  I've attached a screenshot of the
"/Logger/Channels" page.

Thanks,
Robert
Entry  19 May 2014, Razvan Stefan Gornea, Forum, Weird problem on new installation 

Hello,

 
I have a very weird problem on a new installation of Midas running also new code. My old code was written with the CAEN VME library and ran from an older PC with a CAEN interface. I have now moved to a GEFanuc V7769 with a Tundra II bridge and the frontend uses the UniverseII VME library. I'm mentioning this just to point out that the code is new, not just the Midas installation. I don't think the change of VME library has anything to do with my problem.
 
Anyway, except for the VME access, the frontend code is the same. In particular, all accesses to the online database are identical. The main problem I'm facing is that I cannot create the record with the frontend configuration data in the ODB. Here is a bit of code from my frontend_init()
 
rstat = db_create_record(hDB, 0, OWNER_EQUIPMENT, v1740_conf_str);
if (rstat != DB_SUCCESS) {
  cm_msg(MERROR, frontend_name, "could not create record for the V1740 DAQ configuration");
  cm_msg(MERROR, frontend_name, "call to db_create_record returned %d", rstat);
 
  return rstat;
}
 
and these are some messages from the Midas log
 
Mon May 19 18:23:42 2014 [Charge Frontend,INFO] Program Charge Frontend on host lheppc78 started
Mon May 19 18:23:42 2014 [Charge Frontend,ERROR] [frontend.c:153:Charge Frontend,ERROR] could not create record for the V1740 DAQ configuration
Mon May 19 18:23:42 2014 [Charge Frontend,ERROR] [frontend.c:154:Charge Frontend,ERROR] call to db_create_record returned 320
Mon May 19 18:23:42 2014 [Charge Frontend,INFO] Program Charge Frontend on host lheppc78 stopped
 
The error 320 (DB_OPEN_RECORD) is essentially saying that there is a client which has already opened this key! But I'm pretty sure there is not!
 
I have checked the path OWNER_EQUIPMENT. I also used odbedit to make folders and variables in /Equipment/. I erased the hidden files in my data folder. I checked the experiment definition. I even followed an example in the docs on how to clean up corrupted shared memory, erased /dev/shm.*, etc. I essentially compared my old installation with the new one item by item and everything seems the same.
 
So at this point I'm very puzzled!!! I don't know what to look for further. Do you have any idea what I could check for!?!
 
There were a few things that went wrong while getting the new installation ready, but I solved them. I mention them just in case it is important.
 
A) I had difficulties starting the system on the new machine. Some hidden files in the data folder (.ODB.SHM, .SYSTEM.SHM, etc.) were somehow created with root:root ownership and then mlogger and co. could not start! Also, some files in /dev/shm/ were created the same way and that gave some problems. I solved these simply by chown'ing everything to user:group, but I don't understand why this happened and I don't remember having to do that on my old system.
 
B) I use Slackware and, instead of the ODBC library, I had only iODBC available, so I switched to that in order to compile (which by the way seemed fine). I have no idea whether this could somehow be related to my current problem.
 
Thanks a lot for your help!
 
Cheers,
Razvan Gornea
 
 
    Reply  22 May 2014, Razvan Stefan Gornea, Forum, Weird problem on new installation 

I reduced the parameter space a little bit and I think the problem is somewhere in the framework. What I did first was to write a short program which accesses the ODB, and with it I was able to access the Settings record. Everything seems to work fine: if part or all of the record is missing, it is created automatically, etc.

Then I reduced my frontend to essentially a few lines in frontend_init() which are identical to what I did in the small test program. Still, when running the frontend it doesn't work and db_create_record() returns the error DB_OPEN_RECORD. I tried something crazy, i.e. disconnecting the experiment and then reconnecting inside frontend_init(), and then I was able to write the Settings record to the ODB! I have checked in mfe.c and odb.c and I think that when my frontend_init() gets called, the record is already open!

Does anybody skilled with Midas know what I can do to solve or investigate this problem further?

This is the small program that can successfully access the ODB and create the Settings record.

// test program
#include <stdio.h>
#include <stdlib.h>
#include "midas.h"
#include "v1740_daq_settings.h"

V1740_DAQ_CONF_STR(v1740_conf_str);

int main(void)
{
  int status;
  HNDLE hDB;
  char temp_name[NAME_LENGTH];
  
  status = cm_connect_experiment("", "", "test", NULL);
  if (status != CM_SUCCESS) {
    printf("Oups could not connect to the experiment\n");
    
    return 1;
  }
  cm_get_experiment_database(&hDB, NULL);
  status = db_create_record(hDB, 0, OWNER_EQUIPMENT, v1740_conf_str);
  if (status != DB_SUCCESS) {
    printf("Oups failed to create DB record!\nCall to db_create_record() has returned %d", status);
    cm_disconnect_experiment();
    
    return 2;
  }
  
  cm_get_experiment_name(temp_name, NAME_LENGTH-1);
  cm_msg(MINFO, "test", "experiment name is <|%s|>", temp_name);
  printf("The test is successful!\nNo errors occurred!\n");
  cm_disconnect_experiment();
  
  return 0;
}

This is the frontend_init() part (there is nothing left in the frontend anyway). This is exactly the same code as the previous one, but it won't work! So I concluded that the problem is already present when frontend_init() gets called by mfe.c.

#include <stdio.h>
#include <stdlib.h>
#include "midas.h"
#include "v1740_daq_settings.h"


/* make frontend functions callable from the C framework */
#ifdef __cplusplus
extern "C" {
#endif

/*-- Globals -------------------------------------------------------*/

/* The frontend name (client name) as seen by other MIDAS clients   */
char *frontend_name = "Charge Frontend";
/* The frontend file name, don't change it */
char *frontend_file_name = __FILE__;

/* frontend_loop is called periodically if this variable is TRUE    */
BOOL frontend_call_loop = FALSE;

/* a frontend status page is displayed with this frequency in ms */
INT display_period = 2100;

/* maximum event size produced by this frontend */
INT max_event_size = 2304 * 1024 + 128;

/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * 1024 * 1024;

/* buffer size to hold events */
INT event_buffer_size = 64 * (2304 * 1024 + 128);

V1740_DAQ_CONF_STR(v1740_conf_str); // string representation for the database record for the configuration of the v1740 DAQ board 
HNDLE hDB = 0; // handle on database
HNDLE hConf = 0; // handle for the configuration section


/*-- Function declarations -----------------------------------------*/

INT frontend_init();
// ... etc ...

/*-- Equipment list ------------------------------------------------*/

#undef USE_INT

EQUIPMENT equipment[] = {
  
  {"CAEN_V1740",                // equipment name
    {1, 0,                      // event ID, trigger mask
    "SYSTEM",                   // event buffer
    EQ_POLLED | EQ_MANUAL_TRIG, // equipment type
    LAM_SOURCE(0, 0xFFFFFF),    // event source crate 0, all stations
    "MIDAS",                    // data format
    TRUE,                       // equipment enabled
    RO_RUNNING,                 // read only when running and update ODB
    500,                        // poll for 500 ms
    0,                          // stop run after this event limit
    0,                          // number of sub events
    0,                          // don't log history
    "", "", "",
    },
    read_CAEN_V1740_event,         // readout routine
  },

  {""}
};

#ifdef __cplusplus
}
#endif


INT frontend_init()
{
  INT rstat = SUCCESS; // temp variable for Midas func. return codes
  char temp_name[NAME_LENGTH];
  
//   cm_disconnect_experiment();
//   cm_msg(MINFO, frontend_name, " *** DISCONNECTED FROM THE EXPERIMENT *** ");
//   rstat = cm_connect_experiment("", "", frontend_name, NULL);
//   if (rstat != CM_SUCCESS) {
//     cm_msg(MERROR, frontend_name, "Oups could not connect to the experiment");
//     
//     return rstat;
//   }
//   cm_msg(MINFO, frontend_name, " *** CONNECTED AGAIN TO THE EXPERIMENT *** ");
  
  // get handle on database
  cm_get_experiment_database(&hDB, NULL);
  // create or check for configuration data structure
  rstat = db_create_record(hDB, 0, OWNER_EQUIPMENT, v1740_conf_str);
  if (rstat != DB_SUCCESS) {
    cm_msg(MERROR, frontend_name, "could not create record for the V1740 DAQ configuration");
    cm_msg(MERROR, frontend_name, "call to db_create_record returned %d", rstat);
    cm_get_experiment_name(temp_name, NAME_LENGTH-1);
    cm_msg(MERROR, frontend_name, "experiment name is <|%s|>", temp_name);
  
    return rstat;
  }
  cm_msg(MINFO, frontend_name, " *** SUCCESSFULLY CREATED THE RECORD IN ODB *** ");

  
  return 11;
//  return SUCCESS;
}
 
       Reply  27 May 2014, Razvan Stefan Gornea, Forum, Weird problem on new installation 
I investigated this problem further and this is what I think is going on. Essentially, when creating the Settings key, the framework scans all the subkeys of the parent key to see if any are open; if so, it returns an error when creating the Settings key! Here are the details:

db_create_record() : check if global open_count neq. 0
-> db_scan_tree_link() : call back to check_open_keys()
-> check_open_keys() : implicitly changes global open_count neq. 0 when key->notify_count > 0
-> db_scan_tree_link() : if key.type eq. TID_KEY then get list subkeys
-> for all subkey : call recursively db_scan_tree_link()

In my case I have the following structure in ODB right before the framework calls frontend_init():
/Equipment/CAEN_V1740 [CLOSE]
/Equipment/CAEN_V1740/Variables [CLOSE]
/Equipment/CAEN_V1740/Common [OPEN]
/Equipment/CAEN_V1740/Statistics [OPEN]
/Equipment/CAEN_V1740/Settings [CLOSE]

What I don't know for sure is whether it is expected to have Common and Statistics still open when frontend_init() is called. Also, I don't understand if the recursive check for open links is really necessary!?! If yes, then the small example just in front of the db_create_record() code makes no sense! As well as the example in the documentation!

I have checked with a debugger that my database is not somehow corrupted, and it is indeed the framework that opens Common and Statistics. To me everything looks fine in the sense that I think it is expected to have these two keys open!

This is the output from an odb.c modified for debugging:

exodaq@lheppc78:~/daq/xlr$ cat ~/xlr/midas.log
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Program Charge Frontend on host lheppc78 started
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found write to notify_count for Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] found an open key with message /Equipment/CAEN_V1740/Common open 1 times by "Charge Frontend"
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] found an open key with message /Equipment/CAEN_V1740/Common open 1 times by "Charge Frontend"
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] status after creating record for the statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Has created successfully a statistics record
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found write to notify_count for Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Has open successfully a statistics record
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] successfully terminated equipment registration
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] found an open key with message /Equipment/CAEN_V1740/Common open 1 times by "Charge Frontend"
Tue May 27 11:08:05 2014 [Charge Frontend,ERROR] [odb.c:8869:check_open_keys,ERROR] Found open key named Common
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Found read access to Statistics
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] found an open key with message /Equipment/CAEN_V1740/Statistics open 1 times by "Charge Frontend"
Tue May 27 11:08:05 2014 [Charge Frontend,ERROR] [odb.c:8869:check_open_keys,ERROR] Found open key named Statistics

Tue May 27 11:08:05 2014 [Charge Frontend,ERROR] [frontend.c:151:Charge Frontend,ERROR] could not create record for the V1740 DAQ configuration
Tue May 27 11:08:05 2014 [Charge Frontend,ERROR] [frontend.c:152:Charge Frontend,ERROR] call to db_create_record returned 320
Tue May 27 11:08:05 2014 [Charge Frontend,ERROR] [frontend.c:154:Charge Frontend,ERROR] experiment name is <|xlr|>
Tue May 27 11:08:05 2014 [Charge Frontend,INFO] Program Charge Frontend on host lheppc78 stopped
          Reply  06 Nov 2014, Stefan Ritt, Forum, Weird problem on new installation 

Razvan Stefan Gornea wrote:
In my case I have the following structure in ODB right before the framework calls frontend_init():
/Equipment/CAEN_V1740 [CLOSE]
/Equipment/CAEN_V1740/Variables [CLOSE]
/Equipment/CAEN_V1740/Common [OPEN]
/Equipment/CAEN_V1740/Statistics [OPEN]
/Equipmemt/CANE_V1740/Settings [CLOSE]



Sorry for my late reply, but I only found time today to have a look at this.

It is absolutely ok to have the Common and Statistics subtrees in the ODB open. So if anybody modifies anything in the Common tree for example, the frontend gets notified directly via the hot link mechanism. Having a subtree "open" however means that the structure of that tree may not be changed, since it's directly mapped onto a fixed C structure. If you create a subtree via the db_create_record() function, you modify the structure of that tree, and thus it may not be open by other clients.

Your problem can be fixed if you create only the /Equipment/CAEN_V1740/Settings tree (which is not open), instead of the full /Equipment/CAEN_V1740 tree, which contains the open Common and Statistics subtrees.
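
In code, that could look roughly like the sketch below. V1740_SETTINGS_STR is a hypothetical ASCII record string describing only the Settings structure (analogous to the full V1740_DAQ_CONF_STR used above), and hDB / frontend_name are the globals from the frontend shown earlier in this thread; this is not code from the actual frontend.

// hypothetical macro describing only the Settings subtree
V1740_SETTINGS_STR(v1740_settings_str);

INT frontend_init()
{
   INT status;

   cm_get_experiment_database(&hDB, NULL);

   // create only the Settings subtree, which no client holds open
   status = db_create_record(hDB, 0, "/Equipment/CAEN_V1740/Settings",
                             v1740_settings_str);
   if (status != DB_SUCCESS) {
      cm_msg(MERROR, frontend_name,
             "db_create_record for Settings returned %d", status);
      return status;
   }

   return SUCCESS;
}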

Best regards,
Stefan
Entry  26 May 2014, Clemens Sauerzopf, Forum, Running a frontend on Arduino Yun 
Hello,

I'm trying to get a frontend running on an Arduino Yun single-board computer
(the CPU is an Atheros AR9331 and the OS is a Linux derivative,
http://arduino.cc/en/Main/ArduinoBoardYun )

The idea is to use this device for some slow control for our experiment (ASACUSA
antihydrogen). We are using midas as the main DAQ system and we would like to
integrate the slow control with these small boards. My question is: how can I
compile the midas library with the OpenWrt cross-compiler? The system disk space
is very limited (6 MB), therefore I don't want to have mysql, zlib and so on.
Other software can be stored on an SD card.

In the end all I would need is to create hotlinks to the ODB on our server
to get and report the current and desired values.

Do you have any suggestions on how to realize something like that?

Thanks!
    Reply  26 May 2014, Konstantin Olchanski, Forum, Running a frontend on Arduino Yun 
> I'm trying to get a frontend running on an arduino yun single board computer
> (cpu is Atheros AR9331 and OS is a linux derivate
> http://arduino.cc/en/Main/ArduinoBoardYun )

What you want to do should be possible.

Here, the smallest machine we used to run a MIDAS frontend was a 300MHz PowerPC processor inside a 
Virtex4 FPGA with 256 Mbytes of RAM. Looks like your machine is a 400MHz MIPS with 64 Mbytes of RAM 
so there should be enough hardware available to run a MIDAS frontend under Linux.

One source of trouble could be if your MIPS CPU is running in big-endian mode (MIPS can do either big-
endian or little-endian). MIDAS supports big-endian frontends connecting to little-endian x86 PC hosts, 
but with big-endian machines getting less common, this code does not get much testing. If you run into 
trouble with this, please let us know and we will fix it for you.

> The idea is to use this device for some slow control for our experiment (ASACUSA
> Antihydrogen) we are using midas as main DAQ system and we would like to
> integrate the slow control with this small boards.

> My question is: How can I compile the midas library with the openwrt crosscompiler?

In the MIDAS Makefile, look for the "crosscompile" target which we use to cross-build MIDAS for our 
PowerPC target using the regular GCC cross compiler chain. If you have very new MIDAS, you will also see 
some make targets for ARM Linux machines, also using GCC cross compilers.

> the system discspace is very limited (6 MB) therefore I don't want to have mysql, zlib an so on.

The MIDAS Makefile "crosscompile" target builds a very minimalistic version of MIDAS - no mysql, no sqlite, etc. 
are required for the MIDAS libraries and frontend. zlib may be required but it is not used by frontend 
code, so you may try to disable it.

If that is still too big, there is a possibility of building a super-minimal version of MIDAS just for running 
cross-compiled frontends. We use this function to build MIDAS for VxWorks. If you want to try that, I 
think it is not in the main Makefile, but in the VxWorks Makefile. Let me know if you want this and I can 
probably restore this function into the main Makefile fairly quickly.

> Do you have any suggestions on how to realize something like that?

1) cross compile MIDAS (see  the Makefile "make crosscompile" target)
2) cross compile your frontend
3) run it, with luck, it will fit into your 64 Mbytes of RAM

If you run into problems, please post them here (so other people can see the problems and the solutions)

K.O.
       Reply  27 May 2014, Clemens Sauerzopf, Forum, Running a frontend on Arduino Yun 
Ok, I'm currently trying to get things running. Setting up a cross-compiler toolchain for the Arduino Yun is fairly
easy, just follow the tutorial on the OpenWrt webpage.

The main problem is that OpenWrt uses the uClibc library instead of glibc, and this produces lots of difficulties. The first
one is that the build of the shared library complains about symbol name mismatches. I guess this can be
fixed somehow, but I won't use the midas shared library, so I just disabled it in the Makefile.

The next problem is the backtrace functions that are used within system.c; the functions backtrace and
backtrace_symbols are only available in glibc. As a quick fix I just changed the #ifdef directive so that this
code is not built.
 

There is a trickier problem: the compiler complains about mismatched function definitions:

In file included from include/midasinc.h:17:0,
                 from include/msystem.h:35,
                 from src/sequencer.cxx:13:
/home/clemens/arduino/openwrt-yun/build_dir/toolchain-mips_r2_gcc-4.6-linaro_uClibc-0.9.33.2/uClibc-0.9.33.2/include/string.h:495:41:
error: declaration of 'size_t strlcat(char*, const char*, size_t) throw ()' has a different exception specifier
include/midas.h:1955:17: error: from previous declaration 'size_t strlcat(char*, const char*, size_t)'
/home/clemens/arduino/openwrt-yun/build_dir/toolchain-mips_r2_gcc-4.6-linaro_uClibc-0.9.33.2/uClibc-0.9.33.2/include/string.h:498:41:
error: declaration of 'size_t strlcpy(char*, const char*, size_t) throw ()' has a different exception specifier
include/midas.h:1954:17: error: from previous declaration 'size_t strlcpy(char*, const char*, size_t)'

This can be solved by editing the midas.h file:

size_t EXPRT strlcpy(char *dst, const char *src, size_t size);
->
size_t EXPRT strlcpy(char *dst, const char *src, size_t size) __THROW __nonnull ((1, 2));

and 

size_t EXPRT strlcat(char *dst, const char *src, size_t size);
->
size_t EXPRT strlcat(char *dst, const char *src, size_t size) __THROW __nonnull ((1, 2));

the same trick has to be done in ../mxml/strlcpy.h

After these changes midas compiles with the cross-compiler and the resulting programs are executable on the Arduino
Yun. I'll report back once I get my frontend to run and connect to the midas server.
          Reply  27 May 2014, Konstantin Olchanski, Forum, Running a frontend on Arduino Yun 
> Ok, I'm currently trying to get things running, setting up a crosscompiler toolchain for the Arduino Yun is fairly
> easy, just follow the tutorial on the  OpenWrt webpage.
> 
> The main problem is that openwrt uses the uClibc library instead of glibc this produces lots of difficulties
>

Okey, I see. I do not think we used uClibc with MIDAS yet.

>
> one is that building of the shared library is complaining about symbol name mismatches, but I guess this can be
> fixed somehow, I wont use the midas-shared library, therefore I just disabled it in the Makefile. 
> 

The shared library is generally not used. The Makefile builds it as a convenience for things like pymidas, etc.

> 
> The next problem is the backtrace functions tjhat are used within system.c, the functions backtrace and
> backtrace_symbols are only available in glibc for a quick fix I just changed the #ifdef directive in a way that this
> code is not built.
>

Yes. They should probably be behind an #ifdef GLIBC (whatever the GLIBC identifier is)

> 
> There is a more tricky problem, the compiler complains about mismatched function defintions:
> 
> error: declaration of 'size_t strlcat(char*, const char*, size_t) throw ()' has a different exception specifier
> error: declaration of 'size_t strlcpy(char*, const char*, size_t) throw ()' has a different exception specifier
> 
> This can be solved by editing the midas.h file:
> size_t EXPRT strlcpy(char *dst, const char *src, size_t size); -> size_t EXPRT strlcpy(char *dst, const char *src,
> size_t size) __THROW __nonnull ((1, 2));
> 

No need to edit anything, this is controlled by NEED_STRLCPY in the Makefile - to enable our own strlcpy on systems that do not provide it (hello, GLIBC!)

> 
> After changing this midas compiles with the crosscompiler and the resulting programs are executable on the Arduino
> Yun. I'll report back if I got my frontend to run and connect to the midas server.

Congratulations!

K.O.
             Reply  28 May 2014, Clemens Sauerzopf, Forum, Running a frontend on Arduino Yun 
Thank you very much for your input, it finally works. I succeeded in cross-compiling the frontend and running it on the Arduino Yun. The 64 MB RAM is more than
enough to run the mserver and a frontend and connect to a remote midas server over ethernet or wifi. 

Just for reference, if someone tries something similar: to directly access the serial interface between the Linux processor and the Atmel processor it
is required to comment out a line in /etc/inittab: #ttyATH0::askfirst:/bin/ash --login
This line starts a shell on the serial connection; by preventing this it is possible to run more or less unmodified code (the serial interface needs to be
Serial1) on the Atmel side and use the Linux processor as a slow control PC.

Thanks again for your help!
                Reply  24 Oct 2014, Clemens Sauerzopf, Forum, Running a frontend on Arduino Yun 
Hello,

I'm currently trying to create a midas bank for a basic temperature reading from the Arduino Yun, but when creating the bank the frontend crashes with a segfault. My
code currently looks like this:

INT read_event(char *pevent, INT off)
{
  WORD *data;
  //printf("before init\n");
  bk_init(pevent);
  //printf("after init\n");
  bk_create(pevent, "TEM0", TID_WORD, data); // <= we are dieing at this line
  //printf("after create\n");

  bk_close(pevent, data);

  return bk_size(pevent);
}

Does anyone have an idea how to track this problem down? Running a debugger is a little bit tricky on this processor...

Thanks!
                   Reply  24 Oct 2014, Stefan Ritt, Forum, Running a frontend on Arduino Yun 
> Hello,
> 
> I'm currently trying to create a midas bank for basic temperature reading from the Arduino Yun, but when creating a bank the frontend crashed with a segfault, my
> code currently looks like this:
> 
> INT read_event(char *pevent, INT off)
> {
>   WORD *data;
>   //printf("before init\n");
>   bk_init(pevent);
>   //printf("after init\n");
>   bk_create(pevent, "TEM0", TID_WORD, data); // <= we are dieing at this line
>   //printf("after create\n");
> 
>   bk_close(pevent, data);
> 
>   return bk_size(pevent);
> }
> 
> Does anyone have an Idea how to tackle this problem down? running a debugger is a little bit tricky on a this processor..
> 
> Thanks!

Two bugs:

bk_create(pevent, "TEMO0", TID_WORD, &data);

note the "&" in front of data. Then you have to increment the pointer for each byte you add to the bank:

  *data = <temp>;
  data++;
  bk_close(pevent, data);

this way the bk_close() function knows how much data you added to the bank.
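
Putting the two fixes together, a corrected version of the routine could look like the sketch below (read_temperature() is a placeholder for the actual sensor access, and the (void **) cast anticipates the corrected bk_create() prototype discussed further down in this thread):

INT read_event(char *pevent, INT off)
{
   WORD *data;

   bk_init(pevent);
   bk_create(pevent, "TEM0", TID_WORD, (void **)&data); // pass the address of the pointer

   *data++ = read_temperature();   // hypothetical helper returning one WORD

   bk_close(pevent, data);         // data now points just past the last word

   return bk_size(pevent);
}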

Cheers,
Stefan
                   Reply  24 Oct 2014, Konstantin Olchanski, Forum, Running a frontend on Arduino Yun 
> INT read_event(char *pevent, INT off)
> {
>   WORD *data;
>   bk_create(pevent, "TEM0", TID_WORD, data); // <= we are dieing at this line
> }

The declaration of bk_create() in midas.h is wrong:

void EXPRT bk_create(void *pbh, const char *name, WORD type, void *pdata);
should be
void EXPRT bk_create(void *pbh, const char *name, WORD type, void **pdata);

Notice the extra "*" in "void**pdata" to indicate that it takes a pointer to the pointer to the data.

With the correct definition, you should get a compile error (type mismatch).

With the wrong current definition, you should have gotten a warning about "use of uninitialized variable 'data'", but some compilers with some settings do not generate this warning.

As it is, without looking at an example (highly recommended) and reading documentation (do we even have a "frontend writing guide"?!?) you have
no way to tell if you should pass "data" or "&data" to bk_create().

Thank you for reporting this problem.

P.S. As for running on Arduino, for slow controls type application, any CPU and network speed should be okey,
but memory use is always a concern, so please speak up if you run into problems. We routinely run MIDAS frontends
on linux machines with 512M and 128M RAM (1GHz CPU, 100 and 1000 M/s ethernet).

K.O.
                      Reply  02 Nov 2014, Stefan Ritt, Forum, Running a frontend on Arduino Yun 
> With the correct definition, you should get a compile error (type mismatch).
> 
> With the wrong current definition, you should have gotten a warning about "use of uninitialized variable 'data'", but some compilers with some settings do not generate this warning.

I changed the declaration of the bk_create() function to take a void **pdata pointer, but that did not really help. Now I get a compiler error:

"Incompatible pointer type passing 'DWORD **' to parameter of type 'void **', so I need an explicit cast each time

bk_create(... (void **)&pdata);

But I think this is better than what we had before, so I will leave it. Please note that all front-ends using bk_create() need to be modified accordingly to suppress this warning.

/Stefan
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Hostile network scans against MIDAS RPC ports 
At CERN I see a large number of hostile network scans that seem to be injecting HTTP requests into the 
MIDAS RPC ports. So far, all these requests seem to be successfully rejected without crashing anything, but 
they do clog up midas.log.

The main problem here is that all MIDAS programs have at least one TCP socket open where they listen for 
RPC commands, such as "start of run", "please shutdown", etc. The port numbers of these sockets are 
randomized and that makes them difficult to protect with firewall rules (firewall rules prefer fixed port 
numbers).

Note that this is different from the hostile network scans that I first saw maybe 5 years ago, which 
affected the mserver main listener socket. Then, as a solution, I hardened the RPC receiver code against 
bad data (and happy to see that this hardening is still holding up) and implemented the mserver "-A" 
command switch to specify a list of permitted peers. Also mserver uses a fixed port number ("-p" switch) 
and is easy to protect with firewall rules.

Since these ports cannot be protected by OS means (firewall, etc), we have to protect them in MIDAS.

One solution is to reject all connections from unauthorized peers.

One way to do this is to implement the "-A" switch to explicitly list all permitted peers; this switch would 
have to be added to all long-running midas programs (mhttpd, mlogger, mfe.c, etc.). Not very practical, IMO.

Another way is to read the list of permitted peers from ODB, at startup time, or each time a new connection 
is made.

In the latter case, care needs to be taken to avoid deadlocks. For example remote programs that read ODB 
through the mserver may deadlock if the same mserver is the one trying to establish the RPC connection. 
Or if ODB is somehow locked.

NB - we already keep a list of permitted peers in ODB /Experiment/Security.
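
As a rough illustration of the "reject unauthorized peers" idea (this is not existing MIDAS code; the hard-coded allowed[] table stands in for whatever the ODB list or a "-A" switch would supply), the RPC listener could check the peer address right after accept():

#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static const char *allowed[] = { "127.0.0.1", "192.168.1.10", NULL };

/* return 1 if the connected peer on 'sock' is in the allowed list */
static int peer_is_allowed(int sock)
{
   struct sockaddr_in addr;
   socklen_t len = sizeof(addr);
   char peer[INET_ADDRSTRLEN];
   int i;

   if (getpeername(sock, (struct sockaddr *)&addr, &len) != 0)
      return 0;
   if (inet_ntop(AF_INET, &addr.sin_addr, peer, sizeof(peer)) == NULL)
      return 0;

   for (i = 0; allowed[i] != NULL; i++)
      if (strcmp(peer, allowed[i]) == 0)
         return 1;

   return 0;   /* unknown peer: the caller should close the socket and log the rejection */
}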

K.O.
    Reply  14 Oct 2014, Stefan Ritt, Bug Report, Hostile network scans against MIDAS RPC ports 
Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.

At PSI we fortunately do not have these network scans because PSI uses an institute-wide firewall. So you can connect from outside PSI to inside PSI only 
on certain well-defined ports (like SSH to certain machines). You can do the same in Alpha. Use one computer as a router with two network cards, where 
the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be 
accessed from outside, like port 8080 for mhttpd. This way you block everything except the things which are needed.

/Stefan
       Reply  16 Oct 2014, Konstantin Olchanski, Bug Report, Hostile network scans against MIDAS RPC ports 
> Doing this through the ODB seems ok to me. If the ODB cannot be accessed, you can fall back to no protection.
> 
> At PSI we fortunately do not have these network scans because PSI uses a institute-wide firewall.
>

Same here at TRIUMF, no problems with hostile network activity. I only see this trouble at CERN. Nominally CERN also has
everything behind the CERN firewall, which is why I tend to think that I am seeing network scans done by CERN security people,
or some badniks on the CERN local network (PC malware, etc).

> So you can connect from outside PSI to inside PSI only 
> on certain well-defined ports (like SSH to certain machines). You can do the same in Alpha. Use one computer as a router with two network cards, where 
> the DAQ network runs on the second card as a private network. Then program the routing tables in that gateway such that only certain ports can be 
> accessed from outside, like port 8080 to mhttpd. This way you block all except the things which are needed.

Yes, this is how we did it for DEAP at SNOLAB. No network trouble there.

But generically for MIDAS, I think we should have built-in capability for MIDAS to protect itself without reliance on OS-level means (local firewall)
or network-level means ("site firewalls").

Sometimes we have very small MIDAS installations, i.e. just one machine by itself, and such setups should be easy to secure -
it is too much work to set up an external firewall box just for one machine, and OS-level firewall rules sometimes conflict
with some OS services (i.e. NIS) (I am still waiting for the "NIS to LDAP migration for dummies" guide).

K.O.
          Reply  16 Oct 2014, Stefan Ritt, Bug Report, Hostile network scans against MIDAS RPC ports 
> Sometimes we have very small MIDAS installations, i.e. just one machine by itself, and such setups should be secure/secured easily -
> too much work to setup an external firewall box just for one machine and OS-level firewall rules sometimes conflict
> with some OS services (i.e. NIS) (I am still waiting for the "NIS to LDAP migration for dummies" guide).

I fully agree with you. So if you find time to implement this, I will be more than happy.

/Stefan
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Problem in mfe multithread equipments 
In the ALPHA experiment at CERN I found a problem in the mfe.c handling of multithreaded equipments. This problem was in 
some form introduced around May 2013 and around Aug 2013 (commit 
https://bitbucket.org/tmidas/midas/src/45984c35b4f7/src/mfe.c) (I hope I got it right).

The effect was very odd - if the event rate of a multithreaded equipment was more than 100 Hz, the event counters on the midas 
status page would not increment and the frontend would crash at the end of the run. Other than that, all the events from the 
multithreaded equipment seem to appear in the SYSTEM buffer and in the data file normally.

This happened: in mfe.c::receive_trigger_event() a loop was introduced (previously,
there was no loop there - there was and still is a loop outside of receive_trigger_event()):

while (1) {
   wait 10 ms for an event
   process event, loop back
   if there is no event, exit
}

Obviously, if the event rate is more than 100 Hz (repetition period less than 10 ms),
the 10 ms wait will always return an event and we will never exit this loop.

So the mfe.c main loop is now stuck here and will not process any periodic activity
such as updating the equipment statistics (event counters on the midas status page)
or running periodic equipments in the same front end program.

The crash at the end of run will be caused by a timeout in responding to the "end of run" RPC call.

I have a patch in testing that solves this problem by restoring receive_trigger_event() to the original configuration, i.e. 
https://bitbucket.org/tmidas/midas/src/6899b96a4f8177d4af92035cd84aadf5a7cbc875/src/mfe.c?at=develop

K.O.
    Reply  14 Oct 2014, Konstantin Olchanski, Bug Report, Problem in mfe multithread equipments 
For my reference:
good version: https://bitbucket.org/tmidas/midas/src/6899b96a4f8177d4af92035cd84aadf5a7cbc875/src/mfe.c?at=develop
first breakage: https://bitbucket.org/tmidas/midas/src/c60259d9a244bdcd296a8c5c6ab0b91de27f9905/src/mfe.c?at=develop
second breakage: https://bitbucket.org/tmidas/midas/src/45984c35b4f7257f90515f29116dec6fb46f2ebc/src/mfe.c?at=develop

The "first breakage" may actually be okey, because there the badnik loop loops over ring buffers, not infinite. But I cannot test it anymore.
K.O.
    Reply  15 Oct 2014, Stefan Ritt, Bug Report, Problem in mfe multithread equipments 
You are absolutely correct, the code is certainly wrong. It looks to me like the 

while (rbh)

was put in there for some testing, and I forgot to remove it. The only thing I could imagine is that we want to have a while loop there for performance reasons. Like:

readout_start = ss_millitime();
while (ss_millitime() - readout_start < (DWORD) eq_info->period) {
  read event
  return 0 if no event found
}

You find this code also in the check_polled_events() routine. It ensures that the routine does not return after every single event, but after the period defined in the 
equipment (which is usually 100 ms for polled events). This way the code is more efficient, since we do not check for RPC calls between every event, but just 10 times 
per second. This way you can shovel more events through the system, while still being responsive to run stops.

I don't have any hardware right now to test this, so please put my code above into the routine and commit it if it works.

I notice also a difference in both codes concerning the read buffer handles. The old code uses rbh2, while the new (wrong) code uses rbh. In your case probably both 
handles are the same, so it works, but in other experiments, which might use several ring buffers, it will fail. So please use rbh instead of rbh2.

Let me know if it works for you, and if you see any difference in speed between the versions with and without the while loop (actually you will see this only if your trigger 
rate maxes out the DAQ).

Cheers,
Stefan
       Reply  15 Oct 2014, Stefan Ritt, Bug Report, Problem in mfe multithread equipments 
Please disregard my previous posting, you don't need the while loop, since it's already in the scheduler (around line 2160, under /*---- send interrupt events ----*/). 

But now I remember the rationale behind it. The loop over the rb[i] is because in MEG I have n calibration threads, each one running on a separate CPU core. So the receive_trigger_event() routine has to collect events from all the 
threads, each of them having one ring buffer. In the process of implementing EQ_USER, I changed this somehow, and apparently broke the code by making the while() loop run forever if the event rate is over 100 Hz.

So for the moment please remove the while loop completely, and I will worry later about putting it back correctly when MEG starts again next year.

/Stefan
    Reply  16 Oct 2014, Stefan Ritt, Bug Report, Problem in mfe multithread equipments 
> while (1)
>    wait 10 ms for an event
>    process event, loop back
>    if there is no event, exit
> }


This code has been rewritten now and should work for event rates >100 Hz.

/Stefan
Entry  14 Oct 2014, Konstantin Olchanski, Bug Report, Problem with EQ_USER 
If you use EQ_USER in mfe.c and have multiple threads writing into the ring buffer, you will have a big 
problem - the thread locking in the ring buffer code only works for a single writer thread and a single 
reader thread.

Presently, it is not clear how to have multiple multithreaded equipments inside one frontend.

During the summer of 2013, code briefly existed in mfe.c to have an array of ring buffers so that each 
multithreaded equipment could write into its own buffer.

But this code has now been removed and mfe.c can only read from a single ring buffer; as I noted above, ring 
buffer locking requires that only a single thread writes into it.
K.O.
    Reply  15 Oct 2014, Stefan Ritt, Bug Report, Problem with EQ_USER 
Sure, each thread needs its own ring buffer for writing.

So I see that we need the multiple-ring-buffer readout scheme back even before MEG starts. So what you need is something like

for (i=0 ; rb[i] != 0 ; i++) {
  read event from rb[i];
}

as it was before. What I do not like is that rb is a global variable; we should rather use the encapsulation functions and extend get_event_rb() to 
get_event_rb(i) so you can have n ring buffers.

Give me one day, I will extend the current code to make it work again and to implement N threads.

Cheers,
Stefan
    Reply  16 Oct 2014, Stefan Ritt, Bug Report, Problem with EQ_USER 
I restructured the front-end code to enable multiple readout threads for EQ_USER equipment. Last summer I was definitively interrupted during 
that work and left it in a half-finished state, sorry for that.

The way it works now is illustrated in mtfe.c. You create N ring buffers and N threads via

   for (int i=0 ; i<N ; i++) {
      create_event_rb(i);
      ss_thread_create(trigger_thread, (void*)(PTYPE)i);
   }

then each readout thread accesses its own readout buffer

thread(...)
{
   index = (int)(PTYPE)param;
   signal_readout_thread_active(index, TRUE);
   rbh = get_event_rbh(index);
   
   while (is_readout_thread_enabled()) {
      ... read event and put it into ring buffer ...
   }

   signal_readout_thread_active(index, FALSE);
}

The is_readout_thread_enabled() and signal_readout_thread_active() functions are used by the framework to shut down the threads gracefully at the end 
of the program. This way each thread can close its hardware correctly.

Note that no other thread management is done by the framework. In the old days with interrupt equipment, the framework disabled interrupts 
when reading out periodic events, since that was necessary when using a single CAMAC crate for ADCs and scalers. This is obsolete now and not 
needed any longer. It is now the responsibility of the user code to resolve hardware access conflicts between different threads (like using a local 
mutex to access the same hardware). There is also no "readout when running" handling. If events should not be read out when the run is stopped, 
the readout thread has to check the run status, or better, the EOR routine should disable the hardware trigger and the BOR routine should re-enable 
it. The readout threads will then poll for new events and just go to sleep if nothing is there.
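
To make the sketch above a bit more concrete, here is roughly what the body of one readout thread could look like, filling its own ring buffer with midas-format events. poll_hardware() and read_hardware() are hypothetical user functions; the rest follows the mtfe.c pattern and is a sketch, not tested code.

INT trigger_thread(void *param)
{
   EVENT_HEADER *pevent;
   WORD *pdata;
   int index = (int)(PTYPE)param;
   int rbh, status;
   DWORD serial = 0;

   signal_readout_thread_active(index, TRUE);
   rbh = get_event_rbh(index);

   while (is_readout_thread_enabled()) {

      if (!poll_hardware(index)) {          // nothing to read: yield and retry
         ss_sleep(1);
         continue;
      }

      // obtain space in this thread's ring buffer (wait up to 100 ms)
      status = rb_get_wp(rbh, (void **)&pevent, 100);
      if (status == DB_TIMEOUT)
         continue;

      bm_compose_event(pevent, 1, 0, 0, serial++);
      bk_init32(pevent + 1);
      bk_create(pevent + 1, "HW00", TID_WORD, (void **)&pdata);
      pdata += read_hardware(index, pdata); // returns the number of words read
      bk_close(pevent + 1, pdata);
      pevent->data_size = bk_size(pevent + 1);

      // hand the event over to the framework
      rb_increment_wp(rbh, sizeof(EVENT_HEADER) + pevent->data_size);
   }

   signal_readout_thread_active(index, FALSE);
   return 0;
}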

I tested the mtfe.c program with 100 Hz and 1 MHz event rates on a dummy experiment (no hardware access) and it worked without problems.

Let me know if there is any issue left over.

/Stefan 
Entry  16 Jul 2014, Clemens Sauerzopf, Forum, CAEN V1742 midas driver midasdriver_v1742.tar.gz analyzerfunctions.tar.gz
Hello all, 

as discussed in the thread about interrupt-triggered readout
(https://midas.triumf.ca/elog/Midas/1016), I am sending you our driver for the CAEN
V1742 modules.

The code is separated into two different parts; the first is the real midas driver
(attachment 1).
Here the non-trivial part is reading the module's internal flash pages to get the
correction patterns for the DRS4 chips; this is not documented in the manual.

The functions to apply the correction patterns to the data are in the second
archive (attachment 2). I have to say this is C++ code, as we use it with rootana.

The driver, including the signal correction, was used for data taking in 2012 with
4 synchronized V1742 modules for the antihydrogen experiment of the ASACUSA
collaboration at CERN. We'll use it again this year.

I hope the archives contain all the necessary information; some parts were
distributed over various files.

Cheers,
Clemens

EDIT: the driver is based on the v1740 driver
    Reply  08 Sep 2014, Clemens Sauerzopf, Forum, CAEN V1742 midas driver 
Hello all,

As an addition to the driver functions I uploaded in this thread, I also have a
C++ class that handles everything for the V1742 modules and can be directly
integrated into a C++ frontend.

I would like to ask if you have a policy for user-supplied code like this? It's not a low-
level driver but a frontend module that reads out and controls the module, creates ODB
hotlinks and handles the bank creation and storing of the data.

Best regards,
Clemens

EDIT: the question is, would you like to have code like this collected somewhere, for
example in this forum, or would you prefer that I post a link to some online repository?
Entry  11 Jul 2014, Konstantin Olchanski, Info, MIDAS high speed test 
We have tested operation of MIDAS using a 10GigE network connection. Using a dummy frontend 
generating fake data, we can record MIDAS data to disk at at least 700 Mbytes/sec as reported by 
the MIDAS status page.

Two configurations were tested, both ran at at least 700 Mbytes/sec sustained:

1) MIDAS mhttpd, mserver, mlogger running on the disk server machine (mlogger writes to local 
disk), frontend running on remote machine (10GigE mserver connection).
2) MIDAS mhttpd, mserver, mlogger, frontend running on remote machine (mlogger writes data to 
an NFS-mounted disk over a 10GigE connection).

In addition, for configuration (2), I simulated online analysis reading fresh midas files at the same 
time as MIDAS writes new data. The resulting observation is that Linux seems to be giving main 
priority to disk write traffic (700 Mbytes/sec) with the remaining disk bandwidth given to read traffic 
(50-100Mbytes/sec). In other words, when running online data analysis on fresh data files, mlogger 
continues to run at full speed (analysis does not slow down data taking).

A few problems with MIDAS were observed during this test:

a) mlogger data compression using gzip-1 has to be turned off (limits data rate to about 
200Mbytes/sec). We plan to implement high speed LZO/LZ4 data compression that we expect to 
keep up with a 10GigE network interface.
b) CPU use by mserver and mlogger is rather high (about 40% CPU)
c) when writing to the NFS disk, mlogger has a pause of 1-2 seconds when closing and reopening 
subrun data files. To avoid an interruption in data taking, the SYSTEM event buffer has to be big 
enough to ride through this pause, but stock MIDAS limits the maximum size of event buffer to 1GB 
(too small), this can be easily increased to 2GB (almost big enough) and with some more work it can 
be increased to 4GB, but no more because the buffer length is a 32-bit integer.
d) when writing to the NFS disk, we also see periodic 3-5 second interruptions ("write operation took 
5123 ms") and we had one death of mlogger by a timeout of 60 sec.

Details of the hardware:

1) the disk server machine CPU is 3.4GHz Intel i7-4770, mobo is ASUS Z87 WS (10 SATA, 2xGigE), 
RAM is 32GB DDR3-1600.
2) disk array is 8x4TB Seagate ST4000VN000-1H4168 NAS disks RAID0 (striped) configuration, raw 
data read/write rate is around 1 GByte/sec, disks are directly attached to mobo (no raid card), linux 
software raid.
3) the frontend machine CPU is 3.7GHz Intel i7-4820, mobo is ASUS P9X79 WS, RAM is 32GB DDR3-
1600.
4) 10GigE network is Solarflare Communications SFC9120 (both machines) with a cross-over fiber 
cable (direct connection,no switches)
5) OS is up-to-date SL6.5 (both machines)

K.O.
    Reply  06 Aug 2014, Konstantin Olchanski, Info, MIDAS high speed test 
> We have tested operation of MIDAS using a 10GigE network connection. Using a dummy frontend 
> generating fake data, we can record MIDAS data to disk at at least 700 Mbytes/sec as reported by 
> the MIDAS status page.
>
> Details of the hardware:
> 
> 1) the disk server machine CPU is 3.4GHz Intel i7-4770, mobo is ASUS Z87 WS (10 SATA, 2xGigE), 
> RAM is 32GB DDR3-1600.
> 2) disk array is 8x4TB Seagate ST4000VN000-1H4168 NAS disks RAID0 (striped) configuration, raw 
> data read/write rate is around 1 GByte/sec, disks are directly attached to mobo (no raid card), linux 
> software raid.
>

These tests were done using a raid0 array (striped), which is not suitable for production use.

For production use, RAID5 and RAID6 are recommended. But their default configuration has severely reduced performance (50% of 
RAID0); this is because internally the raid driver issues disk read operations that compete against and severely slow down the disk write 
requests. This is easy to see with "iostat -x 1" - when writing to the raid array, there should be no reads from the disks. The following 
changes are required to achieve maximum performance:

# increase internal memory buffers - because "raid write" is always "read-modify-write",
# bigger buffers ensure that the reads are done from cache, not from physical disk
echo 32000 > /sys/block/md6/md/stripe_cache_size

# use an external bitmap - if the bitmap is internal, there is a large number of disk reads
# competing against writes; an external bitmap seems to help quite a bit
mdadm --grow --bitmap=/md6bitmap /dev/md6

With these settings, my RAID6 array can read and write at about 700-900 Mbytes/sec - this is comparable to RAID0 (minus 2 disks).

With this, I repeated the MIDAS performance tests - (but without 10GigE) - MIDAS can write 700 Mbytes/sec of fake data to a local 
RAID6 data array. (hardware configuration is listed above).

K.O.
Entry  10 Jul 2014, Clemens Sauerzopf, Forum, Adding Interrupt handling to SIS3100 driver 
Hello,

we are using the Struck SIS 3100 VME interface for our experiment, but the midas
driver doesn't have interrupt control integrated. Previously we were happy with
just periodic readout, but our requirements have changed so I thought I could
just implement this as there is a demo program provided by Struck on how to use
their driver with interrupts.

Could you recommend an existing midas driver that has a good implementation of
the midas interrupt functions (mvme_interrupt_*) just for me to use as a guideline?

Best regards,
Clemens Sauerzopf
    Reply  11 Jul 2014, Pierre-Andre Amaudruz, Forum, Adding Interrupt handling to SIS3100 driver 
> Hello,
> 
> we are using the Struck SIS 3100 VME interface for our experiment, but the midas
> driver doesn't have interrupt control integrated. Previously we were happy with
> just periodic readout, but our requirements have changed so I thought I could
> just implement this as there is a demo program provided by Struck on how to use
> their driver with interrupts.
> 
> Could you recommend an existing midas driver that has a good implementation of
> the midas interrupt functions (mvme_interrupt_*) just for me too use as a guideline?
> 
> Best regards,
> Clemens Sauerzopf

Hi Clemens,

We did have interrupt handling at some point under VxWorks and later with Linux, but it 
has always been a challenge.
As you may have found, the current frontend (mfe.c) still has some code for that purpose. 
But I wouldn't guarantee that recent development related to multi-threading didn't 
affect the expected interrupt operation (it has not been tested).

Nowadays, I would suggest that you encapsulate your interrupt handling function, based 
on the provided software, in an independent thread started by a standard midas frontend. 
While the main frontend task could operate a periodic equipment as you've done so far, a 
polling equipment would poll on the data availability from the ring buffer. The readout 
function would compose the appropriate data bank.

This method has the advantage of decoupling all the interrupt timing/restriction related 
issues from midas and running a conventional frontend. The ring buffer functions are part of 
midas (rb_...()).
An example of multi-threading can be found in examples/mtfe, which includes the use of the 
ring buffer as well.
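
To make the suggested structure concrete, here is a rough sketch of such an interrupt thread. sis3100_wait_irq() and sis3100_read_event() are placeholder names for the vendor driver calls, the run/shutdown handling is reduced to a simple flag, and the whole thing is a sketch rather than tested code:

volatile int irq_thread_stop = 0;    // set by the frontend at shutdown

INT irq_thread(void *param)
{
   EVENT_HEADER *pevent;
   WORD *pdata;
   int rbh = *(int *)param;          // ring buffer handle created with rb_create()
   DWORD serial = 0;

   while (!irq_thread_stop) {

      if (sis3100_wait_irq(100) <= 0)             // wait up to 100 ms for an interrupt
         continue;

      if (rb_get_wp(rbh, (void **)&pevent, 100) == DB_TIMEOUT)
         continue;                                // ring buffer full, retry

      bm_compose_event(pevent, 1, 0, 0, serial++);
      bk_init32(pevent + 1);
      bk_create(pevent + 1, "VME0", TID_WORD, (void **)&pdata);
      pdata += sis3100_read_event(pdata);         // returns the number of words read
      bk_close(pevent + 1, pdata);
      pevent->data_size = bk_size(pevent + 1);

      rb_increment_wp(rbh, sizeof(EVENT_HEADER) + pevent->data_size);
   }

   return 0;
}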

Cheers, PAA
       Reply  14 Jul 2014, Clemens Sauerzopf, Forum, Adding Interrupt handling to SIS3100 driver 
Hi Pierre-Andre,

thanks for your comments. If I understand you correctly, you are advising to separate the
triggering based on the interrupt signal from the actual data readout. In principle, wouldn't
it also be possible to use the multi-threading equipment type to poll the trigger
signal? Then veto new triggers and start the readout of the different detector modules by a
"manual trigger"?

I'll check the example you've recommended to compare the different solutions.

By the way, I've written a driver for the CAEN V1742 VME module. It's working, but the code is
currently not in a "nice" state. But if you are interested I could provide the driver code.

Cheers,
Clemens

> > Hello,
> > 
> > we are using the Struck SIS 3100 VME interface for our experiment, but the midas
> > driver doesn't have interrupt control integrated. Previously we were happy with
> > just periodic readout, but our requirements have changed so I thought I could
> > just implement this as there is a demo program provided by Struck on how to use
> > their driver with interrupts.
> > 
> > Could you recommend an existing midas driver that has a good implementation of
> > the midas interrupt functions (mvme_interrupt_*) just for me too use as a guideline?
> > 
> > Best regards,
> > Clemens Sauerzopf
> 
> Hi Clemens,
> 
> We did have interrupt handling at some point under VxWorks and later with Linux, but it 
> has always been a challenge.
> As you may have found, the current frontend (mfe.c) still has some code to that purpose. 
> But I wouldn't guarantee that recent development related to multi-threading didn't 
> affect the expected interrupt operation (not been tested).
> 
> Now-a-days, I would suggest that you encapsulate your interrupt handling function based 
> on the provided software into a independent thread started by a standard midas frontend. 
> While the main frontend task could operate a periodic equipment as you've done so far, a 
> polling equipment would poll on the data availability from the ring buffer. The readout 
> function would compose the appropriate data bank.
> 
> This method has the advantage to decouple all the interrupt timing/restriction related 
> issues from midas and run a conventional frontend. The ring buffer functions are part of 
> midas (rb_...()).
> Example for multi-threading can be found in examples/mtfe which include the use of the 
> ring buffer as well.
> 
> Cheers, PAA
          Reply  15 Jul 2014, Pierre-Andre Amaudruz, Forum, Adding Interrupt handling to SIS3100 driver 
Hello Clemens,

The hardware readout is triggered by the interrupt within this thread. The main thread polls on
data availability (from the rb) to filter/compose the frontend event.
In a similar multi-threaded implementation presently used in a dark matter experiment, we start
as many threads as necessary to constantly poll on the hardware for "data fragment" collection.
The event composition is done in the main thread through polling on the RBs.

Depending on the trigger rate and readout time, we can afford to analyze the data fragments at
the thread level and add computed/summary information to the ring buffer on an event-by-event
basis. This facilitates the overall event filtering happening later on in our event builder.

"Polling the trigger signal"? I don't understand. You can poll on the trigger condition, but
then you don't need an interrupt.

The original Midas interrupt implementation was to let the interrupt function set an acknowledge
flag which is picked up by the standard midas polling function (user code) to trigger the
readout. This method ensures minimal time spent in the IRQ and works fine for a single thread.
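
For illustration, a minimal sketch of that flag-based scheme (names are illustrative,
not actual driver code):

static volatile int lam_flag = 0;

/* attached to the VME interrupt: do as little as possible inside the IRQ */
void interrupt_routine(void)
{
   lam_flag = 1;
}

/* standard midas poll function: picks up the acknowledge flag */
INT poll_event(INT source, INT count, BOOL test)
{
   int i;
   for (i = 0; i < count; i++) {
      if (lam_flag && !test) {
         lam_flag = 0;          /* acknowledge and trigger the readout */
         return TRUE;
      }
   }
   return FALSE;
}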

Regarding the CAEN V1742, we do have a VME driver for it, but it hasn't been added to
Midas yet (it is quite recent); please don't hesitate to send us a copy.

Cheers, PAA

 

> Hi Pierre-Andre,
> 
> thanks for your comments. If I understand you correctly you are advising to separate the
> triggering based on the interrupt signal and the actual data readout. In principal wouldn't
> it be also possible to facilitate the multi-threading equipment type to poll the trigger
> signal? Then veto new triggers and start the readout of the different detector modules by a
> "manual trigger" ?
> 
> I'll check the example you've recommended to compare the different solutions.
> 
> By the way I've written a driver for the CAEN V1742 VME module, it's working but the code is
> currently not in a "nice" state. but if you are interested I could provide the driver code.
> 
> Cheers,
> Clemens
> 
> > > Hello,
> > > 
> > > we are using the Struck SIS 3100 VME interface for our experiment, but the midas
> > > driver doesn't have interrupt control integrated. Previously we were happy with
> > > just periodic readout, but our requirements have changed so I thought I could
> > > just implement this as there is a demo program provided by Struck on how to use
> > > their driver with interrupts.
> > > 
> > > Could you recommend an existing midas driver that has a good implementation of
> > > the midas interrupt functions (mvme_interrupt_*) just for me too use as a guideline?
> > > 
> > > Best regards,
> > > Clemens Sauerzopf
> > 
> > Hi Clemens,
> > 
> > We did have interrupt handling at some point under VxWorks and later with Linux, but it 
> > has always been a challenge.
> > As you may have found, the current frontend (mfe.c) still has some code to that purpose. 
> > But I wouldn't guarantee that recent development related to multi-threading didn't 
> > affect the expected interrupt operation (not been tested).
> > 
> > Now-a-days, I would suggest that you encapsulate your interrupt handling function based 
> > on the provided software into a independent thread started by a standard midas frontend. 
> > While the main frontend task could operate a periodic equipment as you've done so far, a 
> > polling equipment would poll on the data availability from the ring buffer. The readout 
> > function would compose the appropriate data bank.
> > 
> > This method has the advantage to decouple all the interrupt timing/restriction related 
> > issues from midas and run a conventional frontend. The ring buffer functions are part of 
> > midas (rb_...()).
> > Example for multi-threading can be found in examples/mtfe which include the use of the 
> > ring buffer as well.
> > 
> > Cheers, PAA
             Reply  06 Aug 2014, Clemens Sauerzopf, Forum, Adding Interrupt handling to SIS3100 driver sis3100.hhsis3100.hh
Hello Pierre-Andre,

thank you for your help with the interrupt handling. To close this case I'll
attach my interrupt
handling code for the SIS 3100 to this post as a reference. Maybe someone wants
to do something
similar in the future. 

I've decided to go for a C++ frontend, so it is a class that handles everything. The user only
has to provide a function pointer to the constructor that handles the interrupt bitmask. The
interrupt handling is done with a timed wait within a separate thread.
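
For reference, a generic sketch of such a timed wait in a worker thread (this is not the
attached sis3100.hh class; the 100 ms timeout, the callback type, and the way the condition
is signalled are illustrative assumptions):

#include <pthread.h>
#include <ctime>

static pthread_mutex_t irq_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  irq_cond  = PTHREAD_COND_INITIALIZER;   /* signalled on interrupt */

void *irq_wait_thread(void *arg)
{
   void (*handler)(unsigned int) = (void (*)(unsigned int)) arg;  /* user callback */

   for (;;) {
      struct timespec ts;
      clock_gettime(CLOCK_REALTIME, &ts);
      ts.tv_nsec += 100 * 1000000L;                /* wait at most 100 ms */
      if (ts.tv_nsec >= 1000000000L) {
         ts.tv_sec++;
         ts.tv_nsec -= 1000000000L;
      }

      pthread_mutex_lock(&irq_mutex);
      int status = pthread_cond_timedwait(&irq_cond, &irq_mutex, &ts);
      pthread_mutex_unlock(&irq_mutex);

      if (status == 0) {
         unsigned int mask = 0;
         /* ... read the interrupt bitmask from the SIS3100 here ... */
         handler(mask);                            /* hand it to the user callback */
      }
      /* on timeout just loop again; this gives a chance to check a stop flag */
   }
   return NULL;
}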

Cheers,
Clemens

> Hello Clemens,
> 
> The hardware readout is triggered by the interrupt within this thread. The
main thread poll on  
> the data availability (from the rb) to filter/compose the frontend event.
> In a similar multi-threaded implementation presently used in a dark matter
experiment we start 
> as many thread as necessary to constantly poll on the hardware for "data
fragment" collection.
> The event composition is done in the main thread through polling on the RBs.
> 
> Depending on the trigger rate and readout time, we can afford to analyze the
data fragment at 
> the thread level and add computed/summary information to the ring buffer on a
event-by-event 
> basis. This facilitate the overall event filtering happening later on in our
event builder. 
> 
> "polling the trigger signal?", I don't understand. You can poll on the trigger
condition but 
> then you don't need interrupt.
> 
> The original Midas interrupt implementation was to let the interrupt function
set a acknowledge 
> flag which is picked up by the standard midas polling function (user code) for
triggering the 
> readout. This method ensure a minimal time spent in the IRQ and works fine for
a single thread.
> 
> In regards of the CAEN V1742, we do have a VME driver for it, but it hasn't
been added to the 
> Midas yet (quite recent), but please don't hesitate to send us a copy.
> 
> Cheers, PAA
> 
Entry  07 Jul 2014, Ryu Sawada, Bug Report, mhist does not show history when -s option is used 
When I use the -s option of mhist, it does not show any history, for example:
mhist -s 140705 -p 140707 -e "HV"

If I remove the following line:
diff --git a/utils/mhist.cxx b/utils/mhist.cxx
index 930de3b..10cc6ad 100755
--- a/utils/mhist.cxx
+++ b/utils/mhist.cxx
@@ -652,7 +652,6 @@ int main(int argc, char *argv[])
             else if (strncmp(argv[i], "-s", 2) == 0) {
                strcpy(start_name, argv[++i]);
                start_time = convert_time(argv[i]);
-               do_hst_file = true;
             } else if (strncmp(argv[i], "-p", 2) == 0)
                end_time = convert_time(argv[++i]);
             else if (strncmp(argv[i], "-t", 2) == 0)

It works.

Ryu Sawada
Entry  06 Jun 2014, Alexey Kalinin, Forum, problem with writing data on disk 
Hello,
Our experiment is based on the MIDAS 2.x DAQ.
I'm using several identical frontend-%d, with only the lam source & event id changed,
running on 2 computers (~3 frontends per computer).
Each receives about 10k events (Max_SIZE = 8*1024, but usually less than
sizeof(DWORD)*400) per 7 sec.
With no mlogger running it works just fine, but when I start mlogger (on a third
computer with mserver running)... looking at the ethernet statistics graph, the first 2-3 spills
go well, with one peak per 7 sec, then everything becomes erratic and crashes
(mlogger and frontends).
I tried to increase the SYSTEM buffer and restart everything. What I saw was that the logger
writes only half of the events received from the frontends; it stays running for
a while (~15 minutes). If I push the STOP button before it crashes, mlogger continues
writing data to disk for quite some time.
I will check the HDD for bad sectors, but maybe there is an easy
way to fix this problem and I did something wrong.
The structure of the frontend has code like this (EQ_POLLED, poll for 500):

INT frontend_loop()
{
   /* read a big buffer with ~10k events from the hardware */
   ...
   bufferread = TRUE;
   return SUCCESS;
}

INT poll_event(INT source, INT count, BOOL test)
{
   INT i, lam = 0;
   for (i = 0; i < count; i++) {
      if (bufferread)
         lam = 1;
      if (!test)
         return lam;
   }
   return 0;
}

INT read_trigger(char *pevent, INT off)
{
   bk_init32(pevent);
   /* fill the event from the buffer until the current word != 0xffffffff */
   ...
   if (currentposition + 2 > buffer_size)
      bufferread = FALSE;
   return bk_size(pevent);
}
Help and suggestions would be appreciated.
Thanks, Alexey.
    Reply  16 Jun 2014, Alexey Kalinin, Forum, problem with writing data on disk 
Hello, once again.
What I found is that when I tried to stop the run, mlogger kept working and writing some
data, which I'm sure is not right, because the frontends are in the stopped state
(for example, each of the 3 frontends got 50k events while mlogger showed 120k; the Stop
button was pushed, but the data in the .mid file grew to more than 150k-300k events).
And it continues writing until it crashes at the default waiting period of 10 s.
       Reply  18 Jun 2014, Alexey Kalinin, Forum, problem with writing data on disk 39.png
Hello, 
I'm in despair.
I removed everything from the computer with mserver and reinstalled the system and midas.
Then I tried to run the tutorial example.
Often the run did not stop when pushing the STOP button (mlogger got stuck; odbedit stop
works).
After the first START button push, the number of events taken by the frontend equals the
mlogger events written. On the next run (without restarting mlogger), mlogger doubles the
number of events taken by the frontend (see attachment). Restarting mlogger fixes this
double counting.
What have I done wrong?
Entry  27 May 2014, Scott Oser, Suggestion, Saving ODB values in a sequencer script 
I have a possibly simple feature request for the MIDAS sequencer.  It would be
helpful to be able to save an ODB key's value to a variable, for later use, and
would be the analogue of the ODBSET command.  I had in mind an application where
a user wants to temporarily change some settings in the ODB, then restore the
ODB to its original values.  Maybe something like an ODBRead command:

<ODBRead path="/Path/ODBkey">varname</ODBRead>
<ODBSet path="/Path/ODBkey">0</ODBSet> 
<Wait for="events">3000</Wait>
<ODBSet path="/Path/ODBkey">$varname</ODBSet> 

(In which the key's value is saved to variable varname, then later written back
to the ODB.)

I'm open to other suggestions for simple ways to do this through the sequencer.

Thanks! 
    Reply  12 Jun 2014, Stefan Ritt, Suggestion, Saving ODB values in a sequencer script 
> I have a possibly simple feature request for the MIDAS sequencer.  It would be
> helpful to be able to save an ODB key's value to a variable, for later use, and
> would be the analogue of the ODBSET command.  I had in mind an application where
> a user wants to temporarily change some settings in the ODB, then restore the
> ODB to its original values.  Maybe something like on ODBRead command:

I implemented your request, committed the change to GIT and updated the documentation. Now you can run 
things like:

ODBSET /System/tmp/test 1234 
ODBGET /System/tmp/test v 
MESSAGE $v 

(first you must create the key in the ODB manually).

Best regards,
Stefan
       Reply  12 Jun 2014, Scott Oser, Suggestion, Saving ODB values in a sequencer script 
Thanks, this seems very helpful, and we'll give it a try.

> > I have a possibly simple feature request for the MIDAS sequencer.  It would be
> > helpful to be able to save an ODB key's value to a variable, for later use, and
> > would be the analogue of the ODBSET command.  I had in mind an application where
> > a user wants to temporarily change some settings in the ODB, then restore the
> > ODB to its original values.  Maybe something like on ODBRead command:
> 
> I implemented your request, committed the changed to GIT and updated the documentation. Now you can run 
> things like:
> 
> ODBSET /System/tmp/test 1234 
> ODBGET /System/tmp/test v 
> MESSAGE $v 
> 
> (first you must create the key in the ODB manually).
> 
> Best regards,
> Stefan
Entry  26 May 2014, Dan Melconian, Suggestion, "Edit-on-end" would be nice 
We use the "Edit-on-start" and it's great.  But sometimes, something breaks
during the run, or you didn't realize you forgot to plug in a cable, or
whatever.  It'd be nice to have an "Edit-on-end" where you could prompt the user
to answer simple questions (like "Was this a good run?  [y/n]" or "Was the data
polarized?  [y/n]") and/or add a quick summary of what happened that run.


Thanks in advance,

Dan
    Reply  26 May 2014, Stefan Ritt, Suggestion, "Edit-on-end" would be nice 
We have similar demands, and we solve them in the following way:

We use a run database. In the simplest case, this can be a text file which gets written at the end of the run. The 
mlogger has a built-in SQL interface, so one can even keep that table in a SQL database. The per-run 
information then contains the run number, start/stop time, number of events, some run parameters and a "junk" 
flag. So if a run has a problem, one can mark it by accessing the database (or text file) and setting this 
flag. In many cases you see that a run had a problem not at the end of the run, but a bit later. You maybe realize 
that the last two or three runs had the problem. With the run database approach, you can flag any run as "junk" 
later, which we often need; an edit-on-end would not make this possible.
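
For the simplest text-file variant, a minimal sketch of appending one record per run at the
end of the run (the file name and fields are illustrative, not our actual implementation):

#include <stdio.h>
#include <time.h>

/* called at the end of each run, e.g. from an end-of-run hook */
void append_run_record(int run_number, int n_events, int junk)
{
   FILE *f = fopen("runlog.txt", "a");   /* one line per run */
   if (f == NULL)
      return;
   fprintf(f, "run %d  stop %ld  events %d  junk %d\n",
           run_number, (long) time(NULL), n_events, junk);
   fclose(f);
}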

So technically adding an edit-on-end is not a problem, but your life might be much easier if you use a run 
database as outlined above.

Best regards,
Stefan
Entry  28 Apr 2014, Tom Stuttard, Forum, Words written as zero in Midas bank 
Hi,

I am having some trouble with the data in my Midas bank. I am filling a midas
bank in my frontend (one of several in my system), and this bank is then being
added to the overall event by the event builder.

I check the data as it enters the bank, and also check again after I close the
bank in my frontend (using pdata's original value), and in both cases my data is
as I expect.

However, when I view the data in the .mid file (using mdump), there is a
problem. The correct number of words are there, and the values are correct up
until the 148th word. However, all subsequent words are 0.

I have also noticed that if I change my word size from 32bit (DWORD) to 16bit
(WORD), I observe the same behaviour except that now the first 296 words are
correct and all others are zero.

Note that other frontends in the system are not suffering this issue.

Does anyone have any ideas about how to solve this problem?
Entry  15 Apr 2014, Wes Gohn, Forum, C++11 error 
I am having some trouble creating a frontend that interacts with some libraries that use C++11. 

The flag I added to my MIDAS Makefile to get the C++11 part of the code to work is -std=c++0x. This 
causes an error in the equipment description in the frontend code.

The error I get is:

frontend.cpp:149: error: narrowing conversion of ‘-0x00000000000000001’ from ‘int’ to ‘WORD’ inside { 
}

This corresponds to the following in the MIDAS frontend code.

EQUIPMENT equipment[] = { 
  {
   "MWPC",                           /* equipment name */
   {1, TRIGGER_ALL,                  /* event ID, trigger mask */
     "BUF2",                        /* event buffer */
     EQ_POLLED | EQ_EB,              /* equipment type */
     LAM_SOURCE(0, 0xFFFFFF),        /* event source crate 0, all stations */
     "MIDAS",                        /* format */
     TRUE,                           /* enabled */
     RO_RUNNING,                     /* read only when running */
     1,                              /* poll for 1ms */
     0,                              /* stop run after this event limit */
     0,                              /* number of sub events */
     0,                              /* don't log history */
     "", "", "",},
    read_trigger_event,              /* readout routine */
  },

   {""}
};  <- this is line 149
#ifdef __cplusplus
}
#endif

Do you know a way to make this compatible with C++11?

Thanks!
    Reply  16 Apr 2014, Stefan Ritt, Forum, C++11 error 
> I am having some trouble creating a frontend that interacts with some libraries that use C++11. 
> 
> The flag I added to my MIDAS Makefile to get the C++11 part of the code to work is -std=c++0x. This 
> causes an error in the equipment description in the frontend code.
> 
> The error I get is:
> 
> frontend.cpp:149: error: narrowing conversion of ‘-0x00000000000000001’ from ‘int’ to ‘WORD’ inside { 
> }
> 
> This corresponds to the following in the MIDAS frontend code.
> 
> EQUIPMENT equipment[] = { 
>   {
>    "MWPC",                           /* equipment name */
>    {1, TRIGGER_ALL,                  /* event ID, trigger mask */
>      "BUF2",                        /* event buffer */
>      EQ_POLLED | EQ_EB,              /* equipment type */
>      LAM_SOURCE(0, 0xFFFFFF),        /* event source crate 0, all stations */
>      "MIDAS",                        /* format */
>      TRUE,                           /* enabled */
>      RO_RUNNING,                     /* read only when running */
>      1,                              /* poll for 1ms */
>      0,                              /* stop run after this event limit */
>      0,                              /* number of sub events */
>      0,                              /* don't log history */
>      "", "", "",},
>     read_trigger_event,              /* readout routine */
>   },
> 
>    {""}
> };  <- this is line 149
> #ifdef __cplusplus
> }
> #endif
> 
> Do you know a way to make this compatible with C++11?
> 
> Thanks!

Is this maybe related to the LAM_SOURCE(0, 0xFFFFFFFF), where 0xFFFFFFFF is -1? I guess you are not using CAMAC, so just replace the 
LAM_SOURCE(...) with zero and try again.

/Stefan
       Reply  16 Apr 2014, Wes Gohn, Forum, C++11 error 
> > I am having some trouble creating a frontend that interacts with some libraries that use C++11. 
> > 
> > The flag I added to my MIDAS Makefile to get the C++11 part of the code to work is -std=c++0x. This 
> > causes an error in the equipment description in the frontend code.
> > 
> > The error I get is:
> > 
> > frontend.cpp:149: error: narrowing conversion of ‘-0x00000000000000001’ from ‘int’ to ‘WORD’ inside { 
> > }
> > 
> > This corresponds to the following in the MIDAS frontend code.
> > 
> > EQUIPMENT equipment[] = { 
> >   {
> >    "MWPC",                           /* equipment name */
> >    {1, TRIGGER_ALL,                  /* event ID, trigger mask */
> >      "BUF2",                        /* event buffer */
> >      EQ_POLLED | EQ_EB,              /* equipment type */
> >      LAM_SOURCE(0, 0xFFFFFF),        /* event source crate 0, all stations */
> >      "MIDAS",                        /* format */
> >      TRUE,                           /* enabled */
> >      RO_RUNNING,                     /* read only when running */
> >      1,                              /* poll for 1ms */
> >      0,                              /* stop run after this event limit */
> >      0,                              /* number of sub events */
> >      0,                              /* don't log history */
> >      "", "", "",},
> >     read_trigger_event,              /* readout routine */
> >   },
> > 
> >    {""}
> > };  <- this is line 149
> > #ifdef __cplusplus
> > }
> > #endif
> > 
> > Do you know a way to make this compatible with C++11?
> > 
> > Thanks!
> 
> Is this maybe related to the LAM_SOURCE(0, 0xFFFFFFFF) where 0xFFFFFFFF is -1. I guess you are not using CAMAC, so just replace the 
> LAM_SOURCE(...) with zero and try again.
> 
> /Stefan

Thanks for the suggestion. It looks like it is instead the TRIGGER_ALL that is causing the problem. TRIGGER_ALL is defined as -1 in midas.h. If I replace TRIGGER_ALL with 0 in the 
frontend, it compiles, but if I use -1, I get the same error. I do not think that I want my trigger mask set to 0. Do you have a suggestion of how to get around this?

To answer the other questions, we are running on SLF6. I am building a frontend for a MWPC to read data from CAEN TDCs.
          Reply  17 Apr 2014, Stefan Ritt, Forum, C++11 error 
> Thanks for the suggestion. It looks like it is instead the TRIGGER_ALL that is causing the problem. TRIGGER_ALL is defined as -1 in midas.h. If I replace TRIGGER_ALL with 0 in the 
> frontend, it compiles, but if I use -1, I get the same error. I do not think that I want my trigger mask set to 0. Do you have a suggestion of how to get around this?

Ok, then it's clear. The trigger mask inside EQUIPMENT_INFO is defined as a 16-bit unsigned int (WORD). The -1 is a signed int, and C++11 does not allow this narrowing conversion to 16 bits inside a brace initializer.

Instead of TRIGGER_ALL, just try writing

(WORD)(-1)

or even

0xFFFF

That should do the job. Basically you want all 16 bits to be "1" if you do not use this feature.
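
For illustration, a small standalone example of the narrowing rule and the two workarounds
(assuming WORD is a 16-bit unsigned type, as in midas.h):

#include <cstdint>
typedef uint16_t WORD;

/* WORD bad[] = { -1 };  */        /* error with -std=c++0x: narrowing conversion */
WORD ok_cast[] = { (WORD)(-1) };   /* explicit cast, value 0xFFFF */
WORD ok_hex[]  = { 0xFFFF };       /* or spell out the 16-bit value directly */

int main()
{
   return (ok_cast[0] == ok_hex[0]) ? 0 : 1;   /* both are 0xFFFF */
}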

Best regards,
Stefan
Entry  17 Mar 2014, Zhi Li, Forum, [need help] simple example frontend for CAEN VX1721  
Dear guys,

I’m Zhi Li from China, and I’m now working on my graduation project, which now
basically gets stuck in the part of preparing the frontend for my FADC (CAEN
VX1721) using Midas.

Now the current set-up includes a VME crate, a CAEN v2718 (Optical Bridge and
Controller) and a CAEN VX1721(8ch 8bit 500MS/s Waveform digitizer). The hardware
set-up has been finished and I could capture the analog waveform using CAEN
software(wavedump). 

Could anyone please tell me what are the basic things to do for using MIDAS?
I’ve installed MIDAS in PC and it works well for CAMAC, but do I need any extra
hardware module on using VME crate? Also, how to download
Universe-II VME driver?

Thanks,
Li
    Reply  17 Mar 2014, Pierre-Andre Amaudruz, Forum, [need help] simple example frontend for CAEN VX1721  
Hi Li,

You mention that you've got wavedump working. This suggests that you have an A3818 
interface; can you confirm that?

If so, you can make a Midas frontend using the CAEN libraries to access your VX1721. I can provide you with a frontend example used for the V1720 or V1740. The 
modifications for the VX1721 shouldn't be too hard as most of the CAEN digitizers 
are fortunately based on a similar configuration mechanism.
If you have a Midas CAMAC frontend, the trick would be to replace the CAMAC calls with 
the appropriate CAENComm_xxx() calls for the equivalent functionality.
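
As a rough illustration only (the connection parameters, VME base address and register offset
below are placeholders and must be checked against the CAENComm manual and your A2818/V2718
setup):

#include <stdio.h>
#include <stdint.h>
#include "CAENComm.h"

int main(void)
{
   int handle;
   uint32_t data;
   const uint32_t REG_ADDR = 0x0000;            /* placeholder register offset */

   /* optical link 0, first node on the daisy chain, VME base address of the VX1721 */
   if (CAENComm_OpenDevice(CAENComm_OpticalLink, 0, 0, 0x32100000, &handle)
       != CAENComm_Success) {
      printf("cannot open the VX1721\n");
      return 1;
   }

   CAENComm_Read32(handle, REG_ADDR, &data);    /* read one board register */
   printf("register 0x%04x = 0x%08x\n", (unsigned) REG_ADDR, (unsigned) data);

   CAENComm_CloseDevice(handle);
   return 0;
}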

Can you remind me what hardware you have in your lab for acquisition?
CAMAC controller, VME controller, etc.?

Cheers, PAA

> Dear guys,
> 
> I’m Zhi Li from China, and I’m now working on my graduation project, which now
> basically gets stuck in the part of preparing the frontend for my FADC (CAEN
> VX1721) using Midas.
> 
> Now the current set-up includes a VME crate, a CAEN v2718 (Optical Bridge and
> Controller) and a CAEN VX1721(8ch 8bit 500MS/s Waveform digitizer). The hardware
> set-up has been finished and I could capture the analog waveform using CAEN
> software(wavedump). 
> 
> Could anyone please tell me what are the basic things to do for using MIDAS?
> I’ve installed MIDAS in PC and it works well for CAMAC, but do I need any extra
> hardware module on using VME crate? Also, how to download
> Universe-II VME driver?
> 
> Thanks,
> Li
       Reply  17 Mar 2014, Zhi Li, Forum, [need help] simple example frontend for CAEN VX1721  
Hi Pierre,

Thanks for your instructions. Before I run the wavedump software, I need to load a driver for the A2818, so I think I have the A2818 interface.

I would be grateful to have a look at the frontend example used for the V1720 (closer to the V1721, I suppose). Would you be so kind as to provide the Makefile as well? I
really want to have a compilable/executable DAQ frontend for VME modules, and to understand better how to link the CAEN library in the Makefile.

About the hardware currently used with the VME crate (A2818): there is a VME controller (V2718, CONET VME bridge) and an FADC (VX1721 waveform digitizer). I'm now preparing
this DAQ system to compare the relative quantum efficiency, timing resolution and single-photoelectron distribution of photomultipliers, and also to measure the decay time of cosmic muons and the
electron spectrum. I would humbly like to know your opinion on whether I need additional hardware to finish these measurements.

Thanks,
Li

> Hi Li,
> 
> You mention that you've got the wavedump working. It suggests that you have a A3818 
> interface, can you confirm that?
> 
> If so, you can make a Midas frontend using the CAEN libraries to access your VX1721. I can provide you with a frontend example used for the V1720 or V1740. The 
> modifications for the VX1721 shouldn't be too hard as most of the CAEN digitizers 
> are fortunately based on a similar configuration mechanism.
> If you have a Midas CAMAC frontend, the trick would be to replace the CAMAC calls by 
> the appropriate CAENComm_xxx() for the equivalent functionality.
> 
> Can you remind me what hardware do you have in your lab for acquisition?
> CAMAC controller, VME controller etc.
> 
> Cheers, PAA
> 
> > Dear guys,
> > 
> > I’m Zhi Li from China, and I’m now working on my graduation project, which now
> > basically gets stuck in the part of preparing the frontend for my FADC (CAEN
> > VX1721) using Midas.
> > 
> > Now the current set-up includes a VME crate, a CAEN v2718 (Optical Bridge and
> > Controller) and a CAEN VX1721(8ch 8bit 500MS/s Waveform digitizer). The hardware
> > set-up has been finished and I could capture the analog waveform using CAEN
> > software(wavedump). 
> > 
> > Could anyone please tell me what are the basic things to do for using MIDAS?
> > I’ve installed MIDAS in PC and it works well for CAMAC, but do I need any extra
> > hardware module on using VME crate? Also, how to download
> > Universe-II VME driver?
> > 
> > Thanks,
> > Li
Entry  12 Mar 2014, Andreas Suter, Info, Windows support droped? 
In the old SVN midas world it was typically the case that the Windows dll's and
exe's were ready to be used when checking out. I am not so sure this holds for
the current version, since when I use the packaged dll's and exe's (e.g.
odbedit.exe) I get the warning that this is running midas 2.0.0 while the current
version (on the linux server) is 2.1.

What does this mean?

1) A little bug in the packaged Windows part, but up-to-date dll's and exe's?
2) The dll's and exe's are no longer bundled at an up-to-date version?

If 2) is the case, I would like a hint on how to build midas under Windows
(Windows 7), since we still have a few Windows clients.
    Reply  14 Mar 2014, Konstantin Olchanski, Info, Windows support droped? 
> In the old SVN midas world it was typically such that the Windows dll's and
> exe's were ready to be used when checking out.

The Windows executables are no longer included in the midas git repository. Old versions are still available in 
the git repository - they got pulled in during conversion from svn.

One reason for removing them is that neither I, nor Pierre, nor Stefan has ready access to a Windows 
development environment, so we cannot keep Windows binaries up to date. Theoretically we could set up a 
Windows machine just for compiling MIDAS, but then there is the question of which Windows we should use and 
how much priority we should give it. I do not think there is any demand for MIDAS on Windows at TRIUMF.

(Personally, I think Windows is no longer a viable platform for any business use - with Microsoft focusing on 
"experiences", "tiles", touch screens, portable devices, and other gimmicks - rather than on providing a solid OS 
to get work done)

> I am not so sure this is the case
> for the current version, since when I use the packed dll's  and exe's (e.g.
> odbedit.exe) I get the warning that this is running midas 2.0.0 but the current
> version (on the linux server) is 2.1. What does this mean?

You can ignore this message. Stefan incremented the MIDAS version when we migrated to git, but
there are no changes to the MIDAS RPC mechanism and we are still fully compatible with old versions,
at least in the MIDAS RPC and in the mserver.

So tools like odbedit.exe should still work okay when connecting from Windows to MIDAS running on Linux or 
MacOS.

But old frontend programs may cause some trouble because the ODB layout changed somewhat with new things 
added to /eq/xxx/common. Simplest is to try, if it works, it works.

> 1) A little bug in the packed windows part, but up-to-date dll's and exe's?
> 2) The dll's and exe's are not bundled any more to up-to-date version?

Case (2) applies. Personally I do not have any capability to build Windows binaries. Same for Pierre and, I think, 
for Stefan.

> If 2) is the case, I would like to get a hint how to build midas under Windows
> (Windows 7), since we still have some few Windows clients.  

I do not think pre-built executables will ever return - the new way of things is to "cut-and-paste" the "git clone" 
command from a web page, type "make", and be done with it. If your OS does not have "git", "make", etc., you 
should switch to a real OS.

On the MIDAS software side, we have no problem with supporting Windows - same as on any other platform, 
please try to build and run it, report any problems, fixes, patches and improvements - we will commit them into 
the midas repository.

K.O.
       Reply  17 Mar 2014, Stefan Ritt, Info, Windows support droped? 
> The Windows executables are no longer included in the midas git repository. Old versions are still available in 
> the git repository - they got pulled in during conversion from svn.
> 
> One reason for removing them is that neither myself, nor Pierre, nor Stefan have ready access to a Windows 
> development environment and we cannot keep Windows binaries up to date. Theoretically we can setup a 
> Windows machine just for compiling MIDAS, but then there is a question of which Windows we should use and 
> how much priority we should put into it. I do not think there is any demand for MIDAS on Windows at TRIUMF.

I double-checked and can confirm that the executables in GIT are very old. So I tried to compile the current version for Windows. I found that I had to change lots 
of places (basically all the new files written by KO) to make it work again. It took me half a day, but now it should be fine.

I'm not sure if it's a good idea to keep .exe files in GIT, maybe we should remove them some day, but for the moment I updated the executables to the current 
version. Feedback welcome.

/Stefan
Entry  11 Mar 2014, Andreas Suter, Forum, mlogger problem 
I stumbled over a problem which I cannot pin point and would appreciate suggestions.

I set up an experiment, and all of a sudden I noticed the following behaviour.

I can start any number of frontends without any problems as long as mlogger is NOT running.
I can also start mlogger without any problems. However, as soon as I have started the mlogger, I cannot start anything else any more (including odbedit). I get the following assertion:
16:07:06 [Logger,INFO] Program Logger on host lem00 started
[local:nemu:S]/>q
[nemu@lem00 2014]$ odbedit -e nemu
odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
Aborted
This happens even if I stop all frontends, start only the mlogger and afterwards try to start odbedit.

I tried to see whether this is generic behaviour using a test experiment, but there I cannot reproduce it. It seems that there is either something wrong with the ODB, something wrong with hotlinks, ..., I don't know.

I would appreciate suggestions on how to pinpoint the issue.
    Reply  11 Mar 2014, Stefan Ritt, Forum, mlogger problem 

Andreas Suter wrote:
I stumbled over a problem which I cannot pin point and would appreciate suggestions.

I set up an experiment, and all of a sudden I noticed the following behaviour.

I can start any number of frontends without any problems as long as mlogger is NOT running.
I can also start mlogger without any problems. However, as soon as I started the mlogger, I cannot start anything else any more (including odbedit). I get the following assertion:
16:07:06 [Logger,INFO] Program Logger on host lem00 started
[local:nemu:S]/>q
[nemu@lem00 2014]$ odbedit -e nemu
odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
Aborted
This is even happening if I stop all frontends, start only the mlogger and afterwards try to start odbedit.

I tried to see if this is a generic feature on a test experiment, but there I cannot reproduce it. It seems that there is either something wrong with the ODB, something wrong with hotlinks, ..., I don't know.

I would appreciated suggestions how pin point the issue.


K.O. put that in: https://bitbucket.org/tmidas/midas/commits/9d7b7c83b275a2bd3c846c4f265ff7f5d53f3426

He should have a look at it.

Have you tried to rebuild your ODB from scratch? (Save it in XML, then delete .ODB.SHM, then load it again from XML.)

/Stefan
       Reply  11 Mar 2014, Andreas Suter, Forum, mlogger problem 

Stefan Ritt wrote:

Andreas Suter wrote:
I stumbled over a problem which I cannot pin point and would appreciate suggestions.

I set up an experiment, and all of a sudden I noticed the following behaviour.

I can start any number of frontends without any problems as long as mlogger is NOT running.
I can also start mlogger without any problems. However, as soon as I started the mlogger, I cannot start anything else any more (including odbedit). I get the following assertion:
16:07:06 [Logger,INFO] Program Logger on host lem00 started
[local:nemu:S]/>q
[nemu@lem00 2014]$ odbedit -e nemu
odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
Aborted
This is even happening if I stop all frontends, start only the mlogger and afterwards try to start odbedit.

I tried to see if this is a generic feature on a test experiment, but there I cannot reproduce it. It seems that there is either something wrong with the ODB, something wrong with hotlinks, ..., I don't know.

I would appreciated suggestions how pin point the issue.


K.O. put that in: https://bitbucket.org/tmidas/midas/commits/9d7b7c83b275a2bd3c846c4f265ff7f5d53f3426

He should have a look at it.

Have you tried to rebuild your ODB from scratch? (Save in XML, then delete .ODB.SHM, then load again form XML)?

/Stefan

Yes, I could recover the ODB by falling back to a previous dump. Still, I would like to know the exact meaning of the above assertion. It might help me understand the likely causes which result in the assertion.

/Andreas
    Reply  14 Mar 2014, Konstantin Olchanski, Forum, mlogger problem 
> I stumbled over a problem which I cannot pin point and would appreciate suggestions.
> 
> [nemu@lem00 2014]$ odbedit -e nemu
> odbedit: src/odb.c:753: db_update_open_record: Assertion `xkey->notify_count == pkey->notify_count' failed.
> Aborted

I think this is a real bug in MIDAS - I will have to take a look to figure out where this is coming from. At the 
least, if I cannot replace the assert with some corrective action, I may replace it with an error message.

I am glad you could recover by reloading odb.

K.O.