  2050   09 Dec 2020   Stefan Ritt   Forum   history and variables confusion
First, the writing of banks is completely independent of the history system. Banks go to the log file only, 
while the history is only linked to the "Variables" section in the ODB.
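
For example (the equipment and channel names here are just an illustration): for each array under
"Variables", the history uses the per-channel labels from the matching "Names" (or "Names <bank>") array
under "Settings", and the two arrays must have the same length:

[/Equipment/Power Supplies/Settings]
Names Input = STRING[2] :
[32] Supply 1 Voltage
[32] Supply 1 Current

[/Equipment/Power Supplies/Variables]
Input = FLOAT[2] :
[0] 2.5
[1] 0.1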

Second, it's advisable to group similar devices into one equipment. For example, if you have five power
supplies powering an experiment, you don't want five equipments Supply1, Supply2, ..., but only one
equipment "Power Supplies". In the frontend belonging to that equipment, you define a DEVICE_DRIVER list
with one entry for each power supply. If you interact with an MSCB device, there are some helper functions
which simplify the definition of the equipment and which I can send you privately. Your device driver
then looks a bit like the one attached.

If you cannot do that and absolutely want separate equipments, please post a complete ODB subtree of your
settings, and I can try to reproduce your problem.

Stefan

======================

DEVICE_DRIVER power_driver[] = {
   {"Power Supply 1", mscbdev, 0, NULL, DF_INPUT | DF_MULTITHREAD},
   {"Power Supply 2", mscbdev, 0, NULL, DF_INPUT | DF_MULTITHREAD},
   {"Power Supply 3", mscbdev, 0, NULL, DF_INPUT | DF_MULTITHREAD},
   {""}
};

...

INT frontend_init()
{
   mscb_define("mscbxxx.psi.ch", "Power Supplies", "Power Supply 1", power_driver, 1, 0, "Output 1", 0.1);
   mscb_define("mscbxxx.psi.ch", "Power Supplies", "Power Supply 1", power_driver, 1, 1, "Output 2", 0.1);
   ...
}

/*-- Function to define MSCB variables in a convenient way ---------*/

void mscb_define(const char *submaster, const char *equipment, const char *devname, 
                 DEVICE_DRIVER *driver, int address, unsigned char var_index, 
                 const char *name, double threshold)
{
   int i, dev_index, chn_index, chn_total;
   char str[256];
   float f_threshold;
   HNDLE hDB;

   cm_get_experiment_database(&hDB, NULL);

   if (submaster && submaster[0]) {
      sprintf(str, "/Equipment/%s/Settings/Devices/%s/Device", equipment, devname);
      db_set_value(hDB, 0, str, submaster, 32, 1, TID_STRING);
      sprintf(str, "/Equipment/%s/Settings/Devices/%s/Pwd", equipment, devname);
      db_set_value(hDB, 0, str, "meg", 32, 1, TID_STRING);
   }

   /* find device in device driver */
   for (dev_index=0 ; driver[dev_index].name[0] ; dev_index++)
      if (equal_ustring(driver[dev_index].name, devname))
         break;

   if (!driver[dev_index].name[0]) {
      cm_msg(MERROR, "mscb_define", "Device \"%s\" not present in device driver list", devname);
      return;
   }

   /* count total number of channels */
   for (i=chn_total=0 ; i<=dev_index ; i++)
      if (((driver[dev_index].flags & DF_INPUT) > 0 && (driver[i].flags & DF_INPUT)) ||
          ((driver[dev_index].flags & DF_OUTPUT) > 0 && (driver[i].flags & DF_OUTPUT)))
         chn_total += driver[i].channels;

   chn_index = driver[dev_index].channels;
   sprintf(str, "/Equipment/%s/Settings/Devices/%s/MSCB Address", equipment, devname);
   db_set_value_index(hDB, 0, str, &address, sizeof(int), chn_index, TID_INT, TRUE);
   sprintf(str, "/Equipment/%s/Settings/Devices/%s/MSCB Index", equipment, devname);
   db_set_value_index(hDB, 0, str, &var_index, sizeof(char), chn_index, TID_BYTE, TRUE);

   if (threshold != -1 && (driver[dev_index].flags & DF_INPUT) > 0) {
     sprintf(str, "/Equipment/%s/Settings/Update Threshold", equipment);
     f_threshold = (float) threshold;
     db_set_value_index(hDB, 0, str, &f_threshold, sizeof(float), chn_total, TID_FLOAT, TRUE);
   }

   if (name && name[0]) {
      sprintf(str, "/Equipment/%s/Settings/Names %s", equipment, devname);
      db_set_value_index(hDB, 0, str, name, 32, chn_total, TID_STRING, TRUE);
   }

   /* increment number of channels for this driver */
   driver[dev_index].channels++;
}
  2051   10 Dec 2020   Frederik Wauters   Forum   history and variables confusion
I wanted to have a C++-style driver, e.g. an instance of a "PowerSupply" class. This was not compatible with the list of DEVICE_DRIVER structs, which needs a C function entry point with variable arguments. 

Anyways, I attach my ODB. I believe the issue stands regardless of the specific design choice here. Setting the History Log flag copies the banks created to the "Variables" of every equipment initialized, leading to a mismatch between the names array and the variables. It can be solved by using Virtual history events instead of FE history events, but then the flag in the Equipment is confusing.  

Bank creation in readout function:

for(const auto& d: drivers)
{
...
...
  bk_create(pevent,bk_name, TID_FLOAT, (void **)&pdata);
...			
  std::vector<float> voltage = d->GetVoltage();
  std::vector<float> current = d->GetCurrent();
  for (size_t iChannel = 0; iChannel < voltage.size(); ++iChannel)
  {
    *pdata++ = voltage.at(iChannel);
    *pdata++ = current.at(iChannel);
  }
  bk_close(pevent, pdata);
}
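
For illustration, one way to keep the history labels in step with the values this loop writes is to set
one Settings name per value from the frontend, e.g. with db_set_value_index() as in the helper above. The
ODB key, label strings and nChannels below are only placeholders and assume midas.h is included:

   HNDLE hDB;
   cm_get_experiment_database(&hDB, NULL);
   /* key must match whatever name mlogger expects for this equipment ("Names" or "Names <bank>") */
   const char *names_key = "/Equipment/Genesys/Settings/Names";
   int index = 0;
   for (int iChannel = 0; iChannel < nChannels; iChannel++) {
      char label[32];
      sprintf(label, "Ch%d Voltage", iChannel);
      db_set_value_index(hDB, 0, names_key, label, 32, index++, TID_STRING, TRUE);
      sprintf(label, "Ch%d Current", iChannel);
      db_set_value_index(hDB, 0, names_key, label, 32, index++, TID_STRING, TRUE);
   }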
Attachment 1: genesys.odb
[/Equipment/Genesys/Statistics]
Events sent = DOUBLE : 2
Events per sec. = DOUBLE : 0.2583311805734952
kBytes per sec. = DOUBLE : 0.0268664427796435

[/Equipment/Genesys/Settings]
IP = STRING : [13] 192.168.0.24
NChannels = INT32 : 2
port = INT32 : 8003
Global Reset On FE Start = BOOL : n
Address = INT32[2] :
[0] 0
[1] 1
Identification Code = STRING[2] :
[64] TDK-LAMBDA,GH30-50,114B137-0007,G:02.109
[64] TDK-LAMBDA,GH30-50,114B137-0004,G:02.109
Names = STRING[2] :
[32] ch1
[32] ch2
Blink = BOOL[2] :
[0] n
[1] n
Enable = BOOL : n
reply timout = INT32 : 50
min reply = INT32 : 3
Read ESR = BOOL : n
ESR = INT32[2] :
[0] 0
[1] 0

[/Equipment/Genesys/Variables]
State = BOOL[2] :
[0] y
[1] n
Set State = BOOL[2] :
[0] y
[1] n
Voltage = FLOAT[2] :
[0] 1.997
[1] 0
Demand Voltage = FLOAT[2] :
[0] 2
[1] 0
Current = FLOAT[2] :
[0] 2.03
[1] 0
Current Limit = FLOAT[2] :
[0] 16
[1] 0
LG 0 = FLOAT[4] :
[0] 1.997
[1] 2.03
[2] 0
[3] 0
LH 0 = FLOAT[8] :
[0] 1
[1] 0.0005
[2] 0
[3] 0
[4] 0
[5] 0
[6] 0
[7] 0

[/Equipment/Genesys/Common]
Event ID = UINT16 : 121
Trigger mask = UINT16 : 0
Buffer = STRING : [32] SYSTEM
Type = INT32 : 1
Source = INT32 : 0
Format = STRING : [8] MIDAS
Enabled = BOOL : y
Read on = INT32 : 35
Period = INT32 : 10000
Event limit = DOUBLE : 0
Num subevents = UINT32 : 0
Log history = INT32 : 1
Frontend host = STRING : [32] localhost
Frontend name = STRING : [32] Power Frontend
Frontend file name = STRING : [256] /home/labor/online/backend_pc/midas_fe/power_control/power.cpp
Status = STRING : [256] Ok
Status color = STRING : [32] greenLight
Hidden = BOOL : n
Write cache size = INT32 : 100000

  2052   11 Dec 2020   Frederik Wauters   Forum   history and variables confusion
1. Ok, so calling the same readout functions from different equipments is just a bad idea, my bad; no blame on Midas for writing data from both banks to both ODB trees ...

2. One needs the same number of bank entries as the size of Settings/Names[]. Otherwise the "History Log" flag does not work. So just don't use "Names" but "Channel names" or something. 

" Second, it's advisable to group similar equipment into one. Like if you have five power supplies powering
and experiment, you don't want to have five equipments Supply1, Supply2, ..., but only one equipment
"Power Supplies".  "

It would be nice if this also worked with C++-style drivers, i.e. an instance of a class. I don't know how one would give an entry point to the "DEVICE_DRIVER" struct then.



> I wanted to have a c++ style driver, e.g. a instance of a "PowerSupply" class. This was not compatible with the list of DEVICE_DRIVER structs, with needs a C function entry point with variable arguments. 
> 
> Anyways, I attach my odb. I believe the issue stands regardless of the specific design choice here. Setting the History Log flag copies the banks created to the "Variables" of every equipments initialized, leading to a mismatch between the names array, and the variables. Can be solved by not using FE history events, but Virtual, but the flag in the Equipment is confusing.  
> 
> Bank creation in readout function:
> 
> for(const auto& d: drivers)
> {
> ...
> ...
>   bk_create(pevent,bk_name, TID_FLOAT, (void **)&pdata);
> ...			
>   std::vector<float> voltage = d->GetVoltage();
>   std::vector<float> current = d->GetCurrent();
>   for(channels)
>   {
>     *pdata++ = voltage.at(iChannel);
>     *pdata++ = current.at(iChannel);
>   }
>   bk_close(pevent, pdata);
> }
  2053   15 Dec 2020   Konstantin Olchanski   Forum   history and variables confusion
I think you are facing several problems:

a) mlogger does not clearly explain what history names will be used for which entries
in /eq/xxx/variables. "mlogger -v" almost does it, but we also need
"mlogger -v -n" to "show what you will do, but do not do it yet".

b) the mfe.c and device class driver structure is very dated and tries to "do c++ in c". If it works for you,
certainly use it, but if it confuses you (as it confuses me), it probably only takes a few lines of c++
to replace the whole thing (minus the actual device drivers, which are the meat of it).

I think today you face a choice:
- invest some time to understand the old device driver framework
- invest some time to do it "by hand" in c++, write your own device drivers (or use 3rd party drivers or snarf the "c" drivers from midas). Use the TMFE C++ frontend class if you go this route.

I would estimate that both choices are about the same amount of work.

K.O.


> 1. ok, so calling the same readout functions from different equipments is just a bad idea, my bad, no blame for Midas to write data from both bank to both odb trees ...
> 
> 2. One needs the same amount of bank entries as the size of settings/names[] . Otherwise the "History Log" flag does not work. So just don`t us "names" but "channel names" or something. 
> 
> " Second, it's advisable to group similar equipment into one. Like if you have five power supplies powering
> and experiment, you don't want to have five equipments Supply1, Supply2, ..., but only one equipment
> "Power Supplies".  "
> 
> It would be nice if this also works with c++ style drivers, i.e. a instance of a class. I don`t now how one would give an entry point to the "DEVICE_DRIVER" struct then.
> 
> 
> 
> > I wanted to have a c++ style driver, e.g. a instance of a "PowerSupply" class. This was not compatible with the list of DEVICE_DRIVER structs, with needs a C function entry point with variable arguments. 
> > 
> > Anyways, I attach my odb. I believe the issue stands regardless of the specific design choice here. Setting the History Log flag copies the banks created to the "Variables" of every equipments initialized, leading to a mismatch between the names array, and the variables. Can be solved by not using FE history events, but Virtual, but the flag in the Equipment is confusing.  
> > 
> > Bank creation in readout function:
> > 
> > for(const auto& d: drivers)
> > {
> > ...
> > ...
> >   bk_create(pevent,bk_name, TID_FLOAT, (void **)&pdata);
> > ...			
> >   std::vector<float> voltage = d->GetVoltage();
> >   std::vector<float> current = d->GetCurrent();
> >   for(channels)
> >   {
> >     *pdata++ = voltage.at(iChannel);
> >     *pdata++ = current.at(iChannel);
> >   }
> >   bk_close(pevent, pdata);
> > }
  2054   16 Dec 2020   Isaac Labrie Boulay   Forum   Issues building banks.
Hi all,

I'm currently trying to build events through doing block transfers. The worry was 
that organizing and packaging bank data into an array would produce too much dead 
time, causing too many missed events. Trying out that method, I'm running into all 
sorts of issues, such as unaligned transfers where the QDC events are unaligned, or 
improperly aligned banks. It's giving me a headache.

My question is, if I were to revert to simple 32-bit read cycles and use 
the fevme.cxx template's method of organizing data before sending it to the 
buffer, what kind of dead time should I expect? Am I wrong to assume that this 
would result in dead time at all? I'm using a CAEN V792n 16-channel QDC and the hit 
frequency that I'm using to test is 20 kHz.

Thanks.

Isaac
  2055   16 Dec 2020   Konstantin Olchanski   Forum   Issues building banks.
> I'm currently trying to build events through doing block transfers.

I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
use. The restrictions on data alignment come from the VME master.
I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
I think there is no restriction for USB-VME bridges and similar.

Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
(no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
single-word reads instead of block transfer)?

> The worry was that organizing and packaging bank data into an array would produce too much dead time 
> causing too many missed events.

Valid concerns.

> I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
> improperly aligned banks.

You should not see any problems with unaligned transfers if you give the DMA engine
correct memory addresses as required by the hardware:

- always aligned to 32-bit (4 bytes, last two address bits set to 0)
- aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes, 
last 3 address bits set to 0)
- aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).

You also need to specify the correct amount of data to read: the number of bytes should be a multiple of 4 for 32-
bit transfers, a multiple of 8 for 64-bit transfers and a multiple of 16 for 128-bit transfers (2eVME/2eSST).

Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to 
align event length to DMA restrictions. Sometimes you need to
enable this in a control register (V792, V1190).
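
As a rough illustration (the buffer size and helper are just a sketch, not MIDAS code), the usual pattern
is an aligned DMA buffer plus rounding the requested byte count up to the transfer width:

#include <cstdint>
#include <cstddef>

/* 16-byte alignment covers the 32-bit, 64-bit and 2eVME/2eSST cases */
alignas(16) static uint32_t dma_buffer[0x8000];

/* round a transfer length up to the DMA width (alignment must be a power of two) */
static size_t round_up(size_t nbytes, size_t alignment)
{
   return (nbytes + alignment - 1) & ~(alignment - 1);
}

/* example: 18 words from the V792 = 72 bytes, already a multiple of 8 for MBLT64;
   19 words = 76 bytes would round up to 80, and the module then has to be told to
   emit pad words for the extra slot */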

> Giving me a headache.

Me too. MIDAS recently introduced the QWORD 64-bit data type, banks of this type
should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).

With QWORD banks, you need to use bk_init32a() instead of bk_init32().
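
In the frontend this looks roughly like the fragment below (bank name and contents are placeholders,
midas.h assumed):

   DWORD *pdata;
   bk_init32a(pevent);                          /* 64-bit aligned bank headers */
   bk_create(pevent, "VME0", TID_DWORD, (void **)&pdata);
   /* copy an even number of 32-bit words so the next bank stays 64-bit aligned */
   bk_close(pevent, pdata);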

> My question is, if I were to revert back to simple 32 bit read cycles

Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit 
block transfer last.

Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
with bugs in the frontend, etc) and this approach helps to identify the source
of trouble.

> and using 
> the fevme.cxx template's method of organizing data before sending them to the 
> buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
> would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
> frequency that I'm using to test is 20kHz.

Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.

The old fevme frontend is based on the mfe.c framework and implementing
async readout requires special contortions. The structure of the new TMFE C++ frontend
class is supposed to make it easier, but I do not have an example TMFE based fevme yet.

P.S. Without using block transfer, your max rate is limited to:

16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for 
correct BLT64 block read).

using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.

(you do not say if you have any other VME modules you have to read)

K.O.
  2056   16 Dec 2020   Isaac Labrie Boulay   Forum   Issues building banks.
Thanks for the quick reply,

> > I'm currently trying to build events through doing block transfers.
> 
> I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> use. The restrictions on data alignment come from the VME master.
> I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> I think there is no restriction for USB-VME bridges and similar.
> 
> Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> single-word reads instead of block transfer)?

I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
read out on every poll_event() whatever data is in either module (TDC_dataready || QDC_dataready). The 
VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
the confusing question (can you tell I'm an intern?).

> > The worry was that organizing and packaging bank data into an array would produce too much dead time 
> > causing too many missed events.
> 
> Valid concerns.
> 
> > I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
> > improperly aligned banks.
> 
> You should not see any problems with unaligned transfers if you give the DMA engine
> correct memory addresses as required by the hardware:
> 
> - always aligned to 32-bit (4 bytes, last two address bits set to 0)
> - aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes, 
> last 3 address bits set to 0)
> - aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).
> 
> You also need to specify correct amount of data to read: number of bytes should be multiple of 4 for 32-
> bit transfers, multiple to 8 for 64-bit transfers and multiple of 16 for 128-bit transfers (2eVME/2eSST).

I am transferring 32-bit words, so the read is always a multiple of 4 bytes; that's 
good.

> Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to 
> align event length to DMA restrictions. Sometimes you need to
> enable this in a control register (V792, V1190).
> 
> > Giving me a headache.
> 
> Me too. MIDAS recently introduced the QWORD 64-bit data type, banks of this type
> should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
> transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).
> 
> With QWORD banks, you need to use bk_init32a() instead of bk_init32().
> 
> > My question is, if I were to revert back to simple 32 bit read cycles
> 
> Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit 
> block transfer last.
> 
> Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
> with bugs in the frontend, etc) and this approach helps to identify the source
> of trouble.
> 
> > and using 
> > the fevme.cxx template's method of organizing data before sending them to the 
> > buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
> > would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
> > frequency that I'm using to test is 20kHz.
> 
> Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.
> 
> The old fevme frontend is based on the mfe.c framework and implementing
> async readout requires special contortions. The structure of the new TMFE C++ frontend
> class is supposed to make it easier, but I do not have an example TMFE based fevme yet.
> 
> P.S. Without using block transfer, your max rate is limited to:
> 
> 16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for 
> correct BLT64 block read).
> 
> using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.
> 
> (you do not say if you have any other VME modules you have to read)
> 

Okay, so transferring 18 + 6 words should give me close to a 40 kHz repetition rate. That's good news. I will just 
stick to 1-word transfers.

The way that transfers are done in the fevme.cxx requires iterating through 16-word arrays a number of times (3 
times I believe, if you include the iterations taking place in v792_EventRead()). Does that not pose a 
significant dead time concern? 

> K.O.

Thanks again for taking the time to help me out!

Cheers.

Isaac
  2057   16 Dec 2020   Konstantin Olchanski   Forum   Issues building banks.
> > > I'm currently trying to build events through doing block transfers.
> > 
> > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> > use. The restrictions on data alignment come from the VME master.
> > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > I think there is no restriction for USB-VME bridges and similar.
> > 
> > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> > single-word reads instead of block transfer)?
> 
> I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
> read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
> VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
> the confusing question (Can you tell I'm an intern?).
> 

Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
at maximum speed. This is because you must have two concurrent activities happening at the same time:

(1) tell the VME bridge to read data,
(2) package this data into midas banks and events and write it to the MIDAS event buffer.

If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
and unless (2) takes 0 seconds (it does not) you will have a slow down.

So for maximum data rate, I prefer to have 3 threads:

thread 1: run the VME transfers, store data in circular buffer (today it would be std::deque<std::vector<char>>)
thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer 
(std::deque<EVENT_HEADER*>).
thread 3: write data to midas event buffer (call bm_send_event(), etc)

This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
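
A bare-bones sketch of that three-thread structure (everything here is placeholder code, no real VME or
MIDAS calls; a real version would use condition variables instead of busy-polling the deques):

#include <atomic>
#include <chrono>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

static std::mutex raw_mutex, evt_mutex;
static std::deque<std::vector<char>> raw_fifo;   // thread 1 -> thread 2
static std::deque<std::vector<char>> evt_fifo;   // thread 2 -> thread 3
static std::atomic<bool> running{true};

static void vme_reader() {                 // thread 1: keep the VME DMA engine busy
   while (running) {
      std::vector<char> block(72);         // placeholder for a BLT32/MBLT64 read
      std::lock_guard<std::mutex> lk(raw_mutex);
      raw_fifo.push_back(std::move(block));
   }
}

static void encoder() {                    // thread 2: raw blocks -> midas banks/events
   while (running) {
      std::vector<char> block;
      {
         std::lock_guard<std::mutex> lk(raw_mutex);
         if (raw_fifo.empty()) continue;
         block = std::move(raw_fifo.front());
         raw_fifo.pop_front();
      }
      // here: bk_init32a()/bk_create()/bk_close() into an event buffer
      std::lock_guard<std::mutex> lk(evt_mutex);
      evt_fifo.push_back(std::move(block));
   }
}

static void sender() {                     // thread 3: hand completed events to bm_send_event()
   while (running) {
      std::lock_guard<std::mutex> lk(evt_mutex);
      if (!evt_fifo.empty())
         evt_fifo.pop_front();             // placeholder for bm_send_event(...)
   }
}

int main() {
   std::thread t1(vme_reader), t2(encoder), t3(sender);
   std::this_thread::sleep_for(std::chrono::seconds(1));
   running = false;
   t1.join(); t2.join(); t3.join();
   return 0;
}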

>
> Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
> stick to 1 word transfers.
>

I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:

V7865: DWORD read - CPU - PCI bus - tsi148 - VME
V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back, 
"extract data from USB packet")

> 
> The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3 
> times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a 
> significant deadtime concern? 
> 

Hmm... I am not sure which fevme you refer to. I guess I can find a version of fevme.cxx where data is read at
maximum VME speed if you want it.

K.O.
  2058   16 Dec 2020   Isaac Labrie Boulay   Forum   Issues building banks.
> > > > I'm currently trying to build events through doing block transfers.
> > > 
> > > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> > > use. The restrictions on data alignment come from the VME master.
> > > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > > I think there is no restriction for USB-VME bridges and similar.
> > > 
> > > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> > > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> > > single-word reads instead of block transfer)?
> > 
> > I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
> > read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
> > VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
> > the confusing question (Can you tell I'm an intern?).
> > 
> 
> Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
> at maximum speed. This is because you must have two concurrent activities happening at the same time:
> 

I am using the mfe.cxx backend thread; I'm guessing that this is the file you are referring to.

> (1) tell the VME bridge to read data,
> (2) package this data into midas banks and events and write it to the MIDAS event buffer.
> 
> If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
> and unless (2) takes 0 seconds (it does not) you will have a slow down.
> 

I see.


> So for maximum data rate, I prefer to have 3 threads:
> 
> thread 1: run the VME transfers, store data in circular buffer (today it would be std::deque<std::vector<char>>)
> thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer 
> (std::deque<EVENT_HEADER*>).
> thread 3: write data to midas event buffer (call bm_send_event(), etc)
> 
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).

Yes, it seems like a bit of work.

> >
> > Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
> > stick to 1 word transfers.
> >
> 
> I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:
> 
> V7865: DWORD read - CPU - PCI bus - tsi148 - VME
> V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back, 
> "extract data from USB packet")

I found the following information in the CAEN V1718 manual:

"Transfer Rate = ~30MByte/s. Transfer rate supported in MBLT read cycles (block size = 32 kb), using a PC host with 
Windows XP or Linux and High Speed USB"

I'm guessing the sentence simply means that the rate increases with multiplexed block transfers. If the transfer rate 
is 30 MBytes/s, I should be able to transfer 32-bit words at about 7.5 million words per second.

> 
> > 
> > The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3 
> > times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a 
> > significant deadtime concern? 
> > 
> 
> Hmm... I am not sure what fevme you refer to. I guess I can find version of fevme.cxx where data is read at
> maximum VME speed if you want it.

This is the VME C++ frontend example in the directory /midas/examples/Triumf/c++/

If you can find a faster version of this code I would definitely like to check it out!

> 
> K.O.


Thanks again.

Isaac
  2059   16 Dec 2020   Stefan Ritt   Forum   Issues building banks.
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).

Actually that's not true. Just look at 

midas/examples/mtfe/mtfe.c

this is an example of a frontend with an equipment using the EQ_USER flag, which allows you to easily run a separate 
thread (or more) for event collection and processing. It is of course all old-fashioned C style (the code is from 2007), 
but it works.

Stefan
  2060   16 Dec 2020   Isaac Labrie Boulay   Forum   Issues building banks.
> > This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
> 
> Actually that's not true. Just look at 
> 
> midas/examples/mtfe/mtfe.c
> 
> this is an example for a frontend with equipment with the EQ_USER flag, which allows you easily to run a separate 
> thread (or more) for event collection and processing. Of course all old-fashioned C style (code is from 2007) but it 
> works.
> 
> Stefan

Thank you sir I'll give it a look.

Cheers

Isaac
  2070   08 Jan 2021   Stefan Ritt   Forum   history and variables confusion
We kind of agreed to rewrite the slow control system in C++. Each device will have its own driver derived from a common base class implementing the general communication. The reason we need a "system" and not only a "hand-written" driver is because we want:

- glue many device drivers together for a single equipment
- have a dedicated readout thread for every device, in order not to block other devices
- have a common error reporting scheme working with several threads
- being able to disable/enable individual devices without changing the history system each time
- having a common naming scheme for all devices (like "enforce" /Equipment/<name>/Settings/Names xxx) which is needed by the history system
- ...
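
A very rough sketch of that direction (class and method names are purely illustrative, this is not an
existing MIDAS interface):

#include <atomic>
#include <string>
#include <thread>

// illustrative only: a common base class handling naming and a per-device readout thread,
// with one derived class per device type
class Device {
public:
   explicit Device(std::string name) : fName(std::move(name)) {}
   virtual ~Device() { Stop(); }
   void Start() { fRun = true; fThread = std::thread([this] { while (fRun) Read(); }); }
   void Stop()  { fRun = false; if (fThread.joinable()) fThread.join(); }
   const std::string& Name() const { return fName; }
protected:
   virtual void Read() = 0;   // the derived class talks to the hardware here
private:
   std::string fName;
   std::atomic<bool> fRun{false};
   std::thread fThread;
};

class PowerSupply : public Device {
public:
   using Device::Device;
   ~PowerSupply() override { Stop(); }   // stop the readout thread before this object is destroyed
protected:
   void Read() override {
      // read voltage/current and push the values into /Equipment/<name>/Variables,
      // keeping Settings/Names in sync with the number of values
   }
};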

Will see when we have time for that.

Stefan
  2071   13 Jan 2021   Isaac Labrie Boulay   Forum   poll_event() is very slow.
Hi all,

I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
Currently it seems like I can't get 'lam's to happen faster than 120 times per second. 
There must be a way to make this faster. From what I understand, changing the poll 
time (500 ms by default) won't affect the frequency of polling, just the 'lam' 
period.

Any suggestions?

Thanks for your help!

Isaac

Hi,

What is the actual readout time, event size?
Do you have multiple equipment and of what type if any?

PAA
  2072   13 Jan 2021   Konstantin Olchanski   Forum   poll_event() is very slow.
> 
> I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
> Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
> There must be a way to make this faster. From what I understand, changing the poll 
> time (500ms by default) won't affect the frequency of polling just the 'lam' 
> period.
> 
> Any suggestions?
> 

You could switch from the traditional midas mfe.c frontend to the C++ TMFE frontend,
where all this "lam" and "poll" business is removed.

At the moment, there are two example programs using the C++ TMFE frontend,
single threaded (progs/fetest_tmfe.cxx) and multithreaed (progs/fetest_tmfe_thread.cxx).

K.O.
  Draft   13 Jan 2021   Pierre-Andre Amaudruz   Forum   poll_event() is very slow.
> Hi all,
> 
> I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
> Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
> There must be a way to make this faster. From what I understand, changing the poll 
> time (500ms by default) won't affect the frequency of polling just the 'lam' 
> period.
> 
> Any suggestions?
> 
> Thanks for your help!
> 
> Isaac

Hi,

How many equipment do you have and of what type?
What is the measured readout time of your equipment?

As you mentioned, the polling time defines the maximum time you spend in the polling call before checking other equipment and system activities. But as soon as you get a LAM during the polling loop, the event is read out. The readout time of this equipment obviously has to be considered as well.

In case you have multiple equipments, the readout time of the other equipments has to be taken into account as well, as you won't return to your polling prior to their completion.
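
For reference, the mfe.c polling loop has roughly this shape (the data-ready check is a placeholder for
your modules' registers, midas.h assumed):

INT poll_event(INT source, INT count, BOOL test)
{
   for (INT i = 0; i < count; i++) {
      int lam = 0;   /* placeholder: check TDC_dataready || QDC_dataready here */
      if (lam)
         if (!test)
            return lam;   /* the framework then calls the readout routine of this equipment */
   }
   return 0;
}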
  2074   13 Jan 2021   Stefan Ritt   Forum   poll_event() is very slow.
Something must be wrong on your side. If you take the example frontend under

midas/examples/experiment/frontend.cxx

and let it run to produce dummy events, you get about 90 Hz. This is because we have a

  ss_sleep(10);

in the read_trigger_event() routine to throttle things down. If you remove that sleep, 
you get an event rate of about 500'000 Hz. So the framework is really quick.

Probably your routine which looks for a 'lam' takes really long and should be fixed.

Stefan
  2075   14 Jan 2021   Pintaudi Giorgio   Forum   poll_event() is very slow.
> Something must be wrong on your side. If you take the example frontend under
> 
> midas/examples/experiment/frontend.cxx
> 
> and let it run to produce dummy events, you get about 90 Hz. This is because we have a
> 
>   ss_sleep(10);
> 
> in the read_trigger_event() routine to throttle things down. If you remove that sleep, 
> you get an event rate of about 500'000 Hz. So the framework is really quick.
> 
> Probably your routine which looks for a 'lam' takes really long and should be fixed.
> 
> Stefan

Sorry if I am going off-topic but, because the ss_sleep function was mentioned here, I 
would like to take the chance and report an issue that I am having.

In all my slow control frontends, the CPU usage for each frontend is close to 100%. This 
means that each frontend is monopolizing a single core. When I did some profiling, I 
noticed that 99% of the time is spent inside the ss_sleep function. Now, I would expect 
the ss_sleep function to require no CPU usage at all, or very little.

So my two questions are:
Is this a bug or a feature?
Would you able to check/reproduce this behavior or do you need additional info from my 
side?
  2076   14 Jan 2021   Isaac Labrie Boulay   Forum   poll_event() is very slow.
> Something must be wrong on your side. If you take the example frontend under
> 
> midas/examples/experiment/frontend.cxx
> 
> and let it run to produce dummy events, you get about 90 Hz. This is because we have a
> 
>   ss_sleep(10);
> 
> in the read_trigger_event() routine to throttle things down. If you remove that sleep, 
> you get an event rate of about 500'000 Hz. So the framework is really quick.
> 
> Probably your routine which looks for a 'lam' takes really long and should be fixed.
> 
> Stefan

Hi Stefan,

I should mention that I was using midas/examples/Triumf/c++/fevme.cxx. I was trying to see 
the max speed, so I had the 'lam' always = 1 with nothing else to add overhead in 
poll_event(). I was getting <200 Hz. I am assuming that this is a bug; there is no 
ss_sleep() in that function.

Thanks for your quick response!

Isaac
  2077   15 Jan 2021   Isaac Labrie Boulay   Forum   poll_event() is very slow.
> > 
> > I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
> > Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
> > There must be a way to make this faster. From what I understand, changing the poll 
> > time (500ms by default) won't affect the frequency of polling just the 'lam' 
> > period.
> > 
> > Any suggestions?
> > 
> 
> You could switch from the traditional midas mfe.c frontend to the C++ TMFE frontend,
> where all this "lam" and "poll" business is removed.
> 
> At the moment, there are two example programs using the C++ TMFE frontend,
> single threaded (progs/fetest_tmfe.cxx) and multithreaed (progs/fetest_tmfe_thread.cxx).
> 
> K.O.

Ok. I did not know that there was a C++ OOD frontend example in MIDAS. I'll take a look at 
it. Is there any documentation on how it works?

Thanks for the support!

Isaac
  2084   08 Feb 2021   Konstantin Olchanski   Forum   poll_event() is very slow.
> I should mention that I was using midas/examples/Triumf/c++/fevme.cxx

this is correct, the fevme frontend is written to do 100% CPU-busy polling.

there is several reasons for this:
- on our VME processors, we have 2 core CPUs, 1st core can poll the VME bus, 2nd core can run 
mfe.c and the ethernet transmitter.
- interrupts are expensive to use (in latency and in cpu use) because kernel handler has to call 
use handler, return back etc
- sub-millisecond sleep used to be expensive and unreliable (on 1-2GHz "core 1" and "core 2" 
CPUs running SL6 and SL7 era linux). As I understand, current linux and current 3+GHz CPUs can 
do reliable microsecond sleep.

K.O.