    Reply  10 Dec 2020, Frederik Wauters, Forum, history and variables confusion genesys.odb
I wanted to have a C++ style driver, e.g. an instance of a "PowerSupply" class. This was not compatible with the list of DEVICE_DRIVER structs, which needs a C function entry point with variable arguments. 

Anyways, I attach my ODB. I believe the issue stands regardless of the specific design choice here. Setting the History Log flag copies the banks created to the "Variables" of every equipment initialized, leading to a mismatch between the names array and the variables. This can be solved by using Virtual history events instead of FE history events, but then the flag in the Equipment is confusing.  

Bank creation in readout function:

for (const auto& d : drivers)
{
  ...
  ...
  bk_create(pevent, bk_name, TID_FLOAT, (void **)&pdata);
  ...
  std::vector<float> voltage = d->GetVoltage();
  std::vector<float> current = d->GetCurrent();
  // one (voltage, current) pair per channel
  for (size_t iChannel = 0; iChannel < voltage.size(); iChannel++)
  {
    *pdata++ = voltage.at(iChannel);
    *pdata++ = current.at(iChannel);
  }
  bk_close(pevent, pdata);
}
    Reply  11 Dec 2020, Frederik Wauters, Forum, history and variables confusion 
1. Ok, so calling the same readout function from different equipments is just a bad idea, my bad; no blame for Midas for writing data from both banks to both ODB trees ...

2. One needs the same number of bank entries as the size of settings/names[]. Otherwise the "History Log" flag does not work. So just don't use "Names" but "Channel names" or something. 
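For instance, a minimal sketch of the counting constraint (channel count and naming scheme are just an illustration, not taken from my actual ODB):

#include <string>
#include <vector>

// For N channels read out as interleaved (voltage, current) pairs, the bank
// holds 2*N floats, so Settings/Names[] must also have 2*N entries, in the
// same order the readout fills the bank.
std::vector<std::string> make_channel_names(int n_channels)
{
   std::vector<std::string> names;
   for (int i = 0; i < n_channels; i++) {
      names.push_back("Voltage CH" + std::to_string(i));
      names.push_back("Current CH" + std::to_string(i));
   }
   return names; // size() == 2 * n_channels, matching the bank size
}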

" Second, it's advisable to group similar equipment into one. Like if you have five power supplies powering
and experiment, you don't want to have five equipments Supply1, Supply2, ..., but only one equipment
"Power Supplies".  "

It would be nice if this also works with C++ style drivers, i.e. an instance of a class. I don't know how one would give an entry point to the "DEVICE_DRIVER" struct then.
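One pattern that might work (just a sketch; the command codes and names below are made up, not the real MIDAS CMD_xxx values or the actual DEVICE_DRIVER layout) is a C-style variadic function that forwards to a class instance:

#include <cstdarg>

// Illustrative command codes; the real CMD_xxx values live in midas.h.
enum { DD_INIT = 1, DD_GET = 2 };

// Hypothetical C++ driver class.
class PowerSupply {
public:
   float GetVoltage(int channel) { return 0.0f; /* talk to hardware here */ }
};

static PowerSupply gSupply; // one instance per DEVICE_DRIVER entry in this sketch

// C-compatible variadic entry point of the kind the DEVICE_DRIVER list expects;
// it just forwards each command to the C++ instance.
extern "C" int psu_driver_entry(int cmd, ...)
{
   va_list args;
   va_start(args, cmd);
   switch (cmd) {
   case DD_GET: {
      int channel   = va_arg(args, int);
      float *pvalue = va_arg(args, float *);
      *pvalue = gSupply.GetVoltage(channel);
      break;
   }
   default:
      break;
   }
   va_end(args);
   return 1; // success
}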



> I wanted to have a C++ style driver, e.g. an instance of a "PowerSupply" class. This was not compatible with the list of DEVICE_DRIVER structs, which needs a C function entry point with variable arguments. 
> 
> Anyways, I attach my ODB. I believe the issue stands regardless of the specific design choice here. Setting the History Log flag copies the banks created to the "Variables" of every equipment initialized, leading to a mismatch between the names array and the variables. This can be solved by using Virtual history events instead of FE history events, but then the flag in the Equipment is confusing.  
> 
> Bank creation in readout function:
> 
> for (const auto& d : drivers)
> {
>   ...
>   ...
>   bk_create(pevent, bk_name, TID_FLOAT, (void **)&pdata);
>   ...
>   std::vector<float> voltage = d->GetVoltage();
>   std::vector<float> current = d->GetCurrent();
>   // one (voltage, current) pair per channel
>   for (size_t iChannel = 0; iChannel < voltage.size(); iChannel++)
>   {
>     *pdata++ = voltage.at(iChannel);
>     *pdata++ = current.at(iChannel);
>   }
>   bk_close(pevent, pdata);
> }
    Reply  15 Dec 2020, Konstantin Olchanski, Forum, history and variables confusion 
I think you are facing several problems:

a) mlogger does not clearly explain what history names will be used for which entries
in /eq/xxx/variables. "mlogger -v" almost does it, but we also need
"mlogger -v -n" to "show what you will do, but do not do it yet".

b) the mfe.c and device class driver structure is very dated and tries to "do C++ in C". If it works for you,
certainly use it, but if it confuses you (as it confuses me), it probably only takes a few lines of C++
to replace the whole thing (minus the actual device drivers, which are the meat of it).

I think today you face a choice:
- invest some time to understand the old device driver framework
- invest some time to do it "by hand" in c++, write your own device drivers (or use 3rd party drivers or snarf the "c" drivers from midas). Use the TMFE C++ frontend class if you go this route.

I would estimate that both choices are about the same amount of work.

K.O.


> 1. Ok, so calling the same readout function from different equipments is just a bad idea, my bad; no blame for Midas for writing data from both banks to both ODB trees ...
> 
> 2. One needs the same number of bank entries as the size of settings/names[]. Otherwise the "History Log" flag does not work. So just don't use "Names" but "Channel names" or something. 
> 
> " Second, it's advisable to group similar equipment into one. Like if you have five power supplies powering
> and experiment, you don't want to have five equipments Supply1, Supply2, ..., but only one equipment
> "Power Supplies".  "
> 
> It would be nice if this also works with C++ style drivers, i.e. an instance of a class. I don't know how one would give an entry point to the "DEVICE_DRIVER" struct then.
> 
> 
> 
> > I wanted to have a C++ style driver, e.g. an instance of a "PowerSupply" class. This was not compatible with the list of DEVICE_DRIVER structs, which needs a C function entry point with variable arguments. 
> > 
> > Anyways, I attach my ODB. I believe the issue stands regardless of the specific design choice here. Setting the History Log flag copies the banks created to the "Variables" of every equipment initialized, leading to a mismatch between the names array and the variables. This can be solved by using Virtual history events instead of FE history events, but then the flag in the Equipment is confusing.  
> > 
> > Bank creation in readout function:
> > 
> > for (const auto& d : drivers)
> > {
> >   ...
> >   ...
> >   bk_create(pevent, bk_name, TID_FLOAT, (void **)&pdata);
> >   ...
> >   std::vector<float> voltage = d->GetVoltage();
> >   std::vector<float> current = d->GetCurrent();
> >   // one (voltage, current) pair per channel
> >   for (size_t iChannel = 0; iChannel < voltage.size(); iChannel++)
> >   {
> >     *pdata++ = voltage.at(iChannel);
> >     *pdata++ = current.at(iChannel);
> >   }
> >   bk_close(pevent, pdata);
> > }
Entry  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
Hi all,

I'm currently trying to build events through doing block transfers. The worry was 
that organizing and packaging bank data into an array would produce too much dead 
time causing too many missed events. Trying out that method, I'm running into all 
sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
improperly aligned banks. Giving me a headache.

My question is, if I were to revert to simple 32-bit read cycles and use 
the fevme.cxx template's method of organizing data before sending it to the 
buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
frequency that I'm using to test is 20kHz.

Thanks.

Isaac
    Reply  16 Dec 2020, Konstantin Olchanski, Forum, Issues building banks. 
> I'm currently trying to build events through doing block transfers.

I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
use. The restrictions on data alignment come from the VME master.
I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
I think there is no restriction for USB-VME bridges and similar.

Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
(no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
single-word reads instead of block transfer)?

> The worry was that organizing and packaging bank data into an array would produce too much dead time 
> causing too many missed events.

Valid concerns.

> I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
> improperly aligned banks.

You should not see any problems with unaligned transfers if you give the DMA engine
correct memory addresses as required by the hardware:

- always aligned to 32-bit (4 bytes, last two address bits set to 0)
- aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes, 
last 3 address bits set to 0)
- aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).

You also need to specify the correct amount of data to read: the number of bytes should be a multiple of 4 for 32-
bit transfers, a multiple of 8 for 64-bit transfers, and a multiple of 16 for 128-bit transfers (2eVME/2eSST).

Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to 
align event length to DMA restrictions. Sometimes you need to
enable this in a control register (V792, V1190).
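For illustration, a minimal sketch of padding a read length up to the DMA alignment (the helper name is made up):

#include <cstddef>

// Round a transfer length up to the DMA alignment: 4 bytes for BLT32,
// 8 for MBLT64, 16 for 2eVME/2eSST ("alignment" must be a power of two).
static size_t pad_dma_length(size_t nbytes, size_t alignment)
{
   return (nbytes + alignment - 1) & ~(alignment - 1);
}

// Example: a V792 event of 18 32-bit words = 72 bytes is already a multiple
// of 8, so an MBLT64 read needs no pad words; a 19-word event (76 bytes)
// would be padded up to 80 bytes.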

> Giving me a headache.

Me too. MIDAS recently introduced the QWORD 64-bit data type; banks of this type
should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).

With QWORD banks, you need to use bk_init32a() instead of bk_init32().

> My question is, if I were to revert back to simple 32 bit read cycles

Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit 
block transfer last.

Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
with bugs in the frontend, etc) and this approach helps to identify the source
of trouble.

> and using 
> the fevme.cxx template's method of organizing data before sending them to the 
> buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
> would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
> frequency that I'm using to test is 20kHz.

Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.

The old fevme frontend is based on the mfe.c framework and implementing
async readout requires special contortions. The structure of the new TMFE C++ frontend
class is supposed to make it easier, but I do not have an example TMFE based fevme yet.

P.S. Without using block transfer, your max rate is limited to:

16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for 
correct BLT64 block read).

Using VME single-word reads at 1 us per transfer, 18 us per event = ~55 kHz repetition rate.

(you do not say if you have any other VME modules you have to read)

K.O.
    Reply  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
Thanks for the quick reply,

> > I'm currently trying to build events through doing block transfers.
> 
> I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> use. The restrictions on data alignment come from the VME master.
> I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> I think there is no restriction for USB-VME bridges and similar.
> 
> Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> single-word reads instead of block transfer)?

I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
the confusing question (Can you tell I'm an intern?).

> > The worry was that organizing and packaging bank data into an array would produce too much dead time 
> causing too many missed events.
> 
> Valid concerns.
> 
> > I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or 
> improperly aligned banks.
> 
> You should not see any problems with unaligned transfers if you give the DMA engine
> correct memory addresses as required by the hardware:
> 
> - always aligned to 32-bit (4 bytes, last two address bits set to 0)
> - aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes, 
> last 3 address bits set to 0)
> - aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).
> 
> You also need to specify correct amount of data to read: number of bytes should be multiple of 4 for 32-
> bit transfers, multiple to 8 for 64-bit transfers and multiple of 16 for 128-bit transfers (2eVME/2eSST).

I am transferring 32-bit words. Transferring 32-bit words should always read multiples of 4 bytes so that's 
good.

> Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to 
> align event length to DMA restrictions. Sometimes you need to
> enable this in a control register (V792, V1190).
> 
> > Giving me a headache.
> 
> Me too. MIDAS recently introduced the QWORD 64-bit data type, banks of this type
> should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
> transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).
> 
> With QWORD banks, you need to use bk_init32a() instead of bk_init32().
> 
> > My question is, if I were to revert back to simple 32 bit read cycles
> 
> Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit 
> block transfer last.
> 
> Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
> with bugs in the frontend, etc) and this approach helps to identify the source
> of trouble.
> 
> > and using 
> > the fevme.cxx template's method of organizing data before sending them to the 
> > buffer, what kind of deadtime should I expect? Am I wrong to assume that this 
> > would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit 
> > frequency that I'm using to test is 20kHz.
> 
> Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.
> 
> The old fevme frontend is based on the mfe.c framework and implementing
> async readout requires special contortions. The structure of the new TMFE C++ frontend
> class is supposed to make it easier, but I do not have an example TMFE based fevme yet.
> 
> P.S. Without using block transfer, your max rate is limited to:
> 
> 16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for 
> correct BLT64 block read).
> 
> using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.
> 
> (you do not say if you have any other VME modules you have to read)
> 

Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
stick to 1 word transfers.

The way that transfers are done in the fevme.cxx requires iterating through 16-word arrays a number of times (3 
times, I believe, if you include the iterations taking place in v792_EventRead()). Does that not pose a 
significant deadtime concern? 

> K.O.

Thanks again for taking the time to help me out!

Cheers.

Isaac
    Reply  16 Dec 2020, Konstantin Olchanski, Forum, Issues building banks. 
> > > I'm currently trying to build events through doing block transfers.
> > 
> > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> > use. The restrictions on data alignment come from the VME master.
> > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > I think there is no restriction for USB-VME bridges and similar.
> > 
> > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> > single-word reads instead of block transfer)?
> 
> I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
> read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
> VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
> the confusing question (Can you tell I'm an intern?).
> 

Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
at maximum speed. This is because you must have two concurrent activities happening at the same time:

(1) tell the VME bridge to read data,
(2) package this data into midas banks and events and write it to the MIDAS event buffer.

If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
and unless (2) takes 0 seconds (it does not) you will have a slowdown.

So for maximum data rate, I prefer to have 3 threads:

thread 1: run the VME transfers, store data in circular buffer (today it would be std::deque<std::vector<char>>)
thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer 
(std::deque<EVENT_HEADER*>).
thread 3: write data to midas event buffer (call bm_send_event(), etc)

This is very hard to do using the mfe.c frontend (that's the main reason I wrote the TMFE C++ frontend class).
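Roughly, a sketch of that producer/consumer structure (read_vme_block() and encode_event() are hypothetical placeholders, and real code would use a condition variable instead of the busy loop):

#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical helpers: read one DMA block from the VME bridge, and encode
// one block into a MIDAS event (bk_init32(), bk_create(), ...).
std::vector<char> read_vme_block();
void encode_event(const std::vector<char> &block);

std::mutex fifo_mutex;
std::deque<std::vector<char>> raw_fifo;  // buffer between threads 1 and 2

// Thread 1: keep the VME bus busy, push raw blocks into the FIFO.
void vme_reader_thread()
{
   while (true) {
      std::vector<char> block = read_vme_block();
      std::lock_guard<std::mutex> lock(fifo_mutex);
      raw_fifo.push_back(std::move(block));
   }
}

// Thread 2: drain the FIFO and build MIDAS events; a third thread would
// then call bm_send_event() on the completed events.
void encoder_thread()
{
   while (true) {
      std::vector<char> block;
      {
         std::lock_guard<std::mutex> lock(fifo_mutex);
         if (raw_fifo.empty())
            continue;                    // real code: wait on a condition variable
         block = std::move(raw_fifo.front());
         raw_fifo.pop_front();
      }
      encode_event(block);
   }
}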

>
> Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
> stick to 1 word transfers.
>

I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:

V7865: DWORD read - CPU - PCI bus - tsi148 - VME
V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back, 
"extract data from USB packet")

> 
> The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3 
> times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a 
> significant deadtime concern? 
> 

Hmm... I am not sure which fevme you refer to. I guess I can find a version of fevme.cxx where data is read at
maximum VME speed if you want it.

K.O.
    Reply  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
> > > > I'm currently trying to build events through doing block transfers.
> > > 
> > > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you 
> > > use. The restrictions on data alignment come from the VME master.
> > > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > > I think there is no restriction for USB-VME bridges and similar.
> > > 
> > > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)? 
> > > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses 
> > > single-word reads instead of block transfer)?
> > 
> > I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I 
> > read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The 
> > VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for 
> > the confusing question (Can you tell I'm an intern?).
> > 
> 
> Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
> at maximum speed. This is because you must have two concurrent activities happening at the same time:
> 

I am using the mfe.cxx backend thread; I'm guessing that this is the file you are referring to.

> (1) tell the VME bridge to read data,
> (2) package this data into midas banks and events and write it to the MIDAS event buffer.
> 
> If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
> and unless (2) takes 0 seconds (it does not) you will have a slow down.
> 

I see.


> So for maximum data rate, I prefer to have 3 threads:
> 
> thread 1: run the VME transfers, store data in circular buffer (today it would be std::deque<std::vector<char>>)
> thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer 
> (std::deque<EVENT_HEADER*>).
> thread 3: write data to midas event buffer (call bm_send_event(), etc)
> 
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).

Yes, it seems like a bit of work.

> >
> > Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just 
> > stick to 1 word transfers.
> >
> 
> I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:
> 
> V7865: DWORD read - CPU - PCI bus - tsi148 - VME
> V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back, 
> "extract data from USB packet")

I found the following information in the CAEN V1718 manual:

"Transfer Rate = ~30MByte/s. Transfer rate supported in MBLT read cycles (block size = 32 kb), using a PC host with 
Windows XP or Linux and High Speed USB"

I'm guessing the sentence simply means that the rate increases with multiplexed block transfers. If the transfer rate 
is 30 MBytes/s, I should be able to transfer about 7,500,000 32-bit words per second.

> 
> > 
> > The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3 
> > times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a 
> > significant deadtime concern? 
> > 
> 
> Hmm... I am not sure what fevme you refer to. I guess I can find version of fevme.cxx where data is read at
> maximum VME speed if you want it.

This is the VME C++ frontend example in the directory /midas/examples/Triumf/c++/

If you can find a faster version of this code I would definitely like to check it out!

> 
> K.O.


Thanks again.

Isaac
    Reply  16 Dec 2020, Stefan Ritt, Forum, Issues building banks. 
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).

Actually that's not true. Just look at 

midas/examples/mtfe/mtfe.c

this is an example of a frontend with an equipment using the EQ_USER flag, which easily allows you to run a separate 
thread (or more) for event collection and processing. Of course it's all old-fashioned C style (the code is from 2007), but it 
works.

Stefan
    Reply  16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks. 
> > This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
> 
> Actually that's not true. Just look at 
> 
> midas/examples/mtfe/mtfe.c
> 
> this is an example for a frontend with equipment with the EQ_USER flag, which allows you easily to run a separate 
> thread (or more) for event collection and processing. Of course all old-fashioned C style (code is from 2007) but it 
> works.
> 
> Stefan

Thank you sir I'll give it a look.

Cheers

Isaac
    Reply  08 Jan 2021, Stefan Ritt, Forum, history and variables confusion 
We kind of agreed to rewrite the slow control system in C++. Each device will have its own driver derived from a common base class implementing the general communication. The reason we need a "system" and not just a hand-written driver is that we want to:

- glue many device drivers together for a single equipment
- have a dedicated readout thread for every device, in order not to block other devices
- have a common error reporting scheme working with several threads
- be able to disable/enable individual devices without changing the history system each time
- have a common naming scheme for all devices (like "enforce" /Equipment/<name>/Settings/Names xxx), which is needed by the history system
- ...

Will see when we have time for that.
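A very rough sketch of what such a base class could look like (class and method names here are illustrative only, not the actual design):

#include <atomic>
#include <string>
#include <thread>
#include <vector>

// Common base class: owns the communication and a per-device readout thread.
class SlowControlDevice {
public:
   explicit SlowControlDevice(std::string name) : fName(std::move(name)) {}
   virtual ~SlowControlDevice() = default;   // call Stop() before destruction

   void Start() { fRunning = true; fThread = std::thread(&SlowControlDevice::Loop, this); }
   void Stop()  { fRunning = false; if (fThread.joinable()) fThread.join(); }

   // Each concrete driver implements the actual hardware access.
   virtual std::vector<float> Read() = 0;

protected:
   void Loop() {
      while (fRunning) {
         std::vector<float> values = Read();  // errors would be reported centrally here
         (void)values;  // hand the values to the equipment / history layer
         // plus some polling interval (sleep) between readouts
      }
   }

   std::string fName;
   std::atomic<bool> fRunning{false};
   std::thread fThread;
};

// One concrete driver per device type.
class PowerSupply : public SlowControlDevice {
public:
   using SlowControlDevice::SlowControlDevice;
   std::vector<float> Read() override { return {/* voltages, currents */}; }
};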

Stefan
Entry  13 Jan 2021, Isaac Labrie Boulay, Forum, poll_event() is very slow. 
Hi all,

I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
There must be a way to make this faster. From what I understand, changing the poll 
time (500 ms by default) won't affect the frequency of polling, just the 'lam' 
period.

Any suggestions?

Thanks for your help!

Isaac

Hi,

What is the actual readout time, event size?
Do you have multiple equipment and of what type if any?

PAA
    Reply  13 Jan 2021, Konstantin Olchanski, Forum, poll_event() is very slow. 
> 
> I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
> Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
> There must be a way to make this faster. From what I understand, changing the poll 
> time (500ms by default) won't affect the frequency of polling just the 'lam' 
> period.
> 
> Any suggestions?
> 

You could switch from the traditional midas mfe.c frontend to the C++ TMFE frontend,
where all this "lam" and "poll" business is removed.

At the moment, there are two example programs using the C++ TMFE frontend,
single threaded (progs/fetest_tmfe.cxx) and multithreaded (progs/fetest_tmfe_thread.cxx).

K.O.
    Reply  13 Jan 2021, Pierre-Andre Amaudruz, Forum, poll_event() is very slow. 
> Hi all,
> 
> I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
> Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
> There must be a way to make this faster. From what I understand, changing the poll 
> time (500ms by default) won't affect the frequency of polling just the 'lam' 
> period.
> 
> Any suggestions?
> 
> Thanks for your help!
> 
> Isaac

Hi,

How many equipment do you have and of what type?
What is the measured readout time of your equipment?

As you mentioned, the polling time defines the maximum time you spend in the polling call before checking other equipment and system activities. But as soon as you get a LAM during the polling loop, the event is read out. The readout time of this equipment obviously has to be considered as well.

In case you have multiple equipments, the readout time of the other equipments has to be taken into account as well, since you won't return to your polling before they have completed.
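For reference, the polling structure in an mfe.c frontend looks roughly like this (a sketch modelled on the standard examples; hardware_data_ready() is a placeholder for your LAM check):

#include "midas.h"

BOOL hardware_data_ready();   // placeholder: poll the QDC/TDC status register

// "count" is calibrated by the framework so that this loop takes about the
// configured poll time when no LAM ever fires. The cost of one LAM check,
// plus the readout time once a LAM is seen, limit the achievable event rate.
INT poll_event(INT source, INT count, BOOL test)
{
   for (INT i = 0; i < count; i++) {
      BOOL lam = hardware_data_ready();
      if (lam && !test)
         return TRUE;          // the framework then calls the readout routine
   }
   return FALSE;
}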
    Reply  13 Jan 2021, Stefan Ritt, Forum, poll_event() is very slow. 
Something must be wrong on your side. If you take the example frontend under

midas/examples/experiment/frontend.cxx

and let it run to produce dummy events, you get about 90 Hz. This is because we have a

  ss_sleep(10);

in the read_trigger_event() routine to throttle things down. If you remove that sleep, 
you get an event rate of about 500'000 Hz. So the framework is really quick.

Probably your routine which looks for a 'lam' takes really long and should be fixed.

Stefan
    Reply  14 Jan 2021, Pintaudi Giorgio, Forum, poll_event() is very slow. 
> Something must be wrong on your side. If you take the example frontend under
> 
> midas/examples/experiment/frontend.cxx
> 
> and let it run to produce dummy events, you get about 90 Hz. This is because we have a
> 
>   ss_sleep(10);
> 
> in the read_trigger_event() routine to throttle things down. If you remove that sleep, 
> you get an event rate of about 500'000 Hz. So the framework is really quick.
> 
> Probably your routine which looks for a 'lam' takes really long and should be fixed.
> 
> Stefan

Sorry if I am going off-topic but, because the ss_sleep function was mentioned here, I 
would like to take the chance and report an issue that I am having.

In all my slow control frontends, the CPU usage for each frontend is close to 100%. This 
means that each frontend is monopolizing a single core. When I did some profiling, I 
noticed that 99% of the time is spent inside the ss_sleep function. Now, I would expect 
that the ss_sleep function should not require any CPU usage at all or very little.

So my two questions are:
Is this a bug or a feature?
Would you be able to check/reproduce this behavior or do you need additional info from my 
side?
    Reply  14 Jan 2021, Isaac Labrie Boulay, Forum, poll_event() is very slow. 
> Something must be wrong on your side. If you take the example frontend under
> 
> midas/examples/experiment/frontend.cxx
> 
> and let it run to produce dummy events, you get about 90 Hz. This is because we have a
> 
>   ss_sleep(10);
> 
> in the read_trigger_event() routine to throttle things down. If you remove that sleep, 
> you get an event rate of about 500'000 Hz. So the framework is really quick.
> 
> Probably your routine which looks for a 'lam' takes really long and should be fixed.
> 
> Stefan

Hi Stefan,

I should mention that I was using midas/examples/Triumf/c++/fevme.cxx. I was trying to see 
the max speed so I had the 'lam' always = 1 with nothing else to add overhead in the 
poll_event(). I was getting <200 Hz. I am assuming that this is a bug. There is no 
ss_sleep() in that function.

Thanks for your quick response!

Isaac
    Reply  15 Jan 2021, Isaac Labrie Boulay, Forum, poll_event() is very slow. 
> > 
> > I'm currently trying to see if I can speed up polling in a frontend I'm testing. 
> > Currently it seems like I can't get 'lam's to happen faster than 120 times/second. 
> > There must be a way to make this faster. From what I understand, changing the poll 
> > time (500ms by default) won't affect the frequency of polling just the 'lam' 
> > period.
> > 
> > Any suggestions?
> > 
> 
> You could switch from the traditional midas mfe.c frontend to the C++ TMFE frontend,
> where all this "lam" and "poll" business is removed.
> 
> At the moment, there are two example programs using the C++ TMFE frontend,
> single threaded (progs/fetest_tmfe.cxx) and multithreaed (progs/fetest_tmfe_thread.cxx).
> 
> K.O.

Ok. I did not know that there was a C++ OOD frontend example in MIDAS. I'll take a look at 
it. Is there any documentation on how it works?

Thanks for the support!

Isaac
    Reply  08 Feb 2021, Konstantin Olchanski, Forum, poll_event() is very slow. 
> I should mention that I was using midas/examples/Triumf/c++/fevme.cxx

this is correct, the fevme frontend is written to do 100% CPU-busy polling.

there are several reasons for this:
- on our VME processors, we have 2 core CPUs, 1st core can poll the VME bus, 2nd core can run 
mfe.c and the ethernet transmitter.
- interrupts are expensive to use (in latency and in cpu use) because the kernel handler has to call 
the user handler, return back, etc.
- sub-millisecond sleep used to be expensive and unreliable (on 1-2GHz "core 1" and "core 2" 
CPUs running SL6 and SL7 era linux). As I understand, current linux and current 3+GHz CPUs can 
do reliable microsecond sleep.

K.O.
Entry  10 Feb 2021, Isaac Labrie Boulay, Forum, Javascript error during run transitions. 
Hi all,

I am encountering a Javascript error (TypeError: client.error is undefined) when 
I transition between run states. Does anybody have an idea of what my problem 
might be? I have pasted an example of what MIDAS logs during such sequences.

Thanks for all the help!

Isaac


09:24:08.611 2021/02/10 [mhttpd,INFO] Executing script 
"~/ANIS_20210106/scripts/start_daq.sh" from ODB "/Script/Start DAQ"

09:24:13.833 2021/02/10 [Logger,LOG] Program Logger on host localhost started

09:24:28.598 2021/02/10 [fevme,LOG] Program fevme on host localhost started

09:24:33.951 2021/02/10 [mhttpd,INFO] Run #234 started

09:26:30.970 2021/02/10 [mhttpd,ERROR] [midas.cxx:4260:cm_transition_call,ERROR] 
Client "Logger" transition 2 aborted while waiting for client "fevme": 
"/Runinfo/Transition in progress" was cleared

09:26:31.015 2021/02/10 [mhttpd,ERROR] [midas.cxx:5120:cm_transition,ERROR] 
transition STOP aborted: "/Runinfo/Transition in progress" was cleared

09:27:27.270 2021/02/10 [mhttpd,ERROR] 
[system.cxx:4937:ss_recv_net_command,ERROR] timeout receiving network command 
header

09:27:27.270 2021/02/10 [mhttpd,ERROR] [midas.cxx:12262:rpc_client_call,ERROR] 
call to "fevme" on "localhost" RPC "rc_transition": timeout waiting for reply