04 Aug 2023, Konstantin Olchanski, Forum, Issues with Universe II Driver
|
> I can compile 32 bit midas. Unless I am misinterpreting the linking error, I don't
> think I can use the driver as built.
I think you are right: the Makefile from the Universe package does not build a -m32 version
of libvme.so. I think I can fix that...
K.O. |
16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks.
|
Hi all,
I'm currently trying to build events through doing block transfers. The worry was
that organizing and packaging bank data into an array would produce too much dead
time causing too many missed events. Trying out that method, I'm running into all
sorts of issues such as unaligned transfers where the QDC events are unaligned, or
improperly aligned banks. Giving me a headache.
My question is, if I were to revert to simple 32 bit read cycles and use
the fevme.cxx template's method of organizing data before sending it to the
buffer, what kind of deadtime should I expect? Am I wrong to assume that this
would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit
frequency that I'm using to test is 20kHz.
Thanks.
Isaac |
16 Dec 2020, Konstantin Olchanski, Forum, Issues building banks.
|
> I'm currently trying to build events through doing block transfers.
I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you
use. The restrictions on data alignment come from the VME master.
I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
I think there is no restriction for USB-VME bridges and similar.
Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)?
(no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses
single-word reads instead of block transfer)?
> The worry was that organizing and packaging bank data into an array would produce too much dead time
> causing too many missed events.
Valid concerns.
> I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or
> improperly aligned banks.
You should not see any problems with unaligned transfers if you give the DMA engine
correct memory addresses as required by the hardware:
- always aligned to 32-bit (4 bytes, last two address bits set to 0)
- aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes,
last 3 address bits set to 0)
- aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).
You also need to specify the correct amount of data to read: the number of bytes should be a multiple of 4 for 32-
bit transfers, a multiple of 8 for 64-bit transfers, and a multiple of 16 for 128-bit transfers (2eVME/2eSST).
Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to
align event length to DMA restrictions. Sometimes you need to
enable this in a control register (V792, V1190).
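For illustration, padding a read length can be done with a helper like this (a minimal sketch, not part of MIDAS; the function name is invented):

// Round nbytes up to the DMA granularity: align = 4 for BLT32,
// 8 for MBLT64, 16 for 2eVME/2eSST. align must be a power of two.
static size_t dma_pad_bytes(size_t nbytes, size_t align)
{
   return (nbytes + align - 1) & ~(align - 1);
}
// e.g. dma_pad_bytes(18*4, 8) == 72: an 18-word V792 event is already
// 64-bit aligned, so no pad words are needed.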
> Giving me a headache.
Me too. MIDAS recently introduced the QWORD 64-bit data type, banks of this type
should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).
With QWORD banks, you need to use bk_init32a() instead of bk_init32().
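For reference, the QWORD bank pattern looks roughly like this (a minimal sketch; the bank name "ADC0" is arbitrary and error handling is omitted):

INT read_trigger_event(char *pevent, INT off)
{
   uint64_t *pdata;
   bk_init32a(pevent); // like bk_init32(), but banks are aligned to 64 bits
   bk_create(pevent, "ADC0", TID_QWORD, (void **)&pdata);
   // ... run the block transfer into pdata, then advance pdata
   //     past the words actually read ...
   bk_close(pevent, pdata);
   return bk_size(pevent);
}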
> My question is, if I were to revert to simple 32 bit read cycles
Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit
block transfer last.
Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
with bugs in the frontend, etc) and this approach helps to identify the source
of trouble.
> and use
> the fevme.cxx template's method of organizing data before sending it to the
> buffer, what kind of deadtime should I expect? Am I wrong to assume that this
> would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit
> frequency that I'm using to test is 20kHz.
Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.
The old fevme frontend is based on the mfe.c framework and implementing
async readout requires special contortions. The structure of the new TMFE C++ frontend
class is supposed to make it easier, but I do not have an example TMFE based fevme yet.
P.S. Without using block transfer, your max rate is limited to:
16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for
correct BLT64 block read).
using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.
(you do not say if you have any other VME modules you have to read)
K.O. |
16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks.
|
Thanks for the quick reply,
> > I'm currently trying to build events through doing block transfers.
>
> I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you
> use. The restrictions on data alignment come from the VME master.
> I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> I think there is no restriction for USB-VME bridges and similar.
>
> Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)?
> (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses
> single-word reads instead of block transfer)?
I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I
read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The
VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for
the confusing question (Can you tell I'm an intern?).
> > The worry was that organizing and packaging bank data into an array would produce too much dead time
> > causing too many missed events.
>
> Valid concerns.
>
> > I'm running into all sorts of issues such as unaligned transfers where the QDC events are unaligned, or
> > improperly aligned banks.
>
> You should not see any problems with unaligned transfers if you give the DMA engine
> correct memory addresses as required by the hardware:
>
> - always aligned to 32-bit (4 bytes, last two address bits set to 0)
> - aligned to 64-bits for MBLT64 64-bit transfers, this would be the normal case for the V792 (8 bytes,
> last 3 address bits set to 0)
> - aligned to 128-bits for 2eVME/2eSST transfers (16 bytes, last 4 bits of address are zero).
>
> You also need to specify the correct amount of data to read: the number of bytes should be a multiple of 4 for 32-
> bit transfers, a multiple of 8 for 64-bit transfers, and a multiple of 16 for 128-bit transfers (2eVME/2eSST).
I am transferring 32-bit words, so each read should always be a multiple of 4 bytes. That's good.
> Very often this requires reading "extra" data words. Most VME modules can generate extra pad words to
> align event length to DMA restrictions. Sometimes you need to
> enable this in a control register (V792, V1190).
>
> > Giving me a headache.
>
> Me too. MIDAS recently introduced the QWORD 64-bit data type, banks of this type
> should have correct alignment for 64-bit VME block transfers. But for 2eVME/2eSST
> transfers, I still have to ensure alignment "by hand" (SIS3820, VF48, etc).
>
> With QWORD banks, you need to use bk_init32a() instead of bk_init32().
>
> > My question is, if I were to revert to simple 32 bit read cycles
>
> Yes, I always test with single-word reads first, with the 32-bit block transfer second and try the 64-bit
> block transfer last.
>
> Sometimes there are unrelated problems (with the VME modules, VME bus, etc, or
> with bugs in the frontend, etc) and this approach helps to identify the source
> of trouble.
>
> > and use
> > the fevme.cxx template's method of organizing data before sending it to the
> > buffer, what kind of deadtime should I expect? Am I wrong to assume that this
> > would result in deadtime at all? I'm using a CAEN V792n 16 channel QDC and the hit
> > frequency that I'm using to test is 20kHz.
>
> Yes, with asynchronous read using 64-bit block transfer, 20 kHz should be achievable.
>
> The old fevme frontend is based on the mfe.c framework and implementing
> async readout requires special contortions. The structure of the new TMFE C++ frontend
> class is supposed to make it easier, but I do not have an example TMFE based fevme yet.
>
> P.S. Without using block transfer, your max rate is limited to:
>
> 16 channels, 1 word per channel, plus 1 header and 1 footer = 18 words (by luck, 64-bit aligned for
> correct BLT64 block read).
>
> using VME single-word read at 1 us per transfer, 18 us per event = 55 kHz repetition rate.
>
> (you do not say if you have any other VME modules you have to read)
>
Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just
stick to 1 word transfers.
The way that transfers are done in the fevme.cxx requires iterating through 16-word arrays a number of times (3
times, I believe, if you include the iterations taking place in v792_EventRead()). Does that not pose a
significant deadtime concern?
> K.O.
Thanks again for taking the time to help me out!
Cheers.
Isaac |
16 Dec 2020, Konstantin Olchanski, Forum, Issues building banks.
|
> > > I'm currently trying to build events through doing block transfers.
> >
> > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you
> > use. The restrictions on data alignment come from the VME master.
> > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > I think there is no restriction for USB-VME bridges and similar.
> >
> > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)?
> > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses
> > single-word reads instead of block transfer)?
>
> I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I
> read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The
> VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for
> the confusing question (Can you tell I'm an intern?).
>
Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
at maximum speed. This is because you must have two activities happening at the same time:
(1) tell the VME bridge to read data,
(2) package this data into midas banks and events and write it to the MIDAS event buffer.
If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
and unless (2) takes 0 seconds (it does not) you will have a slowdown.
So for maximum data rate, I prefer to have 3 threads:
thread 1: run the VME transfers, store data in a circular buffer (today it would be std::deque<std::vector<char>>)
thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer
(std::deque<EVENT_HEADER*>).
thread 3: write data to midas event buffer (call bm_send_event(), etc)
This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
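In outline, the hand-off between threads looks like this (a sketch only, with invented names; real code would use condition variables and shutdown handling instead of the busy loop):

#include <deque>
#include <vector>
#include <mutex>

static std::mutex gRawLock;
static std::deque<std::vector<char>> gRawQueue; // filled by thread 1

void vme_reader_thread() // thread 1: VME transfers only
{
   while (true) {
      std::vector<char> buf(64*1024);
      // ... run the VME block transfer into buf.data(),
      //     then buf.resize() to the actual transfer length ...
      std::lock_guard<std::mutex> lock(gRawLock);
      gRawQueue.push_back(std::move(buf));
   }
}

void encoder_thread() // thread 2: midas bank/event encoding only
{
   while (true) {
      std::vector<char> buf;
      {
         std::lock_guard<std::mutex> lock(gRawLock);
         if (gRawQueue.empty())
            continue; // real code would wait on a condition variable
         buf = std::move(gRawQueue.front());
         gRawQueue.pop_front();
      }
      // ... encode buf into an EVENT_HEADER* plus banks, then push it
      //     to the std::deque<EVENT_HEADER*> read by thread 3,
      //     which calls bm_send_event() ...
   }
}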
>
> Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just
> stick to 1 word transfers.
>
I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:
V7865: DWORD read - CPU - PCI bus - tsi148 - VME
V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back,
"extract data from USB packet")
>
> The way that transfers are done in the fevme.cxx requires iterating through 16 word arrays a number of time (3
> times I believe if you include the iterations taking place in v792_EventRead()). Does that not pose a
> significant deadtime concern?
>
Hmm... I am not sure what fevme you refer to. I guess I can find a version of fevme.cxx where data is read at
maximum VME speed if you want it.
K.O. |
16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks.
|
> > > > I'm currently trying to build events through doing block transfers.
> > >
> > > I am confused by your question. I assume you read a CAEN V792 ADC, but I do not know what VME master you
> > > use. The restrictions on data alignment come from the VME master.
> > > I am mostly familiar with restrictions of UniverseII and tsi148 PCI-VME bridges.
> > > I think there is no restriction for USB-VME bridges and similar.
> > >
> > > Anyhow. Which block transfer do you use? 32-bit block transfer (BLT32)? 64-bit block transfer (MBLT64)?
> > > (no 128-bit 2eVME/2eSST transfers from the V792). Maybe the "simulated block transfer" (DMA engine uses
> > > single-word reads instead of block transfer)?
> >
> > I read a single CAEN V792n QDC, 18 words, and a single CAEN V1190 TDC, 2 channels so 8 words. When I poll, I
> > read on every poll_event() and read whatever data is in whatever module (TDC_dataready || QDC_dataready). The
> > VME master that I'm using to talk to the modules is a CAEN V1718. I am trying to read data by BLT32. Sorry for
> > the confusing question (Can you tell I'm an intern?).
> >
>
> Ok, I see. Using the normal mfe.c structure, you will not be able to read the VME modules
> at maximum speed. This is because you must have two activities happening at the same time:
>
I am using the mfe.cxx backend thread; I'm guessing that this is the file you are referring to.
> (1) tell the VME bridge to read data,
> (2) package this data into midas banks and events and write it to the MIDAS event buffer.
>
> If you do these tasks sequentially, obviously the VME bus will be idle during step (2),
> and unless (2) takes 0 seconds (it does not) you will have a slowdown.
>
I see.
> So for maximum data rate, I prefer to have 3 threads:
>
> thread 1: run the VME transfers, store data in a circular buffer (today it would be std::deque<std::vector<char>>)
> thread 2: encode the data into midas banks and midas events, store completed events in a circular buffer
> (std::deque<EVENT_HEADER*>).
> thread 3: write data to midas event buffer (call bm_send_event(), etc)
>
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
Yes, it seems like a bit of work.
> >
> > Okay so transferring 18 + 6 words should give me close to 40kHz repetition rate. That's good news. I will just
> > stick to 1 word transfers.
> >
>
> I do not know the timing of CAEN V1718 single-word transfers. It may be significantly longer than 1 us:
>
> V7865: DWORD read - CPU - PCI bus - tsi148 - VME
> V1718: encode request as USB packet - CPU - PCI bus - USB hub - USB bus - USB asic - FPGA - VME (on the way back,
> "extract data from USB packet")
I found the following information in the CAEN V1718 manual:
"Transfer Rate = ~30MByte/s. Transfer rate supported in MBLT read cycles (block size = 32 kb), using a PC host with
Windows XP or Linux and High Speed USB"
I'm guessing the sentence simply means that the rate increases with multiplexed block transfers. If the transfer rate
is 30 MBytes/s, I should be able to transfer 7,500,000 words per second (30 MB/s divided by 4 bytes per 32-bit word).
>
> >
> > The way that transfers are done in the fevme.cxx requires iterating through 16-word arrays a number of times (3
> > times, I believe, if you include the iterations taking place in v792_EventRead()). Does that not pose a
> > significant deadtime concern?
> >
>
> Hmm... I am not sure what fevme you refer to. I guess I can find a version of fevme.cxx where data is read at
> maximum VME speed if you want it.
This is the VME C++ frontend example in the directory /midas/examples/Triumf/c++/
If you can find a faster version of this code I would definitely like to check it out!
>
> K.O.
Thanks again.
Isaac |
16 Dec 2020, Stefan Ritt, Forum, Issues building banks.
|
> This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
Actually that's not true. Just look at
midas/examples/mtfe/mtfe.c
this is an example for a frontend with equipment with the EQ_USER flag, which easily allows you to run a separate
thread (or more) for event collection and processing. Of course it's all old-fashioned C style (code is from 2007), but it
works.
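In outline, the pattern looks roughly like this (paraphrased from memory, not a verbatim excerpt; see midas/examples/mtfe/mtfe.c for the real code):

/* in frontend_init(): create a ring buffer and start the readout thread */
create_event_rb(0);
ss_thread_create(trigger_thread, NULL);

/* in the thread: get a write pointer, fill an event, commit it */
INT trigger_thread(void *param)
{
   EVENT_HEADER *pevent;
   WORD *pdata;
   int rbh = get_event_rbh(0);
   while (is_readout_thread_enabled()) {
      if (rb_get_wp(rbh, (void **)&pevent, 10) != DB_SUCCESS)
         continue;
      bm_compose_event_threadsafe(pevent, 1, 0, 0, &equipment[0].serial_number);
      pdata = (WORD *)(pevent + 1);
      bk_init32(pevent);
      bk_create(pevent, "TRG0", TID_WORD, (void **)&pdata);
      /* ... fill pdata and advance it past the data ... */
      bk_close(pevent, pdata);
      pevent->data_size = bk_size(pevent);
      rb_increment_wp(rbh, sizeof(EVENT_HEADER) + pevent->data_size);
   }
   return SUCCESS;
}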
Stefan |
16 Dec 2020, Isaac Labrie Boulay, Forum, Issues building banks.
|
> > This is very hard to do using the mfe.c frontend. (the main reason I wrote the TMFE C++ frontend class).
>
> Actually that's not true. Just look at
>
> midas/examples/mtfe/mtfe.c
>
> this is an example for a frontend with equipment with the EQ_USER flag, which easily allows you to run a separate
> thread (or more) for event collection and processing. Of course it's all old-fashioned C style (code is from 2007), but it
> works.
>
> Stefan
Thank you sir, I'll give it a look.
Cheers
Isaac |
14 Nov 2024, Mann Gandhi, Suggestion, Issue with creating banks
|
Hello, I am a coop student working at SNOLAB. I am currently setting up a frontend
program to collect data for an experiment. I am currently having an issue with my bank
not being initialized with the correct name. I will attach an image of the error and
a code snippet for clarity. This is a multi-threaded program using ring buffers. The
first thread is only responsible for data collection of ADC values from the Red
Pitaya (FPGA) and the second thread does a simple derivative calculation. The
frontend makes use of a TCP connection to stream data from the Red Pitaya.
Here is the code snippet. This is the only place in the frontend code where I
initialize and create a bank to store the ADC values from the Red Pitaya.
void* data_acquisition_thread(void* param)
{
   printf("Data acquisition thread started\n");
   // Obtain ring buffer for inter-thread data exchange
   EVENT_HEADER *pevent;
   WORD *pdata;
   int status;
   // Set a timeout for the recv function to prevent indefinite blocking
   struct timeval timeout;
   timeout.tv_sec = 10; // seconds
   timeout.tv_usec = 0; // 0 microseconds
   setsockopt(stream_sockfd, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout, sizeof(timeout));
   while (is_readout_thread_enabled())
   {
      if (!readout_enabled())
      {
         usleep(10); // do not produce events when run is stopped
         continue;
      }
      // Acquire a write pointer in the ring buffer
      int status;
      do {
         status = rb_get_wp(rbh, (void **) &pevent, 0);
         if (status == DB_TIMEOUT)
         {
            usleep(5);
            if (!is_readout_thread_enabled()) break;
         }
      } while (status != DB_SUCCESS);
      if (status != DB_SUCCESS) continue;
      // Lock mutex before accessing shared resources
      pthread_mutex_lock(&lock);
      // Buffer for incoming data
      //int16_t temp_buffer[4096] = {0};
      bm_compose_event_threadsafe(pevent, 1, 0, 0, &equipment[0].serial_number);
      pdata = (WORD *)(pevent + 1); // Set pdata to point to the data section of the event
      // Initialize the bank and read data directly into the bank
      bk_init32(pevent);
      bk_create(pevent, "RPD0", TID_WORD, (void **)&pdata);
      int bytes_read = recv(stream_sockfd, pdata, max_event_size * sizeof(WORD), 0);
      printf("Data received: %d bytes\n", bytes_read);
      if (bytes_read <= 0)
      {
         if (bytes_read == 0)
         {
            printf("Red Pitaya disconnected\n");
            pthread_mutex_unlock(&lock);
            break;
         }
         else if (errno == EWOULDBLOCK || errno == EAGAIN)
         {
            printf("Receive timeout\n");
            pthread_mutex_unlock(&lock);
            continue;
         }
         else
         {
            printf("Error reading from the Red Pitaya: %s\n", strerror(errno));
            pthread_mutex_unlock(&lock);
            continue;
         }
      }
      // Adjust data pointers after reading
      pdata += bytes_read / sizeof(WORD);
      bk_close(pevent, pdata);
      pevent->data_size = bk_size(pevent);
      // Unlock mutex after writing to the buffer
      pthread_mutex_unlock(&lock);
      // Send event to ring buffer
      rb_increment_wp(rbh, sizeof(EVENT_HEADER) + pevent->data_size);
   }
   pthread_mutex_unlock(&lock);
   return NULL;
} |
14 Nov 2024, Stefan Ritt, Suggestion, Issue with creating banks
|
All I can see is that your bank header gets corrupted along the way. The funny character reported by
cm_write_event_to_odb indicates that your original name "RPD0" got overwritten somewhere, but I could not spot any
mistake in your code.
I would play around: change max_event_size, produce dummy data of size N instead of the recv() and so on. Also monitor
the bank header to see when it gets overwritten. I guess you only write from one thread, so that should be safe, right?
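For example, the recv() call could temporarily be replaced by a stub like this (N is an arbitrary test size):

/* debug stub instead of recv(): pretend we received N words of ramp data */
const int N = 256;
for (int i = 0; i < N; i++)
   pdata[i] = (WORD) i;
int bytes_read = N * sizeof(WORD);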
Best,
Stefan |
14 Nov 2024, Mann Gandhi, Suggestion, Issue with creating banks
|
> All I can see is that your bank header gets corrupted along the way. The funny character reported by
> cm_write_event_to_odb indicates that your original name "RPD0" got overwritten somewhere, but I could not spot any
> mistake in your code.
>
> I would play around: change max_event_size, produce dummy data of size N instead of the recv() and so on. Also monitor
> the bank header to see when it gets overwritten. I guess you only write from one thread, so that should be safe, right?
>
> Best,
> Stefan
Hello Stefan,
Thank you for the advice. On inspection, I noticed that my event size (when I print bk_size(pevent)) is around 1.4 billion,
which seems absurd, so I am not sure why this is the case either. In addition, is mdump the way to monitor the bank header?
I just recently started using MIDAS so I am a little bit confused. I can attach a link to the github repository where I am
currently working on this for further clarity since I am sure there is an issue in my code somewhere.
(https://github.com/mgandhi-1/red-pitaya-frontend/blob/10-issue-with-bank-creation-neeed-to-figure-out-why-banks-are-not-being-created-correctly/frontend.cxx)
I appreciate the help. Thank you once more.
Best,
Mann |
15 Nov 2024, Konstantin Olchanski, Suggestion, Issue with creating banks
|
> Hello, I am a coop student working at SNOLAB.
> void* data_acquisition_thread(void* param)
> {
> EVENT_HEADER *pevent;
> if (complicated) {
> status = rb_get_wp(rbh, (void **) &pevent, 0);
> }
> bm_compose_event_threadsafe(pevent, 1, 0, 0, &equipment[0].serial_number);
> }
this code is buggy. it should read "EVENT_HEADER *pevent = NULL;" to avoid an uninitialized variable
and bm_compose_event() & co should be inside an "if (pevent != NULL)" block, unless you can absolutely
prove that rb_get_wp() is always called and pevent is never NULL. (even if somebody changes the code later).
if you build your code with "gcc -O2 -g -Wall -Wuninitialized" it would probably warn you about use of uninitialized
"pevent".
P.S. for building multithreaded frontends, you are much better off starting from the c++ tmfe frontend framework,
a good starting point is to study tmfe_example_everything.cxx.
K.O. |
09 Nov 2021, Francesco Renga, Forum, Issue in data writing speed
|
Dear all,
I've a frontend writing quite a big bunch of data into a MIDAS bank (16-bit output from a 4MP photo camera).
I'm experiencing a writing speed problem that I don't understand. When the photo camera is triggered at a low rate (< 2 Hz),
writing into the bank takes a very short time for each event (indeed, what I measure is the time to write and go back
into the polling function). If I increase the rate to 4 Hz, I see that writing the first two events takes a short time,
but the third event takes a very long time (hundreds of ms), then again the fourth and fifth events are very fast, and
the sixth is very slow. If I further increase the rate, every other event is very slow. The problem is not in the readout
of the camera, because if I just remove the bank writing and keep the camera readout, the problem disappears. Can you
explain this behavior? Is there any way to improve it?
Below you can also find the code I use to copy the data from the camera buffer into the bank. If you have any suggestion
to improve it, it would be really appreciated.
Thank you very much,
Francesco
const char* pSrc = (const char*)bufframe.buf;
for(int y = 0; y < bufframe.height; y++ ){
//Copy one row
const unsigned short* pDst = (const unsigned short*)pSrc;
//go through the row
for(int x = 0; x < bufframe.width; x++ ){
WORD tmpData = *pDst++;
*pdata++ = tmpData;
}
pSrc += bufframe.rowbytes;
}
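For reference, an equivalent row-wise version using memcpy (untested sketch) would be:

const char* pSrc = (const char*)bufframe.buf;
for(int y = 0; y < bufframe.height; y++ ){
   // copy one full row at once; rowbytes may exceed width*2 due to padding
   memcpy(pdata, pSrc, bufframe.width * sizeof(unsigned short));
   pdata += bufframe.width;
   pSrc += bufframe.rowbytes;
}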
|
10 Nov 2021, Stefan Ritt, Forum, Issue in data writing speed
|
Midas uses various buffers (in the frontend, at the server side before the SYSTEM buffer, the SYSTEM buffer itself, on the
logger before writing to disk). All these buffers are in RAM and have fast access, so you can fill them pretty quickly. When
they are full, the logger writes to disk, which is slower. So I believe at 2 Hz your disk can keep up with your writing
speed, but at 4 Hz (4 MPixel x 2 bytes x 4 Hz = 32 MB/sec) your disk starts slowing down the writing process. Now 32 MB/s is pretty slow for
a disk, so I presume you have turned compression on, which takes quite some time.
To verify this, disable logging. Then disable compression and keep logging. Then report back here again.
> Dear all,
> I've a frontend writing quite a big bunch of data into a MIDAS bank (16-bit output from a 4MP photo camera).
> I'm experiencing a writing speed problem that I don't understand. When the photo camera is triggered at a low rate (< 2 Hz),
> writing into the bank takes a very short time for each event (indeed, what I measure is the time to write and go back
> into the polling function). If I increase the rate to 4 Hz, I see that writing the first two events takes a short time,
> but the third event takes a very long time (hundreds of ms), then again the fourth and fifth events are very fast, and
> the sixth is very slow. If I further increase the rate, every other event is very slow. The problem is not in the readout
> of the camera, because if I just remove the bank writing and keep the camera readout, the problem disappears. Can you
> explain this behavior? Is there any way to improve it?
>
> Below you can also find the code I use to copy the data from the camera buffer into the bank. If you have any suggestion
> to improve it, it would be really appreciated.
>
> Thank you very much,
> Francesco
>
>
>
> const char* pSrc = (const char*)bufframe.buf;
>
> for(int y = 0; y < bufframe.height; y++ ){
>
> //Copy one row
> const unsigned short* pDst = (const unsigned short*)pSrc;
>
> //go through the row
> for(int x = 0; x < bufframe.width; x++ ){
>
> WORD tmpData = *pDst++;
>
> *pdata++ = tmpData;
>
> }
>
> pSrc += bufframe.rowbytes;
>
> }
> |
26 Jan 2022, Konstantin Olchanski, Forum, Issue in data writing speed
|
Francesco, when you say "writing an event is slow", do you mean it in the frontend
or in the output data file?
Stefan is quite right about the data file: it can take seconds between generating
an event in the frontend and seeing it written to the data file. (if compression
buffers are too big, an event can sit there forever, until pushed out by next events
or by run stop).
But maybe you see this on the frontend side.
What you are looking at is "real time" performance of the frontend and of the linux kernel.
The mfe.c frontend has many problems with real time performance: it can stall and take a long
time between calls to read_event(), for many reasons.
There are ways around that, but it is simpler to switch to the tmfe c++ frontend
that was designed for good real time performance.
In the tmfe frontend, if you use the polled equipment and enable the poll thread,
your frontend will be limited only by the linux kernel real time performance (i.e.
on a single-core CPU, other programs will delay execution of your frontend
and you will see it as long delays (usec, millisec) between calls to your read_event()).
Next limit to real time performance (common to mfe.c and tmfe frontends) is the writing
of event data to the midas shared event buffer. One has to lock the shared memory semaphore
and this has to wait until other users of the event buffer finish their reading
or writing and unlock it. An arbitrary amount of time (usec, millisec, sec) can pass.
(there is also problems with "fairness" of the linux semaphores, a different story, again).
Making things more interesting, midas event buffers implement a write cache (default size 100 kbytes),
events smaller than the cache are quickly accumulated (no need to lock the shared memory semaphore),
then flushed to shared memory when the cache is full. This is done to reduce the number
of shared memory semaphore locks per event, in the case of very high rate of very small events.
Solution to all this is to use 2 threads: read the data from hardware in one thread and write the data to midas
in a different thread. Between the threads would be an event fifo (circular buffer in mfe.c,
std::deque<EVENT> in tmfe c++ frontends).
For remote connected frontends, things are a bit different. Event data is written directly
into the TCP socket and as long as socket buffers are big enough, there is no real-time delays,
unless SYSTEM buffer is very congested and mserver does not read the TCP socket quickly enough.
So depending on event size, data rate and tcp socket buffer size, the extra 2nd thread
may not be necessary and poll thread real time performance may be good enough.
I hope this clarifies the situation somewhat.
K.O.
> Dear all,
> I've a frontend writing quite a big bunch of data into a MIDAS bank (16-bit output from a 4MP photo camera).
> I'm experiencing a writing speed problem that I don't understand. When the photo camera is triggered at a low rate (< 2 Hz),
> writing into the bank takes a very short time for each event (indeed, what I measure is the time to write and go back
> into the polling function). If I increase the rate to 4 Hz, I see that writing the first two events takes a short time,
> but the third event takes a very long time (hundreds of ms), then again the fourth and fifth events are very fast, and
> the sixth is very slow. If I further increase the rate, every other event is very slow. The problem is not in the readout
> of the camera, because if I just remove the bank writing and keep the camera readout, the problem disappears. Can you
> explain this behavior? Is there any way to improve it?
>
> Below you can also find the code I use to copy the data from the camera buffer into the bank. If you have any suggestion
> to improve it, it would be really appreciated.
>
> Thank you very much,
> Francesco
>
>
>
> const char* pSrc = (const char*)bufframe.buf;
>
> for(int y = 0; y < bufframe.height; y++ ){
>
> //Copy one row
> const unsigned short* pDst = (const unsigned short*)pSrc;
>
> //go through the row
> for(int x = 0; x < bufframe.width; x++ ){
>
> WORD tmpData = *pDst++;
>
> *pdata++ = tmpData;
>
> }
>
> pSrc += bufframe.rowbytes;
>
> }
> |
26 Jan 2022, Konstantin Olchanski, Forum, Issue in data writing speed
|
> Francesco, when you say "writing an event is slow", do you mean it in the frontend
> or in the output data file?
Another explanation just occurred to me. We do not know your event size and we do not
know the size of your SYSTEM buffer. But if you have an unlucky combination,
this can happen:
Consider event size is 6 Mbytes, buffer size is 8 Mbytes, enough space for only 1 event.
First event is written quickly (buffer is empty).
Second event will be delayed, there is not enough free space in the buffer, we have
to wait for mlogger to finish reading the first event.
Same thing happens if event size is 3 Mbytes, the first 2 events will write quickly,
writing the 3rd event will be delayed until mlogger does its thing.
The mlogger reads the SYSTEM buffer "fast" and "quickly", but it can be delayed
for a number of reasons, i.e. handling a history event, a delay writing to disk,
a delay writing to network connected storage, etc.
In general, it is best to size the SYSTEM buffer to hold about 1 second worth
of data (of average size, average rate). If your event size is 4 Mbytes, and you
record them at 10/sec, SYSTEM buffer should be at least 40 Mbytes big. (this is
set in ODB /Experiment/Buffer Sizes). (MIDAS event buffer size is limited to 2 GBytes).
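For example, to set a 40 Mbyte SYSTEM buffer (odbedit syntax; verify the exact key path in your ODB):

odbedit -c 'set "/Experiment/Buffer Sizes/SYSTEM" 41943040'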
K.O. |
24 Nov 2020, Isaac Labrie Boulay, Forum, Invalid name "Analyzer/Tests"
|
Hi everyone,
I've recently taken the analyzer template from $MIDASSYS/examples/experiment and
modified it to be able to use Roody on a very simple frontend setup. The
analyzer works fine and I am able to view the online histograms but my console
prints out this error:
[Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name
"/Analyzer/Tests/Always true/Rate [Hz]" passed to db_create_key_wlocked: should
not contain "["
[Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name
"/Analyzer/Tests/low_sum/Rate [Hz]" passed to db_create_key_wlocked: should not
contain "["
[Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name
"/Analyzer/Tests/high_sum/Rate [Hz]" passed to db_create_key_wlocked: should not
contain "["
The error keeps getting printed even after stopping the run.
I do have these 3 keys under Analyzer/Tests/ in my ODB but I do not know where
they come from. Any suggestions on what the root of the issue is?
Thanks for the help!
Isaac |
27 Nov 2020, Konstantin Olchanski, Forum, Invalid name "Analyzer/Tests"
|
> I've recently taken the analyzer template from $MIDASSYS/examples/experiment and
> modified it to be able to use Roody on a very simple frontend setup.
Hmm... the old midas analyzer framework is very old and I do not recommend
using it for new experiments.
A newer analyzer system is ROOTANA and an even newer is the "m" analyzer (manalyzer). These
analyzers progressively introduce improved c++-style programming environments amongst other
improvements. If starting from scratch, I recommend that you use manalyzer (currently from the rootana
git repository).
> The analyzer works fine and I am able to view the online histograms but my console
> prints out this error:
>
> [Analyzer,ERROR] [odb.cxx:919:db_validate_name,ERROR] Invalid name
> "/Analyzer/Tests/Always true/Rate [Hz]" passed to db_create_key_wlocked: should
> not contain "["
The error says what it means. "[" is not a permitted character in odb names. It is used
by many odb functions to access array elements.
The midas analyzer example should be updated to change "[Hz]" to "(Hz)" or something similar.
K.O. |
27 Nov 2020, Konstantin Olchanski, Forum, Invalid name "Analyzer/Tests"
|
https://bitbucket.org/tmidas/midas/issues/298/invalid-odb-names-in-example-midas
K.O. |
07 Dec 2020, Isaac Labrie Boulay, Forum, Invalid name "Analyzer/Tests"
|
> https://bitbucket.org/tmidas/midas/issues/298/invalid-odb-names-in-example-midas
> K.O.
Hi K.O.
Ok I see, I will use the most up to date analyzer.
Thanks a ton for your help.
Isaac |
|