= DarkSide =
The DarkSide-20k data acquisition is designed to operate in a triggerless mode, meaning that every photodetection unit (PDU) produces a data flow independently from its neighbours and no global decision is invoked to request data. The experiment is composed of several thousand channels that need to be processed together in order to apply data filtering and/or data reduction before the final data recording to a storage device. The analysis of such a large number of channels requires substantial computer processing power. The Time Slice concept is to divide the acquisition time into segments and submit them individually to a dedicated processor. Based on the duration of the time segment, the analysis performance, and the processing power of the Time Slice Processor (TSP), an adequate number of TSPs will be able to handle the continuous data stream from all the detectors.
In this scheme, the data-taking period is segmented across all the acquisition modules. The collected data for each segment is then presented to a TSP for processing. Once the analysis is complete, the TSP informs an independent application of its availability to process a new Time Slice.
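The cycle seen from a single TSP can be summarised with a short sketch of the receive–analyse–notify loop described above; the type, function names, and transport stubs below are hypothetical placeholders, not actual DarkSide-20k code.

<syntaxhighlight lang="cpp">
#include <cstdint>
#include <vector>

// Hypothetical sketch of the TSP side of the Time Slice cycle: receive one
// assembled slice, analyse it, then report itself idle so that a new slice
// can be assigned to it.
struct TimeSlice {
    uint64_t              number;  // Time Slice number
    std::vector<uint16_t> data;    // assembled waveform data for this slice
};

// Stubs standing in for the real transport and analysis layers.
TimeSlice receive_slice()           { return {}; }  // blocks until FEP data arrives
void      analyse(const TimeSlice&) {}              // filtering / reconstruction
void      notify_idle()             {}              // message to the pool manager

void tsp_main_loop() {
    for (;;) {
        TimeSlice slice = receive_slice();
        analyse(slice);
        notify_idle();
    }
}
</syntaxhighlight>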
Physics events spanning two consecutive Time Slices must be handled properly, since a TSP has no access to the previous slice. This is addressed by duplicating a fraction of each time slice into the data sent to the next TSP. This overlap ensures that the extended segment covers possible boundary events; it corresponds in our case to the maximum electron drift time within the TPC (∼5 ms). The default time slice duration is 1 second, meaning that about 0.5% of the analyzed events are duplicated in that overlap window.
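The overlap bookkeeping can be illustrated with a small sketch that assigns an event time stamp to its Time Slice and duplicates it into the following slice when it falls in the last ~5 ms. The 1 s slice length and 5 ms overlap are the values quoted above (giving the 0.5% duplication); the function name, time units, and the exact direction of the duplication are illustrative assumptions.

<syntaxhighlight lang="cpp">
#include <cstdint>
#include <vector>

// Assumed units: nanoseconds since the start of the run.
// With a 1 s slice and a 5 ms overlap, 5 ms / 1000 ms = 0.5% of the data
// is duplicated, matching the figure quoted in the text.
std::vector<uint64_t> slices_for_event(uint64_t t_ns,
                                       uint64_t slice_ns   = 1000000000ULL, // 1 s slice
                                       uint64_t overlap_ns = 5000000ULL)    // ~5 ms drift time
{
    const uint64_t slice = t_ns / slice_ns;
    std::vector<uint64_t> out{slice};
    // An event in the last 'overlap_ns' of its slice is also copied into the
    // data handed to the TSP processing the next slice, so that an event
    // crossing the boundary is fully contained in at least one segment.
    if ((t_ns % slice_ns) >= slice_ns - overlap_ns)
        out.push_back(slice + 1);
    return out;
}
</syntaxhighlight>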
Fig. 3 shows the run time segmentation (abscissa). Each segment is processed by a different TSP. The total number
of TSPs must be greater than the average time (in seconds) it takes to process 1 second of data.
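As a concrete reading of that sizing rule, a minimal sketch follows; the 7 s analysis time is an arbitrary illustrative number, not a DarkSide-20k measurement.

<syntaxhighlight lang="cpp">
#include <cmath>
#include <cstdio>

int main() {
    // Illustrative assumption: one TSP needs this many seconds of wall-clock
    // time to analyse 1 s of data.
    const double analysis_time_per_second_of_data = 7.0;

    // One new 1 s slice arrives every second, so to keep up the pool must
    // contain more TSPs than the per-slice analysis time (in seconds),
    // plus some margin for transfer and scheduling overhead.
    const int n_tsp = static_cast<int>(std::ceil(analysis_time_per_second_of_data)) + 1;
    std::printf("minimum TSP pool size: %d\n", n_tsp);
    return 0;
}
</syntaxhighlight>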
The large number of PDMs implies that multiple digitizers are at work. The time segmentation mechanism therefore requires the transmission of a Time Slice Marker (TSM) to all the digitizers in order to ensure proper segment assembly based on the Time Slice number. The readout of the individual waveform digitizers (WFDs) is performed by a dedicated Front-End Processor (FEP). For processing power reasons similar to those of the TSPs, each FEP will handle a subset of the WFDs (2), meaning that in our case 24 FEPs will collect all the digitizer data. Each FEP has to read out, filter, and assemble the data segments from its WFDs covering the predefined time slice duration.
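One possible shape of the per-FEP bookkeeping is sketched below: fragments from the FEP's WFDs are grouped by the Time Slice number derived from the TSM, and a slice is handed on once every WFD served by this FEP has contributed. The types, names, and completeness criterion are assumptions made for illustration only.

<syntaxhighlight lang="cpp">
#include <cstdint>
#include <map>
#include <vector>

// One filtered data fragment read from a WFD.
struct Fragment {
    uint64_t              slice;   // Time Slice number from the TSM
    int                   wfd;     // which of the FEP's WFDs produced it
    std::vector<uint16_t> samples; // filtered waveform data
};

class FepAssembler {
public:
    explicit FepAssembler(int n_wfd) : n_wfd_(n_wfd) {}

    // Add one fragment; returns true when the slice is complete, i.e. all
    // WFDs of this FEP have contributed, and it is ready to send to a TSP.
    bool add(const Fragment& f) {
        auto& frags = pending_[f.slice];
        frags.push_back(f);
        return static_cast<int>(frags.size()) == n_wfd_;
    }

    // Hand the assembled slice over (e.g. to the network layer) and drop it.
    std::vector<Fragment> take(uint64_t slice_number) {
        auto out = std::move(pending_[slice_number]);
        pending_.erase(slice_number);
        return out;
    }

private:
    int n_wfd_;                                         // WFDs served by this FEP
    std::map<uint64_t, std::vector<Fragment>> pending_; // fragments keyed by slice
};
</syntaxhighlight>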
The management of the transmission of the data fragments to the individual TSPs is left to the Pool Manager (PM) application. Its role is to receive an "idle" notification from any TSP (once its previous time slice analysis has been completed) and to broadcast to all the FEPs the address of the next available idle TSP as the destination for the upcoming segment.
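The Pool Manager logic described above can be sketched as a simple idle queue plus a broadcast; the class and method names, the string addresses, and the transport stub are all hypothetical.

<syntaxhighlight lang="cpp">
#include <cstdint>
#include <queue>
#include <string>
#include <utility>
#include <vector>

class PoolManager {
public:
    explicit PoolManager(std::vector<std::string> fep_addresses)
        : feps_(std::move(fep_addresses)) {}

    // Called when a TSP announces that it has finished its previous slice.
    void on_idle(const std::string& tsp_address) { idle_.push(tsp_address); }

    // Called once per Time Slice: pick the next idle TSP and tell every FEP
    // where to send the fragments of the upcoming slice. Returns false if
    // no TSP is currently available (back-pressure on the FEPs).
    bool assign_next_slice(uint64_t slice_number) {
        if (idle_.empty()) return false;
        const std::string dest = idle_.front();
        idle_.pop();
        for (const auto& fep : feps_)
            send_destination(fep, slice_number, dest);
        return true;
    }

private:
    // Placeholder for the real transport (network message to one FEP).
    void send_destination(const std::string& /*fep*/, uint64_t /*slice*/,
                          const std::string& /*tsp*/) {}

    std::queue<std::string>  idle_;  // TSPs ready for a new slice
    std::vector<std::string> feps_;  // addresses of all FEPs
};
</syntaxhighlight>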


= Links =
