ID | Date | Author | Topic | Subject |
758 | 10 May 2011 | Jianglai Liu | Forum | simple example frontend for V1720 |
Hi,
Who has a good example of a frontend program using CAEN V1718 VME-USB bridge and
V1720 FADC? I am trying to set up the DAQ for such a simple system.
I put together a frontend which talks to the VME. However it gets stuck at
"Calibrating" in initialize_equipment().
I'd appreciate some help!
Thanks,
Jianglai |
762 | 24 May 2011 | Jianglai Liu | Forum | simple example frontend for V1720 |
Thanks all for the kind help. This pointed me in the right direction. I was now able to make both v1720.c and my MIDAS frontend (thanks to
Jimmy's example) talk to the V1720, and to read out the waveform bank.
However, the readout values did not seem quite right. I fed in a PMT-like pulse of about 0.1 V and 50 ns wide, with an external trigger just in time.
However, the readout from both the stand-alone v1720.c code and my MIDAS frontend looked like flat noise.
I tried to play with the post-trigger value as well as the DAC setting of the V1720. Neither seemed to help.
BTW, I tested the functionality of my V1720 board with the CAEN Windows software (CAENScope and WaveDump). It worked just fine.
Any suggestions? Attached is my modified v1720.c code.
Pierre-Andre Amaudruz wrote: |
Jianglai Liu wrote: | Hi,
Who has a good example of a frontend program using CAEN V1718 VME-USB bridge and
V1720 FADC? I am trying to set up the DAQ for such a simple system.
I put together a frontend which talks to the VME. However it gets stuck at
"Calibrating" in initialize_equipment().
I'd appreciate some help!
Thanks,
Jianglai |
Under drivers/vme you can find the code for v1720.c (VME access) and ov1720.c
(A2818/A3818 PCIe optical link access). For testing the hardware, we compile and link this code
with MAIN_ENABLE to confirm its functionality. You may want to do the same for your USB bridge. Once this
is under control, the Midas frontend implementation using the same driver shouldn't give you trouble. |
|
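For reference, the MAIN_ENABLE test mentioned above amounts to linking the driver below with a small standalone main(). A minimal sketch of such a test program is shown here; it is not the exact main() shipped in drivers/vme/v1720.c, and the base address as well as the mvme_open()/mvme_close() calls and the MVME_SUCCESS check are assumptions taken from the mvmestd.h layer, which has to match the local VME (or V1718 USB) interface library.

/* Hypothetical standalone test, placed at the end of the driver file and
   compiled with -DMAIN_ENABLE. The functions v1720_Reset()/v1720_Status()
   come from the listing below; mvme_open()/mvme_close() are assumed from
   mvmestd.h and the base address is only an example. */
#ifdef MAIN_ENABLE
int main()
{
   MVME_INTERFACE *mvme;
   uint32_t base = 0x32100000;        /* example A32 base address of the V1720 */

   int status = mvme_open(&mvme, 0);  /* open the first VME interface */
   if (status != MVME_SUCCESS) {
      printf("cannot open the VME interface, status %d\n", status);
      return 1;
   }

   v1720_Reset(mvme, base);           /* software reset of the module */
   v1720_Status(mvme, base);          /* print board ID, board info, acquisition status */

   mvme_close(mvme);
   return 0;
}
#endif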
Attachment 1: v1720.c
|
/*********************************************************************
Name: v1720.c
Created by: Pierre-A. Amaudruz / K.Olchanski
Contents: V1720 8 ch. 12bit 250Msps
$Id: v1720.c 4728 2010-05-12 05:34:44Z svn $
*********************************************************************/
#include <stdio.h>
#include <stdint.h>
#include <string.h>
//#include "midas.h"
#include "v1720drv.h"
#include "mvmestd.h"
// Buffer organization map for number of samples
uint32_t V1720_NSAMPLES_MODE[11] = { (1<<20), (1<<19), (1<<18), (1<<17), (1<<16), (1<<15)
,(1<<14), (1<<13), (1<<12), (1<<11), (1<<10)};
/*****************************************************************/
/*
Read V1720 register value
*/
static uint32_t regRead(MVME_INTERFACE *mvme, uint32_t base, int offset)
{
mvme_set_am(mvme, MVME_AM_A32);
mvme_set_dmode(mvme, MVME_DMODE_D32);
return mvme_read_value(mvme, base + offset);
}
/*****************************************************************/
/*
Write V1720 register value
*/
static void regWrite(MVME_INTERFACE *mvme, uint32_t base, int offset, uint32_t value)
{
mvme_set_am(mvme, MVME_AM_A32);
mvme_set_dmode(mvme, MVME_DMODE_D32);
mvme_write_value(mvme, base + offset, value);
}
/*****************************************************************/
uint32_t v1720_RegisterRead(MVME_INTERFACE *mvme, uint32_t base, int offset)
{
return regRead(mvme, base, offset);
}
/*****************************************************************/
void v1720_RegisterWrite(MVME_INTERFACE *mvme, uint32_t base, int offset, uint32_t value)
{
regWrite(mvme, base, offset, value);
}
/*****************************************************************/
void v1720_Reset(MVME_INTERFACE *mvme, uint32_t base)
{
regWrite(mvme, base, V1720_SW_RESET, 0);
}
/*****************************************************************/
void v1720_TrgCtl(MVME_INTERFACE *mvme, uint32_t base, uint32_t reg, uint32_t mask)
{
regWrite(mvme, base, reg, mask);
}
/*****************************************************************/
void v1720_ChannelCtl(MVME_INTERFACE *mvme, uint32_t base, uint32_t reg, uint32_t mask)
{
regWrite(mvme, base, reg, mask);
}
/*****************************************************************/
void v1720_ChannelSet(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel, uint32_t what, uint32_t that)
{
uint32_t reg, mask;
mask = 0x0FFF;                                   // default register width
if (what == V1720_CHANNEL_THRESHOLD)  mask = 0x0FFF;
if (what == V1720_CHANNEL_OUTHRESHOLD) mask = 0x0FFF;
if (what == V1720_CHANNEL_DAC)        mask = 0xFFFF;
reg = what | (channel << 8);
printf("base:0x%x reg:0x%x, value:0x%x\n", base, reg, that);
regWrite(mvme, base, reg, (that & mask));        // apply the mask matching the selected register
}
/*****************************************************************/
uint32_t v1720_ChannelGet(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel, uint32_t what)
{
uint32_t reg, mask;
if (what == V1720_CHANNEL_THRESHOLD) mask = 0x0FFF;
if (what == V1720_CHANNEL_OUTHRESHOLD) mask = 0x0FFF;
if (what == V1720_CHANNEL_DAC) mask = 0xFFFF;
reg = what | (channel << 8);
return regRead(mvme, base, reg);
}
/*****************************************************************/
void v1720_ChannelThresholdSet(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel, uint32_t threshold)
{
uint32_t reg;
reg = V1720_CHANNEL_THRESHOLD | (channel << 8);
printf("base:0x%x reg:0x%x, threshold:%x\n", base, reg, threshold);
regWrite(mvme, base, reg, (threshold & 0xFFF));
}
/*****************************************************************/
void v1720_ChannelOUThresholdSet(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel, uint32_t threshold)
{
uint32_t reg;
reg = V1720_CHANNEL_OUTHRESHOLD | (channel << 8);
printf("base:0x%x reg:0x%x, outhreshold:%x\n", base, reg, threshold);
regWrite(mvme, base, reg, (threshold & 0xFFF));
}
/*****************************************************************/
void v1720_ChannelDACSet(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel, uint32_t dac)
{
uint32_t reg;
reg = V1720_CHANNEL_DAC | (channel << 8);
printf("base:0x%x reg:0x%x, DAC:%x\n", base, reg, dac);
regWrite(mvme, base, reg, (dac & 0xFFFF));
}
/*****************************************************************/
int v1720_ChannelDACGet(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel, uint32_t *dac)
{
uint32_t reg;
int status;
reg = V1720_CHANNEL_DAC | (channel << 8);
*dac = regRead(mvme, base, reg);
reg = V1720_CHANNEL_STATUS | (channel << 8);
status = regRead(mvme, base, reg);
return status;
}
/*****************************************************************/
void v1720_Align64Set(MVME_INTERFACE *mvme, uint32_t base)
{
regWrite(mvme, base, V1720_VME_CONTROL, V1720_ALIGN64);
}
/*****************************************************************/
void v1720_AcqCtl(MVME_INTERFACE *mvme, uint32_t base, uint32_t operation)
{
uint32_t reg;
reg = regRead(mvme, base, V1720_ACQUISITION_CONTROL);
switch (operation) {
case V1720_RUN_START:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg | 0x4));
break;
case V1720_RUN_STOP:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg & ~(0x4)));
break;
case V1720_REGISTER_RUN_MODE:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg & ~(0x3)));
break;
case V1720_SIN_RUN_MODE:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg | 0x01));
break;
case V1720_SIN_GATE_RUN_MODE:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg | 0x02));
break;
case V1720_MULTI_BOARD_SYNC_MODE:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg | 0x03));
break;
case V1720_COUNT_ACCEPTED_TRIGGER:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg | 0x08));
break;
case V1720_COUNT_ALL_TRIGGER:
regWrite(mvme, base, V1720_ACQUISITION_CONTROL, (reg & ~(0x08)));
break;
default:
break;
}
}
/*****************************************************************/
void v1720_info(MVME_INTERFACE *mvme, uint32_t base, int *nchannels, uint32_t *n32word)
{
int i, chanmask;
// Evaluate the event size
// Number of samples per channels
*n32word = V1720_NSAMPLES_MODE[regRead(mvme, base, V1720_BUFFER_ORGANIZATION)];
// times the number of active channels
chanmask = 0xff & regRead(mvme, base, V1720_CHANNEL_EN_MASK);
*nchannels = 0;
for (i=0;i<8;i++) {
if (chanmask & (1<<i))
*nchannels += 1;
}
*n32word *= *nchannels;
*n32word /= 2; // 2 samples per 32bit word
*n32word += 4; // Headers
}
/*****************************************************************/
uint32_t v1720_BufferOccupancy(MVME_INTERFACE *mvme, uint32_t base, uint32_t channel)
{
uint32_t reg;
reg = V1720_BUFFER_OCCUPANCY + (channel<<16);
return regRead(mvme, base, reg);
}
/*****************************************************************/
uint32_t v1720_BufferFree(MVME_INTERFACE *mvme, uint32_t base, int nbuffer)
{
int mode;
mode = regRead(mvme, base, V1720_BUFFER_ORGANIZATION);
if (nbuffer <= (1<< mode) ) {
regWrite(mvme, base, V1720_BUFFER_FREE, nbuffer);
return mode;
} else
return mode;
}
/*****************************************************************/
uint32_t v1720_BufferFreeRead(MVME_INTERFACE *mvme, uint32_t base)
{
return regRead(mvme, base, V1720_BUFFER_FREE);
}
/*****************************************************************/
uint32_t v1720_DataRead(MVME_INTERFACE *mvme, uint32_t base, uint32_t *pdata, uint32_t n32w)
{
uint32_t i;
for (i=0;i<n32w;i++) {
*pdata = regRead(mvme, base, V1720_EVENT_READOUT_BUFFER);
if (*pdata != 0xffffffff)
pdata++;
else
break;
}
return i;
}
/********************************************************************/
/** v1720_DataBlockRead
Read N entries (32bit)
@param mvme vme structure
@param base base address
@param pdest Destination pointer
@return nentry
*/
uint32_t v1720_DataBlockRead(MVME_INTERFACE *mvme, uint32_t base, uint32_t *pdest, uint32_t *nentry)
{
int status;
mvme_set_am( mvme, MVME_AM_A32);
mvme_set_dmode( mvme, MVME_DMODE_D32);
//mvme_set_blt( mvme, MVME_BLT_MBLT64);
//mvme_set_blt( mvme, MVME_BLT_NONE);
mvme_set_blt(mvme, MVME_BLT_BLT32);
//mvme_set_blt( mvme, 0);
// Transfer in MBLT64 (8bytes), nentry is in 32bits(VF48)
// *nentry * 8 / 2
status = mvme_read(mvme, pdest, base+V1720_EVENT_READOUT_BUFFER, 4);
//printf("status = %d\n",status);
if (status != MVME_SUCCESS)
return 0;
return (*nentry);
}
/*****************************************************************/
void v1720_Status(MVME_INTERFACE *mvme, uint32_t base)
{
printf("================================================\n");
printf("V1720 at A32 0x%x\n", (int)base);
printf("Board ID : 0x%x\n", regRead(mvme, base, V1720_BOARD_ID));
printf("Board Info : 0x%x\n", regRead(mvme, base, V1720_BOARD_INFO));
printf("Acquisition status : 0x%8.8x\n", regRead(mvme, base, V1720_ACQUISITION_STATUS));
printf("================================================\n");
}
/*****************************************************************/
/**
Sets all the necessary parameters for a given configuration.
The configuration is provided by the mode argument.
Add your own configuration in the case statement. Let me know
your setting if you want to include it in the distribution.
- <b>Mode 1</b> :
@param *mvme VME structure
@param base Module base address
@param mode Configuration mode number
@return 0: OK. -1: Bad
*/
... 169 more lines ...
|
1081 | 29 Jul 2015 | Javier Praena | Forum | error |
Hello, I am new to the forum. We have been running an experiment for a week with no
problems. Now we added a detector and we found an error. Even when we come back to our
previous configuration the error continues appearing. Please, may someone help
us? You can find the error in the attachment. Thanks! |
Attachment 1: sigsegv-error.jpg
|
|
9 | 10 Mar 2004 | Jan Wouters | | Creation of secondary Midas output file. |
Dear Midas Team,
I have run into a problem with Midas and was wondering if you could explain what I
am doing wrong. I have included a simple demo to illustrate what I am doing and
can send a small input data file if needed.
WHAT I AM TRYING TO DO:
Every midas event for the DANCE experiment consists of many physics events. I am
trying to create a secondary mid file where the event boundaries are now the
physics events rather than the midas events. This secondary mid file will be
analyzed using a second stage midas analyzer.
For the demo, I use the data from EV02 (one of our 15 frontends), which consists of a
variable number of fixed length structures where each structure contains the data for
one crystal from the DANCE detector.
I treat each crystal as a separate physics event and write it out in the TREK bank,
which is a demo calculated output bank, as a separate event.
(The only difference between this demo and our real system is that we would include
all the crystals from the other frontends that have approximately the same time stamp
in the output bank. Thus the output bank would consist of a varying number of
crystals in one event rather than the fixed one crystal per event used in this demo.)
THE CHANGES TO analyzer.c AND adccalib.c
I loop through the EV02 bank examining each crystal structure in turn. I calculate
"calibrated" parameters and put them into an output bank called TREK. The unusual
part of this example is that the TREK bank is no longer part of the main list of input
banks, ana_trigger_bank_list[]. Instead it is now part of a new bank list called
ana_physics_bank_list[]. See the analyzer.c file for this definition.
In adccalib.c I create the space for this new bank as follows.
EVENT_HEADER gPhysicsEventHeaders[ MAX_EVENT_SIZE / sizeof(EVENT_HEADER) ];
WORD* gPhysicsEventData = (WORD *)( gPhysicsEventHeaders + 1 );
In the adc_calib routine I create the bank header as follows. Note that the serial
numbers will restart at 0 at the beginning of each midas event. Should I let the serial
number increment monotonically until the end of the run?:
gPhysicsEventHeaders->serial_number = (DWORD) - 1;
gPhysicsEventHeaders->event_id = 2;
gPhysicsEventHeaders->trigger_mask = 0;
gPhysicsEventHeaders->time_stamp = pheader->time_stamp;
In a loop that loops through all the crystals contained in EV02, I extract each crystal,
calibrate it, and store it in a TREK structure. In creating the TREK bank I assume that
each one will be a separate physics event thus I update the event serial number and
use bk_init32 to initialize the memory.
for ( short i = 0; i < nItems; i++ )
{ ++(gPhysicsEventHeaders->serial_number); // Update serial number.
bk_init32( gPhysicsEventData ); // Initialize storage.
bk_create( gPhysicsEventData, "TREK", TID_STRUCT, &trek );
trek->one = (double) pev->areahg * 1.0;
trek->two = (float) pev->timelo * 1.0;
bk_close( gPhysicsEventData, trek+1 );
pev++; // Loop to next crystal's data.
}
The output bank should consist of multiple events for each individual EV02 midas
input event.
As far as I can tell the code compiles and runs fine, but I get no data in the .mid
output file except for the ODB. I have a print statement at the beginning of each
midas event stating how many crystals were found in the EV02 bank. I also print out
the calibrated value for each crystal as it is being placed in its own TREK output
bank. The data appears correct.
I cannot place TREK in the input bank the way it normally is done in the examples
because there is not a one-to-one correspondence between a midas event and a
true physics event. Instead one midas event has many physics events. Thus the
output bank needs to be in a new memory area so that I can create a custom header
and increment the serial number properly for each event. Our follow-on analysis
using a second Midas analyzer only needs to analyze one physics event at a time
rather than one Midas event at a time, which is why we are going to all the trouble to
get this paradigm working.
I include all the code for this very simple example.
RUNNING THE CODE:
To run the example just use the run01220.mid file I will send:
./analyzer -i run01220.mid.gz -o run01220out.mid -c settings.odb_cfg -n 50
The only thing done by the settings.odb_cfg file is to turn on the TREK output bank. I
have verified that the bank is on.
SUMMARY:
I believe that I must not be creating the new TREK output bank correctly so that
midas understands that the event-by-event calculated physics data should be written
out event-by-event. I have pointed out several places in the above discussion where
I might be making a mistake.
I would like to get both this example running and a similar one which creates Root trees,
though the Root trees are of secondary importance. With this example I can finish
writing the second stage analyzer and get the DANCE collaboration moving forward
with their analysis. Currently, we cannot use this paradigm because I cannot create
a secondary mid file in our stage one analysis. I would be very grateful if you could
take a look at this example and tell me what I am doing incorrectly.
Jan |
Attachment 1: dance193.tar
|
172 | 04 Nov 2004 | Jan Wouters | Forum | Frontend code and the ODB |
I would like to know whether all parameters used by the frontend code have to be in the
"Experiment/Run Parameters" section. This section can become big and difficult to maintain, because it is one single
big section of experim.h (EXP_PARAM_DEFINED). I have parameters the various frontends read at the
beginning of each run, which set the hardware settings of various devices. I would like to place these in
a section all their own, organized by device. Is this doable? |
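One way to do this, sketched below under the assumption that the standard ODB calls (cm_get_experiment_database(), db_get_value()) are used directly, is to give each device its own Settings subtree and read it from the frontend at the beginning of each run instead of going through the single EXP_PARAM structure from experim.h. The paths and key names ("/Equipment/HV/Settings/...") are made-up examples, not a prescribed layout.

/* Sketch only: per-device settings read straight from the ODB; the paths
   and key names are hypothetical examples. */
#include <stdio.h>
#include "midas.h"

INT read_hv_settings(void)
{
   HNDLE hDB;
   INT   size;
   float demand_voltage = 0.f;
   INT   ramp_speed = 10;

   cm_get_experiment_database(&hDB, NULL);

   size = sizeof(demand_voltage);
   db_get_value(hDB, 0, "/Equipment/HV/Settings/Demand Voltage",
                &demand_voltage, &size, TID_FLOAT, TRUE);  /* TRUE: create the key with the default if missing */

   size = sizeof(ramp_speed);
   db_get_value(hDB, 0, "/Equipment/HV/Settings/Ramp Speed",
                &ramp_speed, &size, TID_INT, TRUE);

   printf("HV settings: demand %.1f V, ramp %d V/s\n", demand_voltage, ramp_speed);
   return SUCCESS;
}

Called from begin_of_run() of the frontend that owns the device, this keeps each device's keys in its own subtree rather than growing the one big Run Parameters record.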
181 | 14 Dec 2004 | Jan Wouters | Forum | Frontend index |
What is the API call to determine the index of the frontend when specifying the
-i parameter during execution of the frontend? |
187 | 16 Dec 2004 | Jan Wouters | Forum | cm_msg |
Could someone please explain to me how cm_msg, cm_msg1, etc. all work. The
documentation is very terse.
I want to set up a fairly significant set of debugging and error messages for a
new frontend. I need to get these messages to a logging file. I also would
like to get the error messages to the user through whatever interface Midas
normally uses for error reporting.
Jan |
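For orientation, the call pattern used throughout the MIDAS sources (and in the code attached to later entries of this digest) is sketched below; the routine name and message texts are placeholders. Messages issued with cm_msg() go through the central message system, so they normally end up in the experiment's midas.log and are shown to the user by clients such as odbedit and the web status page.

/* Minimal sketch of cm_msg() usage in a frontend; routine name and
   message texts are placeholders. */
#include "midas.h"

INT my_hardware_init(void)
{
   int status = SUCCESS;   /* stand-in for the result of a real hardware call */

   /* informational message: logged and broadcast to the other clients */
   cm_msg(MINFO, "my_hardware_init", "initializing hardware, %d channels", 8);

   if (status != SUCCESS) {
      /* error message: same mechanism, tagged MERROR so it stands out in the log */
      cm_msg(MERROR, "my_hardware_init", "hardware init failed, status %d", status);
      return FE_ERR_HW;
   }
   return SUCCESS;
}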
2303 | 19 Nov 2021 | Jacob Thorne | Forum | Sequencer error with ODB Inc |
Hi,
I am having problems with the midas sequencer, here is my code:
1 COMMENT "Example to move a Standa stage"
2 RUNDESCRIPTION "Example movement sequence - each run is one position of a single stage
3
4 PARAM numRuns
5 PARAM sequenceNumber
6 PARAM RunNum
7
8 PARAM positionT2
9 PARAM deltapositionT2
10
11 ODBSet "/Runinfo/Run number", $RunNum
12 ODBSet "/Runinfo/Sequence number", $sequenceNumber
13
14 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Type of Measurement", 2
15 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Number of Time Bins", 10
16 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Number of Sweeps", 1
17 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Dwell Time", 100000
18
19 ODBSet "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Device Driver/Set Position", $positionT2
20
21 LOOP $numRuns
22 WAIT ODBvalue, "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Ready", ==, 1
23 TRANSITION START
24 WAIT ODBvalue, "/Equipment/Neutron Detector/Statistics/Events sent", >=, 1
25 WAIT ODBvalue, "/Runinfo/State", ==, 1
26 WAIT ODBvalue, "/Runinfo/Transition in progress", ==, 0
27 TRANSITION STOP
28 ODBInc "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Device Driver/Set Position", $deltapositionT2
29
30 ENDLOOP
31
32 ODBSet "/Runinfo/Sequence number", 0
The issue comes with line 28: the ODBInc does not work; regardless of what number I put in, I get the following error:
[Sequencer,ERROR] [odb.cxx:7046:db_set_data_index1,ERROR] "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Device Driver/Set Position" invalid element data size 32, expected 4
I don't see why this should happen, the format is correct and the number that I input is an int.
Sorry if this is a basic question.
Jacob |
2825 | 05 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations? |
I'm trying to get a sense of the rate limitations of a python frontend. I
understand this will vary from system to system.
I adapted two frontends from the example templates, one in C++ and one in python.
Both simply fill a midas bank with a fixed length array of zeros at a given polled
rate. However, the C++ frontend is about 100 times faster in both data and event
rates. This seems slow, even for an interpreted language like python. Furthermore,
I can effectively increase the maximum rate by concurrently running a second
python frontend (this is not the case for the C++ frontend). In short, there is
some limitation with using python here unrelated to hardware.
In my case, poll_func appears to be called at 100Hz at best. What limits the rate
that poll_func is called in a python frontend? Is there a more appropriate
solution for increasing the python frontend data/event rate than simply launching
more frontends?
I've attached my C++ and python frontend files for reference.
Thanks,
Jack |
Attachment 1: frontend.py
|
import midas
import midas.frontend
import midas.event
import numpy as np
import random
import time

class DataSimulatorEquipment(midas.frontend.EquipmentBase):
    def __init__(self, client, frontend):
        equip_name = "Python Data Simulator"
        default_common = midas.frontend.InitialEquipmentCommon()
        default_common.equip_type = midas.EQ_POLLED
        default_common.buffer_name = "SYSTEM"
        default_common.trigger_mask = 0
        default_common.event_id = 2
        default_common.period_ms = 100
        default_common.read_when = midas.RO_RUNNING
        default_common.log_history = 1
        midas.frontend.EquipmentBase.__init__(self, client, equip_name, default_common)
        print("Initialization complete")
        self.set_status("Initialized")
        self.frontend = frontend

    def readout_func(self):
        event = midas.event.Event()
        # Create a bank for zero buffer
        event.create_bank("CR00", midas.TID_SHORT, self.frontend.zero_buffer)
        # Simulate the addition of `data` in the periodic event
        '''
        data_block = []
        data_block.extend(self.frontend.data)
        # Append the simulated data to the event
        event.create_bank("CR00", midas.TID_SHORT, data_block)
        '''
        return event

    def poll_func(self):
        current_time = time.time()
        if current_time - self.frontend.last_poll_time >= self.frontend.poll_time:
            self.frontend.last_poll_time = current_time
            self.frontend.poll_count += 1
            self.frontend.poll_timestamps.append(current_time)
            return True  # Indicate that an event is available
        return False  # No event available yet

class DataSimulatorFrontend(midas.frontend.FrontendBase):
    def __init__(self):
        midas.frontend.FrontendBase.__init__(self, "DataSimulator-Python")
        # Data and zero buffer initialization
        self.data = []
        self.zero_buffer = []
        self.generator = random.Random()
        self.total_data_size = 1250000
        self.load_data_from_file("fake_data.txt")
        self.init_zero_buffer()
        # Polling variables
        self.poll_time = 0.001  # Poll time in seconds
        self.last_poll_time = time.time()
        self.poll_count = 0
        self.poll_timestamps = []
        self.add_equipment(DataSimulatorEquipment(self.client, self))

    def load_data_from_file(self, filename):
        try:
            with open(filename, 'r') as file:
                for line in file:
                    values = [int(value) for value in line.strip().split(',')]
                    self.data.extend(values)
            print(f"Loaded data from {filename}: {self.data[:10]}...")  # Display the first few values for verification
        except IOError as e:
            print(f"Error opening file: {e}")

    def init_zero_buffer(self):
        self.zero_buffer = [0] * self.total_data_size
        print(f"Initialized zero buffer with {self.total_data_size} zeros.")

    def begin_of_run(self, run_number):
        self.set_all_equipment_status("Running", "greenLight")
        self.client.msg(f"Frontend has started run number {run_number}")
        return midas.status_codes["SUCCESS"]

    def end_of_run(self, run_number):
        self.set_all_equipment_status("Finished", "greenLight")
        self.client.msg(f"Frontend has ended run number {run_number}")
        # Print poll function statistics at the end of the run
        self.print_poll_stats()
        return midas.status_codes["SUCCESS"]

    def frontend_exit(self):
        print("Frontend is exiting.")

    def print_poll_stats(self):
        if len(self.poll_timestamps) > 1:
            intervals = [self.poll_timestamps[i] - self.poll_timestamps[i-1] for i in range(1, len(self.poll_timestamps))]
            avg_interval = sum(intervals) / len(intervals)
            print(f"Poll function was called {self.poll_count} times.")
            print(f"Average interval between poll calls: {avg_interval:.6f} seconds")
        else:
            print(f"Poll function was called {self.poll_count} times. Not enough data for interval calculation.")

if __name__ == "__main__":
    with DataSimulatorFrontend() as my_fe:
        my_fe.run()
|
Attachment 2: frontend.cxx
|
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include "midas.h"
#include "mfe.h"
#include <stdlib.h> // Include the header for rand()
#include <random> // Include for random number generation
void trigger_update(INT, INT, void*);
/*-- Globals -------------------------------------------------------*/
/* The frontend name (client name) as seen by other MIDAS clients */
const char *frontend_name = "DataSimulator";
/* The frontend file name, don't change it */
const char *frontend_file_name = __FILE__;
/* frontend_loop is called periodically if this variable is TRUE */
BOOL frontend_call_loop = FALSE;
/* a frontend status page is displayed with this frequency in ms */
INT display_period = 1000;
/* maximum event size produced by this frontend */
INT max_event_size = 1024 * 1014;
/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * max_event_size;
/* buffer size to hold events */
INT event_buffer_size = 5 * max_event_size;
// Define a vector to store 16-bit words
std::vector<int16_t> data; // Define a global vector to store 16-bit signed integers
// Global variable to keep track of the last poll time
std::chrono::steady_clock::time_point last_poll_time;
const std::chrono::microseconds polling_interval(300); // Poll every 300 microsecond
// Random number generator for generating data
std::mt19937 generator;
std::uniform_int_distribution<short> distribution(-32768, 32767); // Define the range of random values (short range)
// Global variable to hold the zero buffer
std::vector<short> zero_buffer;
/*-- Function declarations -----------------------------------------*/
INT frontend_init(void);
INT frontend_exit(void);
INT begin_of_run(INT run_number, char *error);
INT end_of_run(INT run_number, char *error);
INT pause_run(INT run_number, char *error);
INT resume_run(INT run_number, char *error);
INT frontend_loop(void);
INT read_trigger_event(char *pevent, INT off);
INT read_periodic_event(char *pevent, INT off);
INT poll_event(INT source, INT count, BOOL test);
INT interrupt_configure(INT cmd, INT source, POINTER_T adr);
/*-- Equipment list ------------------------------------------------*/
BOOL equipment_common_overwrite = TRUE;
EQUIPMENT equipment[] = {
{"Data Simulator", /* equipment name */
{2, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_POLLED, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING | RO_TRANSITIONS | /* read when running and on transitions */
RO_ODB, /* and update ODB */
10, /* read every sec */
0, /* stop run after this event limit */
0, /* number of sub events */
TRUE, /* log history */
"", "", "",},
read_trigger_event /* readout routine */
},
{""}
};
/*-- Trigger Update ------------------------------------------------*/
void trigger_update(INT hDB, INT hkey,void*)
{
}
/*-- Frontend Init -------------------------------------------------*/
int frontend_init() {
// Open the file for reading
std::ifstream inputFile("fake_data.txt");
if (!inputFile) {
std::cerr << "Error opening the file." << std::endl;
return 1;
}
std::cout << "Reading and converting data:" << std::endl;
std::string line;
while (std::getline(inputFile, line)) {
std::istringstream iss(line);
std::string token;
while (std::getline(iss, token, ',')) {
int16_t value;
std::istringstream(token) >> value;
data.push_back(value);
}
}
// Print the converted data
for (int i = 0; i < data.size(); i++) {
std::cout << " " << data[i];
}
// Close the file
inputFile.close();
if (data.empty()) {
std::cerr << "No data was converted." << std::endl;
} else {
std::cout << std::endl << "Conversion completed." << std::endl;
}
// Initialize random number generator
std::random_device rd; // Obtain a random number from hardware
generator = std::mt19937(rd()); // Seed the generator
// Define the total number of zero data points
const int total_data_size = 50000; // Adjust size as needed
// Create and initialize the buffer of zeros
zero_buffer.resize(total_data_size, 0);
return SUCCESS;
}
/*-- Frontend Exit -------------------------------------------------*/
INT frontend_exit()
{
return SUCCESS;
}
/*-- Begin of Run --------------------------------------------------*/
INT begin_of_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- End of Run ----------------------------------------------------*/
INT end_of_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- Pause Run -----------------------------------------------------*/
INT pause_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- Resume Run ----------------------------------------------------*/
INT resume_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- Frontend Loop -------------------------------------------------*/
INT frontend_loop()
{
/* if frontend_call_loop is true, this routine gets called when
the frontend is idle or once between every event */
return SUCCESS;
}
/*------------------------------------------------------------------*/
/********************************************************************\
Readout routines for different events
\********************************************************************/
/*-- Trigger event routines ----------------------------------------*/
INT poll_event(INT source, INT count, BOOL test) {
// Get the current time
auto now = std::chrono::steady_clock::now();
// Check if enough time has passed since the last poll
if (now - last_poll_time >= polling_interval) {
// Update the last poll time
last_poll_time = now;
// Return TRUE to indicate that an event is available
return TRUE;
}
// If test is TRUE, just return FALSE (no event is flagged during the test loop)
if (test) {
return FALSE;
}
// Otherwise, return FALSE to indicate no event available
return FALSE;
}
/*-- Interrupt configuration ---------------------------------------*/
INT interrupt_configure(INT cmd, INT source, POINTER_T adr)
{
switch (cmd) {
case CMD_INTERRUPT_ENABLE:
break;
case CMD_INTERRUPT_DISABLE:
break;
case CMD_INTERRUPT_ATTACH:
break;
case CMD_INTERRUPT_DETACH:
break;
}
return SUCCESS;
}
/*-- Event readout -------------------------------------------------*/
INT read_trigger_event(char *pevent, INT off)
{
short *pdata;
// Init bank structure
bk_init32(pevent);
// Create a bank named "CR00" and specify the data type as TID_SHORT
bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);
// Use memcpy to copy the buffer of zeros into the MIDAS bank
memcpy(pdata, zero_buffer.data(), zero_buffer.size() * sizeof(short));
// Adjust pdata pointer
pdata += zero_buffer.size(); // Move the pointer past the copied data
// Close the bank
bk_close(pevent, pdata);
return bk_size(pevent);
}
/*-- Periodic event ------------------------------------------------*/
INT read_periodic_event(char *pevent, INT off)
{
short *pdata; // Change the data type to short
// Init bank structure
bk_init32(pevent);
// Create a bank named "CR00" and specify the data type as TID_SHORT
bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);
// Repeat the data block 400 times
for (int repeat = 0; repeat < 400; repeat++) {
for (int i = 0; i < data.size(); i++) {
*pdata++ = data[i];
}
}
// Close the bank
bk_close(pevent, pdata);
return bk_size(pevent);
}
|
Draft | 05 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations? |
Thank you, this was very helpful.
> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"
Thanks, I thought that was just for periodic triggering (or at least that's how I've used it in C++ frontends). Changing this allowed me to get past the 100Hz event rate cap I described.
> If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.
>
>
> However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.
>
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.
>
>
> Cheers,
> Ben
> P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.
I had tested the way you described at first, then later changed
> P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:
>
> python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
> 20 loops, best of 5: 15.3 msec per loop |
2829 | 06 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations? |
Thanks for the responses, they were very helpful.
>First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll
call it as often as possible.
Thanks, this solves the event rate limitation I described. I didn't think to change this because the "period" did not affect the observed rate in C (and now
I know why thanks to Stefan).
A couple more questions:
1.
For me,
python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt,
buf, *arr)"
10 loops, best of 3: 43.7 msec per loop
which suggests my maximum data rate is about 1.25 MB * 1000/43.7 Hz = 23 MB/s (?). But I see data rates up to 60 MB/s with a python frontend. Am I
misinterpreting the meaning of this result?
2. I can effectively bypass the rate limitations in python by running two concurrent frontends. For example, with one python frontend at best I can generate
60 MB/s of data (setting "period" to 0 now); but with two frontends I can double this to 120 MB/s. This implies one python frontend is not bottlenecked by
hardware limitations in my case.
Am I doing something wrong to artificially bottleneck my frontends? Perhaps there's a multi-threading solution I can implement to avoid needing multiple
frontends?
Thanks,
Jack |
2885 | 05 Nov 2024 | Jack Carlton | Forum | How to properly write a client that listens for events on a given buffer? |
If there's some template for writing a client to access event data, that would be
very useful (and you can probably just ignore the context I gave below in that
case).
Some context:
Quite a while ago, I wrote the attached "data pipeline" client whose job was to
listen for events, copy their data, and pipe them to a python script. I believe I
just stole bits and pieces from mdump.cxx to accomplish this. Later I wrote the
attached wrapper class "MidasConnector.cpp" and a main.cpp to generalize
data_pipeline.cxx a bit. There were a lot of iterations to the code where I had the
below problems; so don't take the logic in the attached code as the exact code that
caused the issues below.
However, I'm unable to resolve a couple of issues:
1. If a timeout is set, everything will work until that timeout is reached. Then
regardless of what kind of logic I tried to implement (retry receiving event,
disconnect and reconnect client, etc.) the client would refuse to receive more data.
2. When I ctrl-C main, it hangs; this is expected because it's stuck in a while
loop. But because I can't set a timeout I have to ctrl-C twice; this would
occasionally corrupt the ODB which was not ideal. I was able to get around this with
some impractical solution involving ncurses I believe.
Thanks,
Jack |
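For what it's worth, the core of such a client (essentially what mdump and the attachments below do) reduces to the loop sketched here, using the same bm_* calls that appear in the attached code. Two details are my own reading of the API rather than anything confirmed in this thread: the size argument of bm_receive_event() is an in/out parameter and should be reset to the full buffer size before every call, and with a finite timeout the call is assumed to return BM_ASYNC_RETURN when no event arrived, which the loop must treat as "try again" rather than as a fatal error.

// Sketch of a minimal event-listening client, assuming the same MIDAS C API
// used in the attachments (cm_connect_experiment, bm_open_buffer,
// bm_request_event, bm_receive_event). Client and buffer names are examples;
// the timeout semantics of bm_receive_event are an assumption.
#include "midas.h"
#include <cstdio>
#include <cstdlib>

int main()
{
   char host[HOST_NAME_LENGTH], expt[NAME_LENGTH];
   cm_get_environment(host, sizeof(host), expt, sizeof(expt));

   if (cm_connect_experiment(host, expt, "event_listener", NULL) != CM_SUCCESS)
      return 1;

   HNDLE hBuf;
   INT status = bm_open_buffer("SYSTEM", DEFAULT_BUFFER_SIZE, &hBuf);
   if (status != BM_SUCCESS && status != BM_CREATED)
      return 1;

   INT request_id;
   bm_request_event(hBuf, EVENTID_ALL, TRIGGER_ALL, GET_ALL, &request_id, NULL);

   const INT max_size = 10 * 1024 * 1024;
   char *buf = (char *) malloc(max_size);

   bool done = false;
   while (!done) {
      INT size = max_size;   // in/out argument: reset to the full buffer size every iteration
      status = bm_receive_event(hBuf, buf, &size, 1000);   // 1000 ms timeout (assumed semantics)
      if (status == BM_SUCCESS) {
         EVENT_HEADER *pheader = (EVENT_HEADER *) buf;
         printf("event id %d, serial %d, %d bytes\n",
                (int) pheader->event_id, (int) pheader->serial_number, (int) pheader->data_size);
      } else if (status != BM_ASYNC_RETURN) {
         done = true;        // real error: leave the loop
      }
      // let MIDAS process RPC traffic; returns RPC_SHUTDOWN when the client is asked to exit
      if (cm_yield(0) == RPC_SHUTDOWN)
         done = true;
   }

   free(buf);
   cm_disconnect_experiment();
   return 0;
}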
Attachment 1: data_pipeline_(2).cxx
|
#include "midas.h"
#include "msystem.h"
#include "mrpc.h"
#include "mdsupport.h"
#include <iostream>
#include <unistd.h>
#include <stdio.h> // Added for popen
#include <stdlib.h> // Added for malloc and free
INT hBufEvent;
void process_event(EVENT_HEADER *pheader) {
printf("Received event #%d\n", pheader->serial_number);
printf("Event ID: %d\n", pheader->event_id);
printf("Data Size: %d bytes\n", pheader->data_size);
printf("Timestamp: %d\n", pheader->time_stamp);
printf("Trigger mask: %d\n", pheader->trigger_mask);
// Print a marker to indicate the start of serialized data
printf("EVENT_DATA_START\n");
// Serialize and print the event data
int* eventData = (int*)((char*)pheader + sizeof(EVENT_HEADER));
int numIntegers = (pheader->data_size - sizeof(EVENT_HEADER)) / sizeof(int);
for (int i = 0; i < 8; ++i) {
printf("%d ", eventData[i]);
}
printf("\n");
// Process the event here
}
int main() {
HNDLE hDB, hKey;
char host_name[HOST_NAME_LENGTH], expt_name[NAME_LENGTH], str[80];
char buf_name[32] = EVENT_BUFFER_NAME, rep_file[128];
unsigned int status, start_time, stop_time;
INT ch, request_id, size, get_flag, action, single, i;
// Define the maximum event size you expect to receive
INT max_event_size = 4000;
// Allocate memory for storing event data dynamically
void* event_data = malloc(max_event_size);
printf("1\n");
/* Get if existing the pre-defined experiment */
cm_get_environment(host_name, sizeof(host_name), expt_name, sizeof(expt_name));
// Print host_name
printf("host_name = %s\n", host_name);
// Print expt_name
printf("expt_name = %s\n", expt_name);
printf("2\n");
/* connect to the experiment */
status = cm_connect_experiment(host_name, expt_name, "data_pipeline", 0);
if (status != CM_SUCCESS) {
return 1;
}
printf("3\n");
status = bm_open_buffer(buf_name, DEFAULT_BUFFER_SIZE, &hBufEvent);
if (status != BM_SUCCESS && status != BM_CREATED) {
cm_msg(MERROR, "data_pipeline", "Cannot open buffer \"%s\", bm_open_buffer() status %d", buf_name, status);
return 1;
}
printf("4\n");
/* set the buffer cache size if requested */
bm_set_cache_size(hBufEvent, 100000, 0);
printf("5\n");
/* place a request for a specific event id */
status = bm_request_event(hBufEvent, EVENTID_ALL, TRIGGER_ALL, GET_ALL, &request_id, NULL); // Use NULL as the callback routine
printf("6\n");
printf("status = %d\n",status);
// Open a pipe to a Python script for data transfer
FILE* pipe = popen("python3 data_pipeline.py", "w");
if (pipe == NULL) {
perror("popen");
return 1;
}
// Enter the event processing loop
while (1) {
// Use the address of max_event_size in bm_receive_event
status = bm_receive_event(hBufEvent, event_data, &max_event_size, BM_WAIT); // Wait for new data indefinitely
if (status == BM_SUCCESS) {
//process_event((EVENT_HEADER*)((char*)event_data + sizeof(EVENT_HEADER)));
// Send the event data to the Python script via the pipe
fprintf(pipe, "EVENT_DATA_START\n");
int* eventData = (int*)((char*)event_data + sizeof(EVENT_HEADER));
int numIntegers = (max_event_size - sizeof(EVENT_HEADER)) / sizeof(int);
for (int i = 4; i < 12; ++i) {
fprintf(pipe, "%d ", eventData[i]);
}
fprintf(pipe, "\n");
fflush(pipe); // Flush the buffer to ensure data is sent immediately
} else {
printf("Error receiving event: %d\n", status);
break; // Exit the loop if an error occurs
}
}
// Close the pipe
pclose(pipe);
// Free the dynamically allocated memory
free(event_data);
cm_disconnect_experiment();
printf("7\n");
return 1;
}
|
Attachment 2: MidasConnector.cpp
|
#include "MidasConnector.h"
MidasConnector::MidasConnector(const char* clientName) {
// Initialize client name
strncpy(client_name_, clientName, NAME_LENGTH);
// Get host name and experiment name from environment
cm_get_environment(host_name_, sizeof(host_name_), experiment_name_, sizeof(experiment_name_));
// Initialize other private variables if needed
event_id = EVENTID_ALL; // Initialize with default value
trigger_mask = TRIGGER_ALL; // Initialize with default value
sampling_type = GET_ALL; // Initialize with default value (renamed from get_flags)
buffer_size = DEFAULT_BUFFER_SIZE; // Initialize with default value
timeout_millis = BM_WAIT;
strncpy(buffer_name, EVENT_BUFFER_NAME, sizeof(buffer_name)); // Initialize with default value
}
// Getters for the private variables
short MidasConnector::getEventId() const {
return event_id;
}
short MidasConnector::getTriggerMask() const {
return trigger_mask;
}
int MidasConnector::getSamplingType() const {
return sampling_type;
}
int MidasConnector::getBufferSize() const {
return buffer_size;
}
const char* MidasConnector::getBufferName() const {
return buffer_name;
}
int MidasConnector::getTimeout() const {
return timeout_millis;
}
HNDLE MidasConnector::getEventBufferHandle() const {
return hBufEvent;
}
// Setters for the private variables
void MidasConnector::setEventId(short eventId) {
event_id = eventId;
}
void MidasConnector::setTriggerMask(short triggerMask) {
trigger_mask = triggerMask;
}
void MidasConnector::setSamplingType(int samplingType) {
sampling_type = samplingType;
}
void MidasConnector::setBufferSize(int bufferSize) {
buffer_size = bufferSize;
}
void MidasConnector::setBufferName(const char* bufferName) {
strncpy(buffer_name, bufferName, sizeof(buffer_name));
}
void MidasConnector::setTimeout(int timeoutMillis) {
timeout_millis = timeoutMillis;
}
void MidasConnector::setEventBufferHandle(HNDLE eventBufferHandle) {
hBufEvent = eventBufferHandle;
}
bool MidasConnector::ConnectToExperiment() {
// Connect to the experiment
int status = cm_connect_experiment(host_name_, experiment_name_, client_name_, NULL);
if (status != CM_SUCCESS) {
// Handle connection error
return false;
}
return true;
}
void MidasConnector::DisconnectFromExperiment() {
// Disconnect from the experiment
cm_disconnect_experiment();
}
bool MidasConnector::OpenEventBuffer() {
int status = bm_open_buffer(buffer_name, buffer_size, &hBufEvent);
if (status != BM_SUCCESS && status != BM_CREATED) {
cm_msg(MERROR, client_name_, "Cannot open buffer \"%s\", bm_open_buffer() status %d", buffer_name, status);
return false;
}
return true;
}
bool MidasConnector::SetCacheSize(int cacheSize) {
bm_set_cache_size(hBufEvent, cacheSize, 0);
return true;
}
bool MidasConnector::RequestEvent() {
int request_id;
int status = bm_request_event(hBufEvent, event_id, trigger_mask, sampling_type, &request_id, NULL);
return status == BM_SUCCESS;
}
bool MidasConnector::ReceiveEvent(void* eventBuffer, int& maxEventSize) {
int status = bm_receive_event(hBufEvent, eventBuffer, &maxEventSize, timeout_millis);
return status == BM_SUCCESS;
}
|
Attachment 3: main.cpp
|
#include "event_processor/EventProcessor.h"
#include "data_transmitter/DataTransmitter.h"
#include "midas_connector/MidasConnector.h"
#include "json.hpp"
#include <fstream>
INT hBufEvent1;
INT hBufEvent2;
// Function to initialize MIDAS and open an event buffer
bool initializeMidas(MidasConnector& midasConnector, const nlohmann::json& config) {
// Set the MidasConnector properties based on the config
midasConnector.setEventId(config["eventId"].get<short>());
midasConnector.setTriggerMask(config["triggerMask"].get<short>());
midasConnector.setSamplingType(config["samplingType"].get<int>());
midasConnector.setBufferSize(config["bufferSize"].get<int>());
midasConnector.setBufferName(config["bufferName"].get<std::string>().c_str());
midasConnector.setBufferSize(config["bufferSize"].get<int>());
// Call the ConnectToExperiment method
if (!midasConnector.ConnectToExperiment()) {
return false;
}
// Call the OpenEventBuffer method
if (!midasConnector.OpenEventBuffer()) {
return false;
}
// Set the buffer cache size if requested
midasConnector.SetCacheSize(config["cacheSize"].get<int>());
// Place a request for a specific event id
if (!midasConnector.RequestEvent()) {
return false;
}
return true;
}
int main() {
// Read configuration from a JSON file
nlohmann::json config;
std::ifstream configFile("config.json");
configFile >> config;
configFile.close();
// Initialize MidasConnector and connect to the MIDAS experiment
MidasConnector midasConnector(config["clientName"].get<std::string>().c_str());
if (!initializeMidas(midasConnector, config)) {
printf("Error: Failed to initialize MIDAS.\n");
return 1;
}
// Read the maximum event size from the JSON configuration
INT max_event_size = config["maxEventSize"].get<int>();
// Allocate memory for storing event data dynamically
void* event_data = malloc(max_event_size);
// Initialize EventProcessor with detector mapping file and verbosity flag
EventProcessor eventProcessor(config["detectorMappingFile"].get<std::string>(), config["verbose"].get<bool>());
// Initialize DataTransmitter with the ZeroMQ address
DataTransmitter dataPublisher(config["zmqAddress"].get<std::string>());
// Connect to the ZeroMQ server
if (!dataPublisher.bind()) {
// Handle connection error
printf("Error: Failed to bind to port %s.\n", config["zmqAddress"].get<std::string>().c_str());
return 1;
} else {
printf("Connected to the ZeroMQ server.\n");
}
// Event processing loop
while (true) {
midasConnector.ReceiveEvent(event_data, max_event_size);
// Process data once we have it
eventProcessor.processEvent(event_data, max_event_size);
// Serialize the event data with EventProcessor and store it in serializedData
std::string serializedData = eventProcessor.getSerializedData();
// Send the serialized data to the ZeroMQ server with DataTransmitter
if (!dataPublisher.publish(serializedData)) {
// Handle send error
printf("Error: Failed to send serialized data.\n");
}
}
// Cleanup and finalize your application
midasConnector.DisconnectFromExperiment(); // Disconnect from the MIDAS experiment
return 0;
}
|
1859 | 23 Mar 2020 | Ivo Schulthess | Forum | Save data to FTP |
Dear all
I am trying to save data to an FTP server but don't get any data on the server. Midas does not complain or report any error, but also nothing gets saved. Does somebody have experience with this? I use the following settings for the ODB mlogger channel settings: Type: FTP, Filename: server.com, 21, user, pw, ., run%06d.mid, Format: MIDAS, Output: FILE. What would the Output: FTP setting be for? I tried this but it does not work at all.
Thanks in advance,
Ivo |
1861 | 24 Mar 2020 | Ivo Schulthess | Forum | Save data to FTP |
> > I try to save data to an FTP server but don't get any data on the server. Midas does not complain or message any error but also nothing gets saved. Does somebody have experience with this? I use the following settings for the ODB mlogger channel settings: Type: FTP, Filename: server.com, 21, user, pw, ., run%06d.mid, Format: MIDAS, Output: FILE. What would be the Output: FTP setting for? I tried this but it does not work at all.
>
> Hi, Ivo, good to hear from a midas user in these difficult times.
>
> We do not use FTP at TRIUMF, but Stefan asked us to keep FTP alive and working, so we should be able
> to get you going. I will try to find the FTP instructions for you, I am pretty sure I have them somewhere.
>
> In the mean time, I am very curious why you are using a FTP to record data, is it some kind
> of data appliance where simplest input for data is FTP? Using NFS does not work or is too hard?
>
> Also for example at CERN, we write data to Castor and EOS, for this mlogger writes data to local disk,
> then the lazylogger runs a script to move the data to Castor and EOS. The example lazylogger
> scripts for this are in the MIDAS "progs" directory. But maybe you do not have a local disk and this would
> not work for you.
>
> In other news, I hope to work on mlogger and lazylogger support for cloud storage (swift and s3 apis?),
> would that be useful as replacement for FTP?
>
> K.O.
>
Good Morning Konstantin
Thanks for the fast reply. Yes, it is, Midas is one of the things we can at least improve from home.
Our experiment is planned to measure (soon) at ILL. Now since we don't use the equipment/detector from the
beamline but our own, all the data from Midas is saved on the local drive. This is fine in the first instance
but then we also need proper backup. Since our experiment is quite small, the easiest solution I came up with
is to copy all of our data to the ILL storage which has enough space and is properly backed up. The ILL data
storage allows only SFTP connections, nothing else. Since Midas has the FTP feature, having a separate FTP
logger channel seemed the easiest way to go.
Thanks for your input, I will look into how to mount SFTP and then this would also be a solution.
Since ILL only provides access via SFTP and everything else is not existent or blocked (not even ssh is possible),
this is the only thing we can work with by now.
Best regards,
Ivo |
1863 | 24 Mar 2020 | Ivo Schulthess | Forum | Save data to FTP |
> Logging directly from the midas logger to FTP is a bit cumbersome. In case of delays during login etc. this can throttle the whole DAQ chain.
> What we use in our lab is to write to local disk, then use the lazylogger (https://midas.triumf.ca/MidasWiki/index.php/Lazylogger) to copy the
> local files to a remote FTP server. This way we de-couple data taking from backup, making the system much more swift.
>
> Best,
> Stefan
Yes, I see this now too. I will therefore try to set up the lazylogger properly. |
1874 | 07 Apr 2020 | Ivo Schulthess | Suggestion | Sequencer loop break |
I am using the Midas sequencer to run subsequent measurements in a loop, without
knowing how many iterations in advance. Therefore, I am using the "infinity"
option. Since I have other commands after the loop, it would be nice to have the
possibility to break the loop, but let the sequencer then finish the rest of the
commands.
Cheers,
Ivo |
1876 | 23 Apr 2020 | Ivo Schulthess | Suggestion | Sequencer loop break |
> You can do that with the "GOTO" statement, jumping to the first line after the loop.
>
> Here is a working example:
>
>
> LOOP runs, 5
> WAIT Seconds 3
> IF $runs > 2
> GOTO 7
> ENDIF
> ENDLOOP
> MESSAGE "Finished", 1
>
> Best,
> Stefan
Hoi Stefan
Thanks for your answer. As I understand it, this has to be in the sequence script before
running. So, in the end, it is no different from just saying "LOOP runs, 2", and
therefore the number of runs has to be known in advance as well. Or is there an option to
change the script at runtime? What I would like is to start a sequence with "LOOP runs,
infinite" and when I come back to the experiment after falling asleep being able to break
the loop after the next iteration, but still execute everything after ENDLOOP, i.e. the
MESSAGE statement in your example. Because if I do a "Stop after current run", this seems
not to happen.
Best, Ivo |
1942 | 10 Jun 2020 | Ivo Schulthess | Forum | slow-control equipment crashes when running multi-threaded on a remote machine |
Dear all
To reduce the time needed by Midas between runs, we want to change some of our periodic equipment to multi-threaded slow-control equipment. To do that I wanted to start from
the slowcont example with the multi/hv class driver, the nulldev device driver and the null bus driver. The example runs fine as it is on the local midas machine and also on remote
machines. When adding the DF_MULTITHREAD flag to the device driver list, it does not run anymore on remote machines but aborts with the following assertion:
scfe: /home/neutron/packages/midas/src/midas.cxx:1569: INT cm_get_path(char*, int): Assertion `_path_name.length() > 0' failed.
Running the frontend with GDB and setting a breakpoint at the exit leads to the following:
(gdb) where
#0 0x00007ffff68d599f in raise () from /lib64/libc.so.6
#1 0x00007ffff68bfcf5 in abort () from /lib64/libc.so.6
#2 0x00007ffff68bfbc9 in __assert_fail_base.cold.0 () from /lib64/libc.so.6
#3 0x00007ffff68cde56 in __assert_fail () from /lib64/libc.so.6
#4 0x000000000041efbf in cm_get_path (path=0x7fffffffd060 "P\373g", path_size=256)
at /home/neutron/packages/midas/src/midas.cxx:1563
#5 cm_get_path (path=path@entry=0x7fffffffd060 "P\373g", path_size=path_size@entry=256)
at /home/neutron/packages/midas/src/midas.cxx:1563
#6 0x0000000000453dd8 in ss_semaphore_create (name=name@entry=0x7fffffffd2c0 "DD_Input",
semaphore_handle=semaphore_handle@entry=0x67f700 <multi_driver+96>)
at /home/neutron/packages/midas/src/system.cxx:2340
#7 0x0000000000451d25 in device_driver (device_drv=0x67f6a0 <multi_driver>, cmd=<optimized out>)
at /home/neutron/packages/midas/src/device_driver.cxx:155
#8 0x00000000004175f8 in multi_init(eqpmnt*) ()
#9 0x00000000004185c8 in cd_multi(int, eqpmnt*) ()
#10 0x000000000041c20c in initialize_equipment () at /home/neutron/packages/midas/src/mfe.cxx:827
#11 0x000000000040da60 in main (argc=1, argv=0x7fffffffda48)
at /home/neutron/packages/midas/src/mfe.cxx:2757
I also tried to use the generic class driver, which results in the same error. I am not sure if this is a problem of the multi-threaded frontend running on a remote machine or
something in our system that is not properly set up. Anyway, I am running out of ideas on how to solve this and would appreciate any input.
Thanks in advance,
Ivo |
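For readers who want to reproduce this, the change being described is essentially OR-ing DF_MULTITHREAD into the flags field of the device-driver list, roughly as in the sketch below. It follows the standard slowcont/nulldev example; the device names, channel counts and include paths are taken from that example and may need adapting to the local setup.

/* Sketch based on the standard slowcont example (examples/slowcont/scfe.cxx);
   names, channel counts and include paths are illustrative. */
#include "mfe.h"
#include "class/multi.h"
#include "device/nulldev.h"
#include "bus/null.h"

/* multithreaded version: DF_MULTITHREAD added to the flags of each device */
DEVICE_DRIVER multi_driver[] = {
   {"Input",  nulldev, 4, null, DF_INPUT  | DF_MULTITHREAD},
   {"Output", nulldev, 2, null, DF_OUTPUT | DF_MULTITHREAD},
   {""}
};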
1946 | 12 Jun 2020 | Ivo Schulthess | Forum | slow-control equipment crashes when running multi-threaded on a remote machine |
Thank you two once again for the very fast answers. I tested the example on the local machine and it works perfectly fine. In the meantime I also created two new drivers for our devices
and everything works with them; the improvement in time is significant and I will create drivers for all our devices where possible. Once they are in a working state I can also provide
them to be added to the Midas drivers. Of course, if it were possible to also run the front-end on our remote machines, this would be even better. I am not experienced in any multi-threaded
programming but if I can provide any help or input, please let me know.
Have a great weekend,
Ivo |
1966 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid |
Dear all
We just started our beam time at ILL and found yesterday that for certain
settings of our detector the data is not saved into the .mid files. Running "mdump
-l 10" online we see the data coming in as they should. Nevertheless, if we run
"mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are
missing. Any ideas where the data could go lost?
Thanks in advance,
Ivo |
|