ID | Date | Author | Topic | Subject |
181 | 14 Dec 2004 | Jan Wouters | Forum | Frontend index | What is the API call to determine the index of the frontend when the -i parameter is specified during execution of the frontend? |
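For reference on the question above: the mfe-based frontend framework makes the "-i" value available inside the frontend itself. A minimal sketch, assuming the get_frontend_index() helper declared in mfe.h of your MIDAS version (it should return the index given with -i, or -1 if none was given):

#include "midas.h"
#include "mfe.h"

INT frontend_init()
{
   INT index = get_frontend_index();   // value passed on the command line with -i
   if (index < 0)
      cm_msg(MINFO, "frontend_init", "frontend started without an index");
   else
      cm_msg(MINFO, "frontend_init", "frontend index is %d", index);
   return SUCCESS;
}

A typical use is to build per-index ODB paths or device addresses from this value, so one executable can be started several times as "frontend -i 0", "frontend -i 1", and so on.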
187 | 16 Dec 2004 | Jan Wouters | Forum | cm_msg | Could someone please explain how cm_msg, cm_msg1, etc. work? The documentation is very terse.
I want to set up a fairly significant set of debugging and error messages for a new frontend. I need to get these messages to a logging file. I would also like to get the error messages to the user through whatever interface Midas normally uses for error reporting.
Jan |
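For reference on the question above: cm_msg() takes a message type, the name of the reporting routine, and a printf-style format string; the resulting message goes to the central MIDAS message log and is shown to connected clients (e.g. on the mhttpd messages page). A minimal sketch of typical calls, using the common MERROR/MINFO/MDEBUG types defined in midas.h:

#include "midas.h"

INT frontend_init()
{
   // informational message, written to the message log
   cm_msg(MINFO, "frontend_init", "frontend starting, %d channels configured", 16);

   // error message, flagged as an error in the log and on the messages page
   cm_msg(MERROR, "frontend_init", "cannot open device, status %d", -1);

   // debug-level message
   cm_msg(MDEBUG, "frontend_init", "raw register value 0x%04x", 0xbeef);

   return SUCCESS;
}

cm_msg1() additionally takes a facility name so that messages can be routed to a separate log; check midas.h for the exact argument order of the variants.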
2303 | 19 Nov 2021 | Jacob Thorne | Forum | Sequencer error with ODB Inc | Hi,
I am having problems with the Midas sequencer; here is my code:
1 COMMENT "Example to move a Standa stage"
2 RUNDESCRIPTION "Example movement sequence - each run is one position of a single stage"
3
4 PARAM numRuns
5 PARAM sequenceNumber
6 PARAM RunNum
7
8 PARAM positionT2
9 PARAM deltapositionT2
10
11 ODBSet "/Runinfo/Run number", $RunNum
12 ODBSet "/Runinfo/Sequence number", $sequenceNumber
13
14 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Type of Measurement", 2
15 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Number of Time Bins", 10
16 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Number of Sweeps", 1
17 ODBSet "/Equipment/Neutron Detector/Settings/Detector/Dwell Time", 100000
18
19 ODBSet "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Device Driver/Set Position", $positionT2
20
21 LOOP $numRuns
22 WAIT ODBvalue, "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Ready", ==, 1
23 TRANSITION START
24 WAIT ODBvalue, "/Equipment/Neutron Detector/Statistics/Events sent", >=, 1
25 WAIT ODBvalue, "/Runinfo/State", ==, 1
26 WAIT ODBvalue, "/Runinfo/Transition in progress", ==, 0
27 TRANSITION STOP
28 ODBInc "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Device Driver/Set Position", $deltapositionT2
29
30 ENDLOOP
31
32 ODBSet "/Runinfo/Sequence number", 0
The issue comes with line 28: the ODBInc does not work. Regardless of what number I put in, I get the following error:
[Sequencer,ERROR] [odb.cxx:7046:db_set_data_index1,ERROR] "/Equipment/MTSC/Settings/Devices/Stage 2 Translation/Device Driver/Set Position" invalid element data size 32, expected 4
I don't see why this should happen; the format is correct and the number that I input is an int.
Sorry if this is a basic question.
Jacob |
2825 | 05 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations? | I'm trying to get a sense of the rate limitations of a Python frontend. I understand this will vary from system to system.
I adapted two frontends from the example templates, one in C++ and one in python.
Both simply fill a midas bank with a fixed length array of zeros at a given polled
rate. However, the C++ frontend is about 100 times faster in both data and event
rates. This seems slow, even for an interpreted language like python. Furthermore,
I can effectively increase the maximum rate by concurrently running a second
python frontend (this is not the case for the C++ frontend). In short, there is
some limitation with using python here unrelated to hardware.
In my case, poll_func appears to be called at 100Hz at best. What limits the rate
that poll_func is called in a python frontend? Is there a more appropriate
solution for increasing the python frontend data/event rate than simply launching
more frontends?
I've attached my C++ and python frontend files for reference.
Thanks,
Jack |
Attachment 1: frontend.py
|
import midas
import midas.frontend
import midas.event
import numpy as np
import random
import time
class DataSimulatorEquipment(midas.frontend.EquipmentBase):
    def __init__(self, client, frontend):
        equip_name = "Python Data Simulator"
        default_common = midas.frontend.InitialEquipmentCommon()
        default_common.equip_type = midas.EQ_POLLED
        default_common.buffer_name = "SYSTEM"
        default_common.trigger_mask = 0
        default_common.event_id = 2
        default_common.period_ms = 100
        default_common.read_when = midas.RO_RUNNING
        default_common.log_history = 1
        midas.frontend.EquipmentBase.__init__(self, client, equip_name, default_common)
        print("Initialization complete")
        self.set_status("Initialized")
        self.frontend = frontend

    def readout_func(self):
        event = midas.event.Event()
        # Create a bank for zero buffer
        event.create_bank("CR00", midas.TID_SHORT, self.frontend.zero_buffer)
        # Simulate the addition of `data` in the periodic event
        '''
        data_block = []
        data_block.extend(self.frontend.data)
        # Append the simulated data to the event
        event.create_bank("CR00", midas.TID_SHORT, data_block)
        '''
        return event

    def poll_func(self):
        current_time = time.time()
        if current_time - self.frontend.last_poll_time >= self.frontend.poll_time:
            self.frontend.last_poll_time = current_time
            self.frontend.poll_count += 1
            self.frontend.poll_timestamps.append(current_time)
            return True  # Indicate that an event is available
        return False  # No event available yet

class DataSimulatorFrontend(midas.frontend.FrontendBase):
    def __init__(self):
        midas.frontend.FrontendBase.__init__(self, "DataSimulator-Python")
        # Data and zero buffer initialization
        self.data = []
        self.zero_buffer = []
        self.generator = random.Random()
        self.total_data_size = 1250000
        self.load_data_from_file("fake_data.txt")
        self.init_zero_buffer()
        # Polling variables
        self.poll_time = 0.001  # Poll time in seconds
        self.last_poll_time = time.time()
        self.poll_count = 0
        self.poll_timestamps = []
        self.add_equipment(DataSimulatorEquipment(self.client, self))

    def load_data_from_file(self, filename):
        try:
            with open(filename, 'r') as file:
                for line in file:
                    values = [int(value) for value in line.strip().split(',')]
                    self.data.extend(values)
            print(f"Loaded data from {filename}: {self.data[:10]}...")  # Display the first few values for verification
        except IOError as e:
            print(f"Error opening file: {e}")

    def init_zero_buffer(self):
        self.zero_buffer = [0] * self.total_data_size
        print(f"Initialized zero buffer with {self.total_data_size} zeros.")

    def begin_of_run(self, run_number):
        self.set_all_equipment_status("Running", "greenLight")
        self.client.msg(f"Frontend has started run number {run_number}")
        return midas.status_codes["SUCCESS"]

    def end_of_run(self, run_number):
        self.set_all_equipment_status("Finished", "greenLight")
        self.client.msg(f"Frontend has ended run number {run_number}")
        # Print poll function statistics at the end of the run
        self.print_poll_stats()
        return midas.status_codes["SUCCESS"]

    def frontend_exit(self):
        print("Frontend is exiting.")

    def print_poll_stats(self):
        if len(self.poll_timestamps) > 1:
            intervals = [self.poll_timestamps[i] - self.poll_timestamps[i-1] for i in range(1, len(self.poll_timestamps))]
            avg_interval = sum(intervals) / len(intervals)
            print(f"Poll function was called {self.poll_count} times.")
            print(f"Average interval between poll calls: {avg_interval:.6f} seconds")
        else:
            print(f"Poll function was called {self.poll_count} times. Not enough data for interval calculation.")

if __name__ == "__main__":
    with DataSimulatorFrontend() as my_fe:
        my_fe.run()
|
Attachment 2: frontend.cxx
|
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include "midas.h"
#include "mfe.h"
#include <stdlib.h> // Include the header for rand()
#include <random> // Include for random number generation
#include <chrono> // Needed for std::chrono used for poll timing below
void trigger_update(INT, INT, void*);
/*-- Globals -------------------------------------------------------*/
/* The frontend name (client name) as seen by other MIDAS clients */
const char *frontend_name = "DataSimulator";
/* The frontend file name, don't change it */
const char *frontend_file_name = __FILE__;
/* frontend_loop is called periodically if this variable is TRUE */
BOOL frontend_call_loop = FALSE;
/* a frontend status page is displayed with this frequency in ms */
INT display_period = 1000;
/* maximum event size produced by this frontend */
INT max_event_size = 1024 * 1014;
/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * max_event_size;
/* buffer size to hold events */
INT event_buffer_size = 5 * max_event_size;
// Define a vector to store 16-bit words
std::vector<int16_t> data; // Define a global vector to store 16-bit signed integers
// Global variable to keep track of the last poll time
std::chrono::steady_clock::time_point last_poll_time;
const std::chrono::microseconds polling_interval(300); // Poll every 300 microseconds
// Random number generator for generating data
std::mt19937 generator;
std::uniform_int_distribution<short> distribution(-32768, 32767); // Define the range of random values (short range)
// Global variable to hold the zero buffer
std::vector<short> zero_buffer;
/*-- Function declarations -----------------------------------------*/
INT frontend_init(void);
INT frontend_exit(void);
INT begin_of_run(INT run_number, char *error);
INT end_of_run(INT run_number, char *error);
INT pause_run(INT run_number, char *error);
INT resume_run(INT run_number, char *error);
INT frontend_loop(void);
INT read_trigger_event(char *pevent, INT off);
INT read_periodic_event(char *pevent, INT off);
INT poll_event(INT source, INT count, BOOL test);
INT interrupt_configure(INT cmd, INT source, POINTER_T adr);
/*-- Equipment list ------------------------------------------------*/
BOOL equipment_common_overwrite = TRUE;
EQUIPMENT equipment[] = {
{"Data Simulator", /* equipment name */
{2, 0, /* event ID, trigger mask */
"SYSTEM", /* event buffer */
EQ_POLLED, /* equipment type */
0, /* event source */
"MIDAS", /* format */
TRUE, /* enabled */
RO_RUNNING | RO_TRANSITIONS | /* read when running and on transitions */
RO_ODB, /* and update ODB */
10, /* read every sec */
0, /* stop run after this event limit */
0, /* number of sub events */
TRUE, /* log history */
"", "", "",},
read_trigger_event /* readout routine */
},
{""}
};
/*-- Trigger Update ------------------------------------------------*/
void trigger_update(INT hDB, INT hkey,void*)
{
}
/*-- Frontend Init -------------------------------------------------*/
int frontend_init() {
// Open the file for reading
std::ifstream inputFile("fake_data.txt");
if (!inputFile) {
std::cerr << "Error opening the file." << std::endl;
return 1;
}
std::cout << "Reading and converting data:" << std::endl;
std::string line;
while (std::getline(inputFile, line)) {
std::istringstream iss(line);
std::string token;
while (std::getline(iss, token, ',')) {
int16_t value;
std::istringstream(token) >> value;
data.push_back(value);
}
}
// Print the converted data
for (int i = 0; i < data.size(); i++) {
std::cout << " " << data[i];
}
// Close the file
inputFile.close();
if (data.empty()) {
std::cerr << "No data was converted." << std::endl;
} else {
std::cout << std::endl << "Conversion completed." << std::endl;
}
// Initialize random number generator
std::random_device rd; // Obtain a random number from hardware
generator = std::mt19937(rd()); // Seed the generator
// Define the total number of zero data points
const int total_data_size = 50000; // Adjust size as needed
// Create and initialize the buffer of zeros
zero_buffer.resize(total_data_size, 0);
return SUCCESS;
}
/*-- Frontend Exit -------------------------------------------------*/
INT frontend_exit()
{
return SUCCESS;
}
/*-- Begin of Run --------------------------------------------------*/
INT begin_of_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- End of Run ----------------------------------------------------*/
INT end_of_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- Pause Run -----------------------------------------------------*/
INT pause_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- Resume Run ----------------------------------------------------*/
INT resume_run(INT run_number, char *error)
{
return SUCCESS;
}
/*-- Frontend Loop -------------------------------------------------*/
INT frontend_loop()
{
/* if frontend_call_loop is true, this routine gets called when
the frontend is idle or once between every event */
return SUCCESS;
}
/*------------------------------------------------------------------*/
/********************************************************************\
Readout routines for different events
\********************************************************************/
/*-- Trigger event routines ----------------------------------------*/
INT poll_event(INT source, INT count, BOOL test) {
// Get the current time
auto now = std::chrono::steady_clock::now();
// Check if enough time has passed since the last poll
if (now - last_poll_time >= polling_interval) {
// Update the last poll time
last_poll_time = now;
// Return TRUE to indicate that an event is available
return TRUE;
}
// For test calls, report that no event is available
if (test) {
return FALSE;
}
// Otherwise, return FALSE to indicate no event available
return FALSE;
}
/*-- Interrupt configuration ---------------------------------------*/
INT interrupt_configure(INT cmd, INT source, POINTER_T adr)
{
switch (cmd) {
case CMD_INTERRUPT_ENABLE:
break;
case CMD_INTERRUPT_DISABLE:
break;
case CMD_INTERRUPT_ATTACH:
break;
case CMD_INTERRUPT_DETACH:
break;
}
return SUCCESS;
}
/*-- Event readout -------------------------------------------------*/
INT read_trigger_event(char *pevent, INT off)
{
short *pdata;
// Init bank structure
bk_init32(pevent);
// Create a bank named "CR00" and specify the data type as TID_SHORT
bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);
// Use memcpy to copy the buffer of zeros into the MIDAS bank
memcpy(pdata, zero_buffer.data(), zero_buffer.size() * sizeof(short));
// Adjust pdata pointer
pdata += zero_buffer.size(); // Move the pointer past the copied data
// Close the bank
bk_close(pevent, pdata);
return bk_size(pevent);
}
/*-- Periodic event ------------------------------------------------*/
INT read_periodic_event(char *pevent, INT off)
{
short *pdata; // Change the data type to short
// Init bank structure
bk_init32(pevent);
// Create a bank named "CR00" and specify the data type as TID_SHORT
bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);
// Repeat the data block 400 times
for (int repeat = 0; repeat < 400; repeat++) {
for (int i = 0; i < data.size(); i++) {
*pdata++ = data[i];
}
}
// Close the bank
bk_close(pevent, pdata);
return bk_size(pevent);
}
|
Draft | 05 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations? |
Thank you, this was very helpful.
> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"
Thanks, I thought that was just for periodic triggering (or at least that's how I've used it in C++ frontends). Changing this allowed me to get past the 100Hz event rate cap I described.
> If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.
>
>
> However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.
>
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.
>
>
> Cheers,
> Ben
> P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.
I had tested the way you described at first, then later changed
> P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:
>
> python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
> 20 loops, best of 5: 15.3 msec per loop |
2829 | 06 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations? | Thanks for the responses, they were very helpful.
>First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll
call it as often as possible.
Thanks, this solves the event rate limitation I described. I didn't think to change this because the "period" did not affect the observed rate in C++ (and now I know why, thanks to Stefan).
A couple more questions:
1. For me,
python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
10 loops, best of 3: 43.7 msec per loop
which suggests my maximum data rate is about 1.25 MB * 1000/43.7 Hz = 23 MB/s (?). But I see data rates up to 60 MB/s with a python frontend. Am I misinterpreting the meaning of this result?
2. I can effectively bypass the rate limitations in python by running two concurrent frontends. For example, with one python frontend at best I can generate
60 MB/s of data (setting "period" to 0 now); but with two frontends I can double this to 120 MB/s. This implies one python frontend is not bottlenecked by
hardware limitations in my case.
Am I doing something wrong to artificially bottleneck my frontends? Perhaps there's a multi-threading solution I can implement to avoid needing multiple
frontends?
Thanks,
Jack |
2885 | 05 Nov 2024 | Jack Carlton | Forum | How to properly write a client that listens for events on a given buffer? | If there's some template for writing a client to access event data, that would be very useful (and you can probably just ignore the context I gave below in that case).
Some context:
Quite a while ago, I wrote the attached "data pipeline" client whose job was to
listen for events, copy their data, and pipe them to a python script. I believe I
just stole bits and pieces from mdump.cxx to accomplish this. Later I wrote the
attached wrapper class "MidasConnector.cpp" and a main.cpp to generalize
data_pipeline.cxx a bit. There were a lot of iterations to the code where I had the
below problems; so don't take the logic in the attached code as the exact code that
caused the issues below.
However, I'm unable to resolve a couple issues:
1. If a timeout is set, everything will work until that timeout is reached. Then
regardless of what kind of logic I tried to implement (retry receiving event,
disconnect and reconnect client, etc.) the client would refuse to receive more data.
2. When I ctrl-C main, it hangs; this is expected because it's stuck in a while
loop. But because I can't set a timeout I have to ctrl-C twice; this would
occasionally corrupt the ODB which was not ideal. I was able to get around this with
some impractical solution involving ncurses I believe.
Thanks,
Jack |
Attachment 1: data_pipeline_(2).cxx
|
#include "midas.h"
#include "msystem.h"
#include "mrpc.h"
#include "mdsupport.h"
#include <iostream>
#include <unistd.h>
#include <stdio.h> // Added for popen
#include <stdlib.h> // Added for malloc and free
INT hBufEvent;
void process_event(EVENT_HEADER *pheader) {
printf("Received event #%d\n", pheader->serial_number);
printf("Event ID: %d\n", pheader->event_id);
printf("Data Size: %d bytes\n", pheader->data_size);
printf("Timestamp: %d\n", pheader->time_stamp);
printf("Trigger mask: %d\n", pheader->trigger_mask);
// Print a marker to indicate the start of serialized data
printf("EVENT_DATA_START\n");
// Serialize and print the event data
int* eventData = (int*)((char*)pheader + sizeof(EVENT_HEADER));
int numIntegers = (pheader->data_size - sizeof(EVENT_HEADER)) / sizeof(int);
for (int i = 0; i < 8; ++i) {
printf("%d ", eventData[i]);
}
printf("\n");
// Process the event here
}
int main() {
HNDLE hDB, hKey;
char host_name[HOST_NAME_LENGTH], expt_name[NAME_LENGTH], str[80];
char buf_name[32] = EVENT_BUFFER_NAME, rep_file[128];
unsigned int status, start_time, stop_time;
INT ch, request_id, size, get_flag, action, single, i;
// Define the maximum event size you expect to receive
INT max_event_size = 4000;
// Allocate memory for storing event data dynamically
void* event_data = malloc(max_event_size);
printf("1\n");
/* Get if existing the pre-defined experiment */
cm_get_environment(host_name, sizeof(host_name), expt_name, sizeof(expt_name));
// Print host_name
printf("host_name = %s\n", host_name);
// Print expt_name
printf("expt_name = %s\n", expt_name);
printf("2\n");
/* connect to the experiment */
status = cm_connect_experiment(host_name, expt_name, "data_pipeline", 0);
if (status != CM_SUCCESS) {
return 1;
}
printf("3\n");
status = bm_open_buffer(buf_name, DEFAULT_BUFFER_SIZE, &hBufEvent);
if (status != BM_SUCCESS && status != BM_CREATED) {
cm_msg(MERROR, "data_pipeline", "Cannot open buffer \"%s\", bm_open_buffer() status %d", buf_name, status);
return 1;
}
printf("4\n");
/* set the buffer cache size if requested */
bm_set_cache_size(hBufEvent, 100000, 0);
printf("5\n");
/* place a request for a specific event id */
status = bm_request_event(hBufEvent, EVENTID_ALL, TRIGGER_ALL, GET_ALL, &request_id, NULL); // Use NULL as the callback routine
printf("6\n");
printf("status = %d\n",status);
// Open a pipe to a Python script for data transfer
FILE* pipe = popen("python3 data_pipeline.py", "w");
if (pipe == NULL) {
perror("popen");
return 1;
}
// Enter the event processing loop
while (1) {
// Use the address of max_event_size in bm_receive_event
status = bm_receive_event(hBufEvent, event_data, &max_event_size, BM_WAIT); // Wait for new data indefinitely
if (status == BM_SUCCESS) {
//process_event((EVENT_HEADER*)((char*)event_data + sizeof(EVENT_HEADER)));
// Send the event data to the Python script via the pipe
fprintf(pipe, "EVENT_DATA_START\n");
int* eventData = (int*)((char*)event_data + sizeof(EVENT_HEADER));
int numIntegers = (max_event_size - sizeof(EVENT_HEADER)) / sizeof(int);
for (int i = 4; i < 12; ++i) {
fprintf(pipe, "%d ", eventData[i]);
}
fprintf(pipe, "\n");
fflush(pipe); // Flush the buffer to ensure data is sent immediately
} else {
printf("Error receiving event: %d\n", status);
break; // Exit the loop if an error occurs
}
}
// Close the pipe
pclose(pipe);
// Free the dynamically allocated memory
free(event_data);
cm_disconnect_experiment();
printf("7\n");
return 1;
}
|
Attachment 2: MidasConnector.cpp
|
#include "MidasConnector.h"
MidasConnector::MidasConnector(const char* clientName) {
// Initialize client name
strncpy(client_name_, clientName, NAME_LENGTH);
// Get host name and experiment name from environment
cm_get_environment(host_name_, sizeof(host_name_), experiment_name_, sizeof(experiment_name_));
// Initialize other private variables if needed
event_id = EVENTID_ALL; // Initialize with default value
trigger_mask = TRIGGER_ALL; // Initialize with default value
sampling_type = GET_ALL; // Initialize with default value (renamed from get_flags)
buffer_size = DEFAULT_BUFFER_SIZE; // Initialize with default value
timeout_millis = BM_WAIT;
strncpy(buffer_name, EVENT_BUFFER_NAME, sizeof(buffer_name)); // Initialize with default value
}
// Getters for the private variables
short MidasConnector::getEventId() const {
return event_id;
}
short MidasConnector::getTriggerMask() const {
return trigger_mask;
}
int MidasConnector::getSamplingType() const {
return sampling_type;
}
int MidasConnector::getBufferSize() const {
return buffer_size;
}
const char* MidasConnector::getBufferName() const {
return buffer_name;
}
int MidasConnector::getTimeout() const {
return timeout_millis;
}
HNDLE MidasConnector::getEventBufferHandle() const {
return hBufEvent;
}
// Setters for the private variables
void MidasConnector::setEventId(short eventId) {
event_id = eventId;
}
void MidasConnector::setTriggerMask(short triggerMask) {
trigger_mask = triggerMask;
}
void MidasConnector::setSamplingType(int samplingType) {
sampling_type = samplingType;
}
void MidasConnector::setBufferSize(int bufferSize) {
buffer_size = bufferSize;
}
void MidasConnector::setBufferName(const char* bufferName) {
strncpy(buffer_name, bufferName, sizeof(buffer_name));
}
void MidasConnector::setTimeout(int timeoutMillis) {
timeout_millis = timeoutMillis;
}
void MidasConnector::setEventBufferHandle(HNDLE eventBufferHandle) {
hBufEvent = eventBufferHandle;
}
bool MidasConnector::ConnectToExperiment() {
// Connect to the experiment
int status = cm_connect_experiment(host_name_, experiment_name_, client_name_, NULL);
if (status != CM_SUCCESS) {
// Handle connection error
return false;
}
return true;
}
void MidasConnector::DisconnectFromExperiment() {
// Disconnect from the experiment
cm_disconnect_experiment();
}
bool MidasConnector::OpenEventBuffer() {
int status = bm_open_buffer(buffer_name, buffer_size, &hBufEvent);
if (status != BM_SUCCESS && status != BM_CREATED) {
cm_msg(MERROR, client_name_, "Cannot open buffer \"%s\", bm_open_buffer() status %d", buffer_name, status);
return false;
}
return true;
}
bool MidasConnector::SetCacheSize(int cacheSize) {
bm_set_cache_size(hBufEvent, cacheSize, 0);
return true;
}
bool MidasConnector::RequestEvent() {
int request_id;
int status = bm_request_event(hBufEvent, event_id, trigger_mask, sampling_type, &request_id, NULL);
return status == BM_SUCCESS;
}
bool MidasConnector::ReceiveEvent(void* eventBuffer, int& maxEventSize) {
int status = bm_receive_event(hBufEvent, eventBuffer, &maxEventSize, timeout_millis);
return status == BM_SUCCESS;
}
|
Attachment 3: main.cpp
|
#include "event_processor/EventProcessor.h"
#include "data_transmitter/DataTransmitter.h"
#include "midas_connector/MidasConnector.h"
#include "json.hpp"
#include <fstream>
INT hBufEvent1;
INT hBufEvent2;
// Function to initialize MIDAS and open an event buffer
bool initializeMidas(MidasConnector& midasConnector, const nlohmann::json& config) {
// Set the MidasConnector properties based on the config
midasConnector.setEventId(config["eventId"].get<short>());
midasConnector.setTriggerMask(config["triggerMask"].get<short>());
midasConnector.setSamplingType(config["samplingType"].get<int>());
midasConnector.setBufferSize(config["bufferSize"].get<int>());
midasConnector.setBufferName(config["bufferName"].get<std::string>().c_str());
midasConnector.setBufferSize(config["bufferSize"].get<int>());
// Call the ConnectToExperiment method
if (!midasConnector.ConnectToExperiment()) {
return false;
}
// Call the OpenEventBuffer method
if (!midasConnector.OpenEventBuffer()) {
return false;
}
// Set the buffer cache size if requested
midasConnector.SetCacheSize(config["cacheSize"].get<int>());
// Place a request for a specific event id
if (!midasConnector.RequestEvent()) {
return false;
}
return true;
}
int main() {
// Read configuration from a JSON file
nlohmann::json config;
std::ifstream configFile("config.json");
configFile >> config;
configFile.close();
// Initialize MidasConnector and connect to the MIDAS experiment
MidasConnector midasConnector(config["clientName"].get<std::string>().c_str());
if (!initializeMidas(midasConnector, config)) {
printf("Error: Failed to initialize MIDAS.\n");
return 1;
}
// Read the maximum event size from the JSON configuration
INT max_event_size = config["maxEventSize"].get<int>();
// Allocate memory for storing event data dynamically
void* event_data = malloc(max_event_size);
// Initialize EventProcessor with detector mapping file and verbosity flag
EventProcessor eventProcessor(config["detectorMappingFile"].get<std::string>(), config["verbose"].get<bool>());
// Initialize DataTransmitter with the ZeroMQ address
DataTransmitter dataPublisher(config["zmqAddress"].get<std::string>());
// Connect to the ZeroMQ server
if (!dataPublisher.bind()) {
// Handle connection error
printf("Error: Failed to bind to port %s.\n", config["zmqAddress"].get<std::string>().c_str());
return 1;
} else {
printf("Connected to the ZeroMQ server.\n");
}
// Event processing loop
while (true) {
midasConnector.ReceiveEvent(event_data, max_event_size);
// Process the data once we have it
eventProcessor.processEvent(event_data, max_event_size);
// Serialize the event data with EventProcessor and store it in serializedData
std::string serializedData = eventProcessor.getSerializedData();
// Send the serialized data to the ZeroMQ server with DataTransmitter
if (!dataPublisher.publish(serializedData)) {
// Handle send error
printf("Error: Failed to send serialized data.\n");
}
}
// Cleanup and finalize your application
midasConnector.DisconnectFromExperiment(); // Disconnect from the MIDAS experiment
return 0;
}
|
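A minimal sketch of the kind of consumer loop the post above asks about, built from the same calls used in the attachments (cm_get_environment, cm_connect_experiment, bm_open_buffer, bm_request_event, bm_receive_event). Two details are assumptions to check against your MIDAS version: that bm_receive_event() returns BM_ASYNC_RETURN when the millisecond timeout expires with no event, and that cm_yield() returns RPC_SHUTDOWN when the client is asked to stop. The signal handler lets a single ctrl-C leave the loop cleanly instead of killing the client mid-call:

#include <atomic>
#include <csignal>
#include <vector>
#include "midas.h"

static std::atomic<bool> stop_requested{false};

int main()
{
   std::signal(SIGINT, [](int) { stop_requested = true; });   // make ctrl-C request a clean stop

   char host[HOST_NAME_LENGTH], expt[NAME_LENGTH];
   cm_get_environment(host, sizeof(host), expt, sizeof(expt));
   if (cm_connect_experiment(host, expt, "event_client", NULL) != CM_SUCCESS)
      return 1;

   HNDLE hbuf;
   bm_open_buffer("SYSTEM", DEFAULT_BUFFER_SIZE, &hbuf);
   INT request_id;
   bm_request_event(hbuf, EVENTID_ALL, TRIGGER_ALL, GET_ALL, &request_id, NULL);

   std::vector<char> buf(4 * 1024 * 1024);   // must be large enough for the biggest event
   while (!stop_requested) {
      INT size = (INT) buf.size();
      INT status = bm_receive_event(hbuf, buf.data(), &size, 1000 /* ms */);
      if (status == BM_SUCCESS) {
         EVENT_HEADER *ph = (EVENT_HEADER *) buf.data();
         // hand ph and the bank data following it to the consumer here
      } else if (status == BM_ASYNC_RETURN) {
         // timeout with no event: keep polling instead of giving up
      } else {
         cm_msg(MERROR, "event_client", "bm_receive_event() returned %d", status);
         break;
      }
      if (cm_yield(0) == RPC_SHUTDOWN)   // keep RPC alive, honour a remote shutdown request
         break;
   }

   cm_disconnect_experiment();
   return 0;
}

The design point is that a timeout is treated as "try again" rather than as an error, and ctrl-C only sets a flag, so the client always leaves the loop and disconnects cleanly through cm_disconnect_experiment().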
1859 | 23 Mar 2020 | Ivo Schulthess | Forum | Save data to FTP | Dear all
I try to save data to an FTP server but don't get any data on the server. Midas does not complain or message any error but also nothing gets saved. Does somebody have experience with this? I use the following settings for the ODB mlogger channel settings: Type: FTP, Filename: server.com, 21, user, pw, ., run%06d.mid, Format: MIDAS, Output: FILE. What would be the Output: FTP setting for? I tried this but it does not work at all.
Thanks in advance,
Ivo |
1861 | 24 Mar 2020 | Ivo Schulthess | Forum | Save data to FTP | > > I try to save data to an FTP server but don't get any data on the server. Midas does not complain or message any error but also nothing gets saved. Does somebody have experience with this? I use the following settings for the ODB mlogger channel settings: Type: FTP, Filename: server.com, 21, user, pw, ., run%06d.mid, Format: MIDAS, Output: FILE. What would be the Output: FTP setting for? I tried this but it does not work at all.
>
> Hi, Ivo, good to hear from a midas user in these difficult times.
>
> We do not use FTP at TRIUMF, but Stefan asked us to keep FTP alive and working, so we should be able
> to get you going. I will try to find the FTP instructions for you, I am pretty sure I have them somewhere.
>
> In the mean time, I am very curious why you are using a FTP to record data, is it some kind
> of data appliance where simplest input for data is FTP? Using NFS does not work or is too hard?
>
> Also for example at CERN, we write data to Castor and EOS, for this mlogger writes data to local disk,
> then the lazylogger runs a script to move the data to Castor and EOS. The example lazylogger
> scripts for this are in the MIDAS "progs" directory. But maybe you do not have a local disk and this would
> not work for you.
>
> In other news, I hope to work on mlogger and lazylogger support for cloud storage (swift and s3 apis?),
> would that be useful as replacement for FTP?
>
> K.O.
>
Good Morning Konstantin
Thanks for the fast reply. Yes, it is; Midas is one of the things we can at least improve from home.
Our experiment is planned to measure (soon) at ILL. Now since we don't use the equipment/detector from the
beamline but our own, all the data from Midas is saved on the local drive. This is fine in the first instance
but then we also need proper backup. Since our experiment is quite small, the easiest solution I came up with
is to copy all of our data to the ILL storage which has enough space and is properly backed up. The ILL data
storage allows only SFTP connections, nothing else. Since Midas has the FTP feature, having a separate FTP
logger channel seemed the easiest way to go.
Thanks for your input, I will look into how to mount SFTP and then this would also be a solution.
Since ILL only provides access via SFTP and everything else is either nonexistent or blocked (not even ssh is possible), this is the only thing we can work with for now.
Best regards,
Ivo |
1863 | 24 Mar 2020 | Ivo Schulthess | Forum | Save data to FTP | > Logging directly from the midas logger to FTP is a bit cumbersome. In case of delays during login etc. this can throttle the whole DAQ chain.
> What we use in our lab is to write to local disk, then use the lazylogger (https://midas.triumf.ca/MidasWiki/index.php/Lazylogger) to copy the
> local files to a remote FTP server. This way we de-couple data taking from backup, making the system much more swift.
>
> Best,
> Stefan
Yes, I see this now too. I will therefore try to set up the lazylogger properly. |
1874 | 07 Apr 2020 | Ivo Schulthess | Suggestion | Sequencer loop break | I am using the Midas sequencer to run subsequent measurements in a loop, without
knowing how many iterations in advance. Therefore, I am using the "infinity"
option. Since I have other commands after the loop, it would be nice to have the
possibility to break the loop, but let the sequencer then finish the rest of the
commands.
Cheers,
Ivo |
1876 | 23 Apr 2020 | Ivo Schulthess | Suggestion | Sequencer loop break | > You can do that with the "GOTO" statement, jumping to the first line after the loop.
>
> Here is a working example:
>
>
> LOOP runs, 5
> WAIT Seconds 3
> IF $runs > 2
> GOTO 7
> ENDIF
> ENDLOOP
> MESSAGE "Finished", 1
>
> Best,
> Stefan
Hi Stefan
Thanks for your answer. As I understand it, this has to be in the sequence script before running. So, in the end, it is no different from just saying "LOOP runs, 2", and therefore the number of runs has to be known in advance as well. Or is there an option to change the script at runtime? What I would like is to start a sequence with "LOOP runs, infinite" and, when I come back to the experiment after falling asleep, be able to break the loop after the next iteration but still execute everything after ENDLOOP, i.e. the MESSAGE statement in your example. Because if I do a "Stop after current run", this does not seem to happen.
Best, Ivo |
1942 | 10 Jun 2020 | Ivo Schulthess | Forum | slow-control equipment crashes when running multi-threaded on a remote machine | Dear all
To reduce the time needed by Midas between runs, we want to change some of our periodic equipment to multi-threaded slow-control equipment. To do that, I wanted to start from the slowcont example with the multi/hv class driver, the nulldev device driver, and the null bus driver. The example runs fine as it is on the local Midas machine and also on remote machines. When adding the DF_MULTITHREAD flag to the device driver list, it no longer runs on remote machines but aborts with the following assertion:
scfe: /home/neutron/packages/midas/src/midas.cxx:1569: INT cm_get_path(char*, int): Assertion `_path_name.length() > 0' failed.
Running the frontend in GDB and setting a breakpoint at the exit leads to the following backtrace:
(gdb) where
#0 0x00007ffff68d599f in raise () from /lib64/libc.so.6
#1 0x00007ffff68bfcf5 in abort () from /lib64/libc.so.6
#2 0x00007ffff68bfbc9 in __assert_fail_base.cold.0 () from /lib64/libc.so.6
#3 0x00007ffff68cde56 in __assert_fail () from /lib64/libc.so.6
#4 0x000000000041efbf in cm_get_path (path=0x7fffffffd060 "P\373g", path_size=256)
at /home/neutron/packages/midas/src/midas.cxx:1563
#5 cm_get_path (path=path@entry=0x7fffffffd060 "P\373g", path_size=path_size@entry=256)
at /home/neutron/packages/midas/src/midas.cxx:1563
#6 0x0000000000453dd8 in ss_semaphore_create (name=name@entry=0x7fffffffd2c0 "DD_Input",
semaphore_handle=semaphore_handle@entry=0x67f700 <multi_driver+96>)
at /home/neutron/packages/midas/src/system.cxx:2340
#7 0x0000000000451d25 in device_driver (device_drv=0x67f6a0 <multi_driver>, cmd=<optimized out>)
at /home/neutron/packages/midas/src/device_driver.cxx:155
#8 0x00000000004175f8 in multi_init(eqpmnt*) ()
#9 0x00000000004185c8 in cd_multi(int, eqpmnt*) ()
#10 0x000000000041c20c in initialize_equipment () at /home/neutron/packages/midas/src/mfe.cxx:827
#11 0x000000000040da60 in main (argc=1, argv=0x7fffffffda48)
at /home/neutron/packages/midas/src/mfe.cxx:2757
I also tried to use the generic class driver, which results in the same error. I am not sure if this is a problem of the multi-threaded frontend running on a remote machine or something in our system that is not properly set up. Anyway, I am running out of ideas on how to solve this and would appreciate any input.
Thanks in advance,
Ivo |
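For readers following along: the DF_MULTITHREAD flag mentioned above is set per device in the equipment's device-driver list. A rough sketch along the lines of the stock slowcont example the post refers to (driver names, channel counts, and include paths are illustrative and may differ in your MIDAS tree):

#include "midas.h"
#include "mfe.h"
#include "multi.h"     // class driver
#include "nulldev.h"   // dummy device driver
#include "null.h"      // dummy bus driver

/* device driver list for the "multi" class driver; OR DF_MULTITHREAD into the
   flags to run each device driver in its own thread */
DEVICE_DRIVER multi_driver[] = {
   {"Input",  nulldev, 4, null, DF_INPUT  | DF_MULTITHREAD},
   {"Output", nulldev, 2, null, DF_OUTPUT | DF_MULTITHREAD},
   {""}
};

The backtrace above shows the crash happening while this list is being initialised (multi_init -> device_driver -> ss_semaphore_create), which is where the per-device semaphores are created when DF_MULTITHREAD is set.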
1946 | 12 Jun 2020 | Ivo Schulthess | Forum | slow-control equipment crashes when running multi-threaded on a remote machine | Thank you both once again for the very fast answers. I tested the example on the local machine and it works perfectly fine. In the meantime I also created two new drivers for our devices
and everything works with them; the improvement in time is significant, and I will create drivers for all our devices where possible. If they are in a working state I can also provide them to add to the Midas drivers. Of course, if it were possible to run the front-end on our remote machines as well, this would be even better. I am not experienced in any multi-threaded programming, but if I can provide any help or input, please let me know.
Have a great weekend,
Ivo |
1966 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid | Dear all
We just started our beam time at ILL and just found yesterday that for certain
settings of our detector the data is not saved into the .mid files. Running "mdump
-l 10" online we see the data coming in as they should. Nevertheless, if we run
"mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are
missing. Any ideas where the data could get lost?
Thanks in advance,
Ivo |
1968 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid | > > Dear all
> >
> > We just started our beam time at ILL and just found yesterday that for certain
> > settings of our detector the data is not saved into the .mid files. Running "mdump
> > -l 10" online we see the data coming in as they should. Nevertheless, if we run
> > "mdump -x runXXXXXX.mid" offline, the data file has no events and the banks are
> > missing. Any ideas where the data could go lost?
> >
> > Thanks in advance,
> > Ivo
>
> Have you checked
>
> /Logger/Channels/0/Settings/Event ID = -1
> /Logger/Channels/0/Settings/Trigger mask = -1
>
> If these settings are not -1, they filter the data stream for certain events and trigger
> masks.
>
> Stefan
Good morning Stefan
Both set to -1. We only have one logging channel. If we run a sequence with a few runs and the
same settings, sometimes data is in the .mid file and sometimes it is not.
Best,
Ivo |
1970 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid | > Then I'm running out of ideas. Things I would check:
>
> - Are the file sizes about the same?
>
> - When you dump the .mid file, you do you see your bank names?
>
> This would tell you if the events are really missing or if mdump would just not find them.
>
> But I guess without being able to debug the system at ILL I cannot be of any more help. You are the
> first one reporting such a problem, so it must have to do with your local setup.
>
> Stefan
So I did a quick check. The file size is about the same (322K and 329K). When I dump the .mid I don't see
the banks. It only prints two lines with "------ Event# 0 ------" and "------ Event# 1 ------" whereas for
the file with data I get the two banks with all the data. Our online analyzer also fails to see the banks.
Is there another way to check what is in the .mid file?
Best,
Ivo |
1972 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid | > with "dump" I meant a true object dump like "hexdump -C run000001.mid". I produced a file with ADC0 and TDC0
> banks (that's the example from the distribution under exampels/experiments/frontend.cxx), and I get
>
> ....
> 00024220 01 00 00 00 41 44 43 30 04 00 08 00 eb 06 35 04 |....ADC0......5.|
> 00024230 31 09 4f 06 54 44 43 30 04 00 08 00 93 04 fb 07 |1.O.TDC0........|
> 00024240 5c 09 88 0b 01 00 00 00 01 00 00 00 2a 0b 31 5f |\...........*.1_|
> 00024250 28 00 00 00 20 00 00 00 01 00 00 00 41 44 43 30 |(... .......ADC0|
> 00024260 04 00 08 00 c3 09 24 05 85 05 f3 06 54 44 43 30 |......$.....TDC0|
> 00024270 04 00 08 00 88 08 2d 03 3b 0d d6 02 01 00 00 00 |......-.;.......|
> 00024280 02 00 00 00 2a 0b 31 5f 28 00 00 00 20 00 00 00 |....*.1_(... ...|
> 00024290 01 00 00 00 41 44 43 30 04 00 08 00 a5 0a 69 09 |....ADC0......i.|
>
> where you clearly see the ADC0 and TDC0 banks.
>
> Stefan
So at least I learned something new. I tried it with hexdump and the banks are nonexistent in the .mid file. I only have the ODB inside the file. The 7 KB difference in size is actually just about what I expect the data to be (1792 x 4 bytes).
Best, Ivo |
1975 | 10 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid | > Have you tried longer files? Maybe a few 100 MB or so. Maybe a buffer is not flushed correctly at the end of a run.
Yes, I did. This 7 KB of the data bank is about the limit. If we go only 1 KB higher it seems that we save all data. In
our specific case, this is the number of time bins (256 pixels with 7 time bins results in data loss, with 8 time bins it
seems to be okay, data type is DWORD).
Of course, a workaround for us is to save at least 8 time bins and throw 7 of them away later on. Nevertheless, since we are only in the commissioning phase now, this is okay; I would just like to avoid data loss in the data-taking phase of the experiment, so knowing where the problem originates could help.
I did another test with another FE running that produces a lot of data. The behavior is the same though: if the bank size is less than about 8 KB, the bank is not saved anymore. But this is probably the expected behavior anyway, since it is a different FE that produces the data.
So if it is coming from the buffer, is there something I could change to test or solve the problem?
Best, Ivo |
1980 | 11 Aug 2020 | Ivo Schulthess | Bug Report | data missing in runXXXXXX.mid | > It would be good to pinpoint where the data is lost. This is the sequence:
>
> frontend user code -> mfe.c code -> SYSTEM buffer -> mlogger -> disk
>
> To see if correct data arrives to the SYSTEM buffer, run:
> mdump -z SYSTEM
>
> To see if mlogger is receiving events from the SYSTEM buffer, run:
> mlogger -v ### mlogger should report all events, history and data
>
> To see if mlogger writes events to disk, examine the disk file (in this case, you already did, data is not there).
>
> I would guess that your data does not make it out from the frontend (mdump shows "nothing"),
> if data were to arrive into the SYSTEM buffer, it would make it to disk, unless
> mlogger is misconfigured (but you already checked that).
>
> If you have trouble with the frontend framework code, you can try to switch from the mfe.c frontend
> to the newer c++ tmfe frontend (see progs/fetest_tmfe.cxx and progs/fetest_tmfe_thread.cxx).
>
> K.O.
Good evening
I tried to reproduce the behavior in a very simple FE but it did not work out. The next thing for me would be to take the FE that is producing this behavior and replace all the device communication and data with dummies. If the problem is still there, I would start to simplify as much as possible.
Following the inputs of KO, I pinpointed the data loss. The system buffer still gets the data but the mlogger does not write the data event. Then, of course, the data is also no longer present in the data file. Therefore, I checked the logger settings again: Event ID and Trigger Mask are still -1. Nothing else, at least from my point of view, is misconfigured. Nevertheless, if it helps, I can send my ODB settings.
While doing the tests just now, I found something else that can probably give a hint to the problem. The data is only lost if the time between two runs is long (a few seconds). As an example: if I run a sequence with a loop and, after the FE stops the run, the loop ends and the next run is started automatically, then only the first run has no data, which is the one after a longer time of no data taking. When I add a "WAIT Seconds 5" after the run before starting the next, no data is written to the disk for any run. I also found this once when adding a sleep(1) at the end of the FE readout function, but back then I did not think about it any further.
Best, Ivo |
|