ID | Date | Author | Topic | Subject
2840 | 13 Sep 2024 | Konstantin Olchanski | Bug Fix | rootana bitbucket build fixed
rootana bitbucket build is fixed, only a few minor build problems. I am using the 
root official docker image (which turned out to not work right out of the box 
because of a missing libvdt-dev package). K.O.
2839 | 12 Sep 2024 | Konstantin Olchanski | Bug Fix | bitbucket builds repaired
bitbucket builds work again, also added ubuntu-24 and almalinux-9.

two problems fixed:
- cmake file in examples/experiment was replaced by a non-working version
- unannounced change of strlcpy() to mstrlcpy() broke "make remoteonly"

P.S. I should also fix the rootana and the roody bitbucket builds.

K.O.
2838 | 11 Sep 2024 | Konstantin Olchanski | Forum | "Safe" abort of sequencer scripts
> We often use the MIDAS sequencer to temporarily control detector settings, such as:
> 
> * <change some setting>
> * WAIT 60 seconds
> * <revert setting to original value>
> 
> The question arises of what happens if the sequencer scripts gets aborted during that wait, preventing the value from being reset.

Common problem. Go has an elegant solution for it: the "defer" keyword.

https://go.dev/tour/flowcontrol/12
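
For illustration, a minimal C++ analogue of Go's defer (just a sketch of the idea; set_hv() is a hypothetical helper, and the MIDAS sequencer language itself has no such construct):

[CODE]
#include <cstdio>
#include <functional>
#include <unistd.h>
#include <utility>

// hypothetical stand-in for <change some setting>
void set_hv(double v) { std::printf("set_hv(%f)\n", v); }

// Defer runs its cleanup when the enclosing scope exits,
// even if the scope is left early by a return or an exception.
// Note: like Go's defer, this does not survive a hard kill.
struct Defer {
   std::function<void()> f;
   explicit Defer(std::function<void()> fn) : f(std::move(fn)) {}
   ~Defer() { f(); }
};

void scan_point() {
   set_hv(1500.0);                        // <change some setting>
   Defer revert([] { set_hv(1400.0); });  // schedule <revert setting to original value>
   sleep(60);                             // WAIT 60 seconds
}                                         // revert runs here, normal exit or not
[/CODE]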

K.O.
2837 | 11 Sep 2024 | Konstantin Olchanski | Info | mana.cxx
> Ok, no relevant complains so far, so I removed mana and rmana from the CMake build 
> process, but left the file mana.cxx still in the repository for educational 
> purposes ;-)

+1

K.O.
2836 | 11 Sep 2024 | Konstantin Olchanski | Info | Help parsing scdms_v1 data?
Look at the C++ implementation of the MIDAS data file reader, the code is very 
simple to follow.

Depending on how old your data files are, you may run into a problem with 
misaligned 32-bit data banks. The latest MIDAS creates BANK32A events where all 
banks are aligned to 64 bits. The old BANK32 format had banks alternating between 
aligned and misaligned. Hopefully you do not have data in the even older 16-bit 
BANK format.
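
For reference, the relevant bank headers, essentially as defined in midas.h (quoted from memory, so double-check against your midas.h; DWORD is MIDAS's 32-bit unsigned type). The 12-byte BANK32 header plus 8-byte data padding is what makes consecutive banks alternate between aligned and misaligned, while the 16-byte BANK32A header keeps everything on 64-bit boundaries:

[CODE]
typedef struct {            /* 32-bit banks (old BANK32 format) */
   char name[4];            /* bank name */
   DWORD type;              /* bank data type (TID_xxx) */
   DWORD data_size;         /* payload size in bytes */
} BANK32;                   /* 12-byte header -> alignment alternates */

typedef struct {            /* 64-bit aligned banks (BANK32A) */
   char name[4];
   DWORD type;
   DWORD data_size;
   DWORD reserved;          /* pads the header to 16 bytes */
} BANK32A;                  /* data always lands on a 64-bit boundary */
[/CODE]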

If you successfully make a data format description file for MIDAS, please post 
it here for the next user.

K.O.



[quote="Adrian Fisher"]Hi! I'm working on creating a ksy file to help with 
parsing some data, but I'm having trouble finding some information. Right now, I 
have it set up very rudimentary - it grabs the event header and then uses the 
data bank size to grab the size of the data, but then I'm needing additional 
padding after the data bank to reach the next event.
However, there's some irregularity in the "padding" between data banks that I 
haven't been able to find any documentation for. For some reason, after the data 
banks, there's sections of data of either 168 or 192 bytes, and it's seemingly 
arbitrary which size is used. 
I'm just wondering if anyone has any information about this so that I'd be able 
to make some more progress in parsing the data.
The data I'm working with can be found at 
https://github.com/det-lab/dataReaderWriter/blob/master/data/07180808_1735_F0001.mid.gz
And the ksy file that I've created so far is at 
https://github.com/det-lab/dataReaderWriter/blob/master/kaitai/ksy/scdms_v1.ksy

There's also a block of data after the odb that runs for 384 bytes that I'm 
unsure the purpose of, if anyone could point me to some information about that.

Thank you![/quote]
2835 | 11 Sep 2024 | Konstantin Olchanski | Suggestion | Improve Event Documentation
> I am writing a Rust based midas file reader however it was kind of hard to understand the full midas file 
> structure from the documentation.

MIDAS is old-school, when the code was the documentation.

This is very noticeable when you try to document things in MIDAS (as I have done many times).

For MIDAS data format, file level and bank level, best if you look at my midasio library (included with MIDAS 
git clone) and translate it to Rust directly. I think a Rust version of C++ midasio would be very welcome.

Many data fields in MIDAS files are mysterious and I reverse-engineered them the best I could.

The main problems were:
- data padding
- whether the "length" fields include padding or not (see the sketch below)
- identification of big-endian vs little-endian data
- probably something I forget
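
To make the padding point concrete, here is a sketch of how a midasio-style reader might walk BANK32A banks inside one event (my simplification, assuming valid little-endian input; data_size counts payload bytes only, and each payload is padded to the next 8-byte boundary):

[CODE]
#include <cstddef>
#include <cstdint>

struct Bank32a { char name[4]; uint32_t type; uint32_t data_size; uint32_t reserved; };

void walk_banks(const char* p, const char* end)
{
   while (p + sizeof(Bank32a) <= end) {
      const Bank32a* b = reinterpret_cast<const Bank32a*>(p);
      // "length" = payload only; the file pads the payload to 8 bytes
      size_t padded = (b->data_size + 7) & ~size_t(7);
      p += sizeof(Bank32a) + padded;
   }
}
[/CODE]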

K.O.
2834 | 11 Sep 2024 | Konstantin Olchanski | Bug Report | Multiple issues with mhist
I think I can offer some insight into your problems:

1) your mhist crash is due to the ODB timeout, it is probably set to 30 seconds in ODB /programs/mhist. you will 
have to make it bigger.

2) 1.5 years of files. yes. I have 10 years of files for ALPHA at CERN. and the number of files is a problem. 
But it should be better than the old system with 3 files per day (1000 files per year).

One solution you can try is symlinks. Assuming you have 10 years of history files in 10 per-year directories, you 
symlink as many of them as you need into the "current" directory, then remove the symlinks when you are done.

Why remove the symlinks? I use "ls" to read the list of history files, and Unix/Linux does not have a syscall for 
"give me the 100 files with the newest mtime". I have to read the whole directory, and that takes forever (with ZFS 
on HDD); it is quick with ZFS on SSD if the ZFS cache is hot (you can have a cron job run "ls" every 5 minutes to 
keep the ZFS cache hot).
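
To see why, here is roughly what finding the newest files costs without such a syscall: one stat() per directory entry (a plain-POSIX sketch, not the actual MIDAS history code):

[CODE]
#include <dirent.h>
#include <sys/stat.h>
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// return the n most recently modified regular files in dir
std::vector<std::string> newest_files(const std::string& dir, size_t n)
{
   std::vector<std::pair<time_t, std::string>> v;
   DIR* d = opendir(dir.c_str());
   if (!d) return {};
   while (dirent* e = readdir(d)) {
      std::string path = dir + "/" + e->d_name;
      struct stat st;
      if (stat(path.c_str(), &st) == 0 && S_ISREG(st.st_mode))
         v.push_back({st.st_mtime, path});   // one stat() per file: slow on a cold cache
   }
   closedir(d);
   std::sort(v.begin(), v.end(), std::greater<>());  // newest first
   if (v.size() > n) v.resize(n);
   std::vector<std::string> out;
   for (auto& p : v) out.push_back(p.second);
   return out;
}
[/CODE]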

Now that I wrote the above, I think I see a way to make it "automatic", let me ponder this. (plus I always 
wanted to implement compressed history files (using "free" lz4)).
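
For what it's worth, the lz4 block API is tiny; a sketch of compressing one record, assuming liblz4 is linked (illustration only, not existing MIDAS history code):

[CODE]
#include <lz4.h>
#include <stdlib.h>

/* compress one history record with the lz4 block API; returns compressed size, <= 0 on error */
int compress_record(const char* src, int src_size, char** out)
{
   int max = LZ4_compressBound(src_size);   /* worst-case output size */
   *out = (char*)malloc(max);
   if (!*out) return -1;
   return LZ4_compress_default(src, *out, src_size, max);
}
[/CODE]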

K.O.



[quote="Lukas Gerritzen"]I am having some trouble with mhist. I suppose that the problems are at least partially due to our specific 
needs which might exceed what has been tested. For context, in MEG II we have some 10^4 history variables in ~30 
different events. 

1. mhist -l crashes. After displaying around 7000 lines, I get the following error message:
[CODE]
[mhist,ERROR] [midas.cxx:5949:bm_validate_client_index,ERROR] My client index 10 in buffer 'SYSMSG' 
is invalid: client name '', pid 0 should be my pid 3773321
[mhist,ERROR] [midas.cxx:5952:bm_validate_client_index,ERROR] Maybe this client was removed by a 
timeout. See midas.log. Cannot continue, aborting...
Aborted (core dumped)
[/CODE]
Timing the execution shows around 33 seconds before the process is aborted.

I'm not sure if this would actually fix the problem, but while trying to circumvent the issue, I tried the 
following: [CODE]mhist -e "Xenon" -l[/CODE] This doesn't seem to be implemented. Listing only the variables of a 
single event would be nice 
regardless of our specific issue.

2. mhist and history files.
We have a directory with about 2500 history files (mhf_...dat) for the past 1.5 years. Older 
history files are archived in other directories with similar numbers of files. When trying to access them, I 
encountered two issues:
It seems like it is not possible to pass a "history directory" as an argument. To dump the history for a full 
year in the archive directory, I would need to run mhist many times with -f and then combine all the dumps.

If it really does not work, please consider this a feature request.
Also, even using single files does not work at the moment:
[CODE]
$ mhist -e "Xenon" -v "Det XeTmp 0-0" -t 100000 -s 200101 -p 250101 -f 
/data2/history/2022/mhf_1644698398_20220212_xenon.dat
ID 980316009, Aug 13 19:10:56, size 1851749486
[/CODE]
This command was supposed to show me the rough time frame covered in this particular history file. I was 
informed that the history files are in the new "FILE" format and mhist might not work with them properly.

tl;dr
[LIST]
[*] Bug: mhist -l crashes
[*] Bug: mhist -f does not work with "FILE" history format
[*] Feature request: mhist -e "Name" -l to only show variables of event "Name"
[*] Feature request: Set temporary history dir with a flag
[/LIST]

Lukas[/quote]
2833 | 11 Sep 2024 | Konstantin Olchanski | Forum | Python frontend rate limitations?
> > I'm trying to get a sense of the rate limitations of a python frontend.

forgot one more:

the c++ toolchain comes with extensive profiler tools aimed at answering the question "why is my 
program so slow, where is it spending all the time?". some of these tools go all the way down to 
the hardware level and report CPU cache misses, TLB flushes, context switches and any other 
hardware events that interrupt or slow down computations. the programmer then uses this 
information to restructure the code to avoid the worst slowdowns (i.e. avoid branch mis-
predictions, avoid cache misses, etc).

I doubt the python toolchain will ever have profiler tools as good.

K.O.
2832 | 11 Sep 2024 | Konstantin Olchanski | Forum | Python frontend rate limitations?
> 
> poll(INT count) {
>    for (i=0 ; i<count ; i++)
>       if (new_event())
>          return TRUE;
>    return FALSE;
> }

in the c++ frontend (tmfe.h) this loop usually runs in a separate thread, and I am now working on the linux magic to assign this thread maximum 
uninterruptible priority. otherwise on my Cyclone-V FPGA SoC I see 1-10 msec dropouts, I think from taking ethernet interrupts.
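
For reference, that linux magic might look roughly like this (an assumption about the approach, not the final tmfe code): give the polling thread SCHED_FIFO real-time priority, which requires root or CAP_SYS_NICE:

[CODE]
#include <pthread.h>
#include <sched.h>
#include <cstdio>

// promote the calling thread to maximum SCHED_FIFO priority
void make_thread_realtime()
{
   sched_param sp{};
   sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
   int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
   if (rc != 0)
      fprintf(stderr, "pthread_setschedparam() failed, error %d\n", rc);
}
[/CODE]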

K.O.
2831 | 11 Sep 2024 | Konstantin Olchanski | Forum | Python frontend rate limitations?
> I'm trying to get a sense of the rate limitations of a python frontend.

1) python is single-threaded; for ultimate performance, a MIDAS frontend (or any DAQ 
application) has to be multithreaded:
a) a thread with a busy loop to read the data and place it into a FIFO
b) a thread to read data from the FIFO and send it to the SYSTEM buffer shared memory or to 
mserver
c) a thread to respond to begin-run, end-run, etc. RPCs
d) probably a thread to recycle memory from thread (b) back to thread (a), if per-event 
malloc()/free() adds too much overhead (a minimal sketch of threads (a) and (b) follows below)
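
A minimal sketch of threads (a) and (b) above (read_hardware() and send_to_buffer() are hypothetical placeholders for the real readout and the bm_send_event()/mserver calls):

[CODE]
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

std::vector<char> read_hardware() { return std::vector<char>(64); }  // placeholder readout
void send_to_buffer(const std::vector<char>& ev) { (void)ev; }       // placeholder send

std::deque<std::vector<char>> fifo;
std::mutex mtx;
std::condition_variable cv;

void reader_thread()   // (a) busy loop: read data, place it into the FIFO
{
   for (;;) {
      std::vector<char> ev = read_hardware();
      { std::lock_guard<std::mutex> lk(mtx); fifo.push_back(std::move(ev)); }
      cv.notify_one();
   }
}

void sender_thread()   // (b) pop from the FIFO, ship it out
{
   for (;;) {
      std::unique_lock<std::mutex> lk(mtx);
      cv.wait(lk, [] { return !fifo.empty(); });
      std::vector<char> ev = std::move(fifo.front());
      fifo.pop_front();
      lk.unlock();
      send_to_buffer(ev);
   }
}

// in the frontend: std::thread(reader_thread).detach(); std::thread(sender_thread).detach();
[/CODE]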

2) data readout. A C++ AXI bus access is compiled into 1 instruction and results in 1 AXI 
bus operation. The comparable python code likely has much more overhead and slows you down.

3) event bank filling. A C++ for() loop is compiled into very compact machine code; a 
python loop cannot be, because each array element can be a random data type. This slows you down.

bottom line, there is a reason high-speed data acquisitions are written in C/C++, not 
in shell, perl, tcl/tk, or (today's favourite) python.

> The C++ frontend is about 100 times faster in both data and event rates.

This is as expected. You can probably improve python code to get closer to 10 times 
slower than C++. But consider:

a) will it be "fast enough" for the task?
b) learning C++ and optimizing python to within "2-3-10x slower than C++" may involve a 
similar amount of time and effort.

And you have not looked at the real-time properties of your frontend. You may discover 
that it's actually faster than you think, but occasionally stops for a millisecond (or 
two, or a hundred). Some applications are notorious for running memory garbage collection 
just at the wrong time.

I am working right now on exactly this problem. I have a 1 GHz ARM CPU (Cyclone-V FPGA) 
and I need to push data out at 100 Mbytes/sec while avoiding any bad real-time dropouts 
that cause the FPGA data FIFO to overflow. And I only have 2 CPU cores, 1 to read the 
FPGA FIFO, 1 to run the TCP/IP stack and the ethernet driver. There is no way this could 
be done in python.

K.O.
2830 | 11 Sep 2024 | Konstantin Olchanski | Info | News MSCB++ API
> Here is some example code:
> 
> #include "mscbxx.h"
>    f = m["In0"];     // name access
>    m["In0"] = 1.234;
> Any feedback is welcome.

Where is the example of error handling?

K.O.
2829 | 06 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations?
Thanks for the responses, they were very helpful.

>First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll 
call it as often as possible.

Thanks, this solves the event rate limitation I described. I didn't think to change this because the "period" did not affect the observed rate in C (and now 
I know why, thanks to Stefan).

A couple more questions:

1. 
For me, 
python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
10 loops, best of 3: 43.7 msec per loop

which suggests my maximum data rate is about 1.25 MB * 1000/43.7 Hz = 23 MB/s (?). But I see data rates up to 60 MB/s with a python frontend. Am I 
misinterpreting the meaning of this result?


2. I can effectively bypass the rate limitations in python by running two concurrent frontends. For example, with one python frontend at best I can generate 
60 MB/s of data (setting "period" to 0 now); but with two frontends I can double this to 120 MB/s. This implies one python frontend is not bottlenecked by 
hardware limitations in my case.

Am I doing something wrong to artificially bottleneck my frontends? Perhaps there's a multi-threading solution I can implement to avoid needing multiple 
frontends?


Thanks,
Jack
2828 | 05 Sep 2024 | Stefan Ritt | Forum | Python frontend rate limitations?
> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. 
> You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

Just for your general understanding: The "period" in the C framework works differently. It calls the poll function with a number, 
and then that number is used in the poll function like this (simplified):

poll(INT count) {
   for (i=0 ; i<count ; i++)
      if (new_event())
         return TRUE;
   return FALSE;
}

This ensures that polling is done as quickly as possible, even staying in the same function (poll) rather than being called from the 
framework in a loop (which would require a function call to poll each time). The "count" is determined by the framework
during startup such that the execution time of the poll() routine equals the "period". For example, if the period 
is 0.1, the count might be a few million, so that the poll routine returns immediately when a new event occurs or when
100 ms have expired. During the polling the frontend is "dead", meaning it cannot react to run transitions, for example. That's
why most experiments use 0.1-0.5 seconds. But this does NOT mean that you can only have 2-10 events per second; it means that
the reaction time of the frontend is at most 0.1-0.5 seconds, which is acceptable in most cases.
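
That calibration might look roughly like this (my approximation in mfe-style C from the description above, not the literal mfe.cxx code; period_ms stands for the equipment period in milliseconds):

[CODE]
DWORD t0 = ss_millitime();          /* MIDAS millisecond clock */
poll_event(0, 10000, TRUE);         /* test=TRUE: execute 10000 polls, discard the result */
DWORD dt = ss_millitime() - t0;     /* elapsed ms for 10000 polls */
INT count = (INT)(10000.0 * period_ms / (dt > 0 ? dt : 1));
[/CODE]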

Due to this design, the C frontend is capable of producing millions of events per second. It took me a while in the early 1990's
to work out that scheme, sitting in the "R" trailer at TRIUMF (old guys will remember...).

Best,
Stefan
Draft | 05 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations?
Thank you, this was very helpful.

> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

Thanks, I thought that was just for periodic triggering (or at least that's how I've used it in C++ frontends). Changing this allowed me to get past the 100Hz event rate cap I described.

> If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.
> 
> 
> However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.
> 
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.
> 
> 
> Cheers,
> Ben


> P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.
I had tested the way you described at first, then later changed

> P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:
> 
> python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
> 20 loops, best of 5: 15.3 msec per loop
2826 | 05 Sep 2024 | Ben Smith | Forum | Python frontend rate limitations?
> What limits the rate that poll_func is called in a python frontend? 

First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.


However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.

I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.


Cheers,
Ben


P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.


P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:

python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
20 loops, best of 5: 15.3 msec per loop
2825 | 05 Sep 2024 | Jack Carlton | Forum | Python frontend rate limitations?
I'm trying to get a sense of the rate limitations of a python frontend. I 
understand this will vary from system to system.

I adapted two frontends from the example templates, one in C++ and one in python. 
Both simply fill a midas bank with a fixed length array of zeros at a given polled 
rate. However, the C++ frontend is about 100 times faster in both data and event 
rates. This seems slow, even for an interpreted language like python. Furthermore, 
I can effectively increase the maximum rate by concurrently running a second 
python frontend (this is not the case for the C++ frontend). In short, there is 
some limitation with using python here unrelated to hardware.

In my case, poll_func appears to be called at 100Hz at best. What limits the rate 
that poll_func is called in a python frontend? Is there a more appropriate 
solution for increasing the python frontend data/event rate than simply launching 
more frontends?

I've attached my C++ and python frontend files for reference.

Thanks,
Jack
Attachment 1: frontend.py
import midas
import midas.frontend
import midas.event
import numpy as np
import random
import time

class DataSimulatorEquipment(midas.frontend.EquipmentBase):
    def __init__(self, client, frontend):
        equip_name = "Python Data Simulator"
        default_common = midas.frontend.InitialEquipmentCommon()
        default_common.equip_type = midas.EQ_POLLED
        default_common.buffer_name = "SYSTEM"
        default_common.trigger_mask = 0
        default_common.event_id = 2
        default_common.period_ms = 100
        default_common.read_when = midas.RO_RUNNING
        default_common.log_history = 1
        
        midas.frontend.EquipmentBase.__init__(self, client, equip_name, default_common)
        print("Initialization complete")
        self.set_status("Initialized")

        self.frontend = frontend

    def readout_func(self):
        event = midas.event.Event()
        
        # Create a bank for zero buffer
        event.create_bank("CR00", midas.TID_SHORT, self.frontend.zero_buffer)
        
        # Simulate the addition of `data` in the periodic event
        '''
        data_block = []
        data_block.extend(self.frontend.data)
        
        
        # Append the simulated data to the event
        event.create_bank("CR00", midas.TID_SHORT, data_block)
        '''

        return event
    
    def poll_func(self):
        current_time = time.time()
        if current_time - self.frontend.last_poll_time >= self.frontend.poll_time:
            self.frontend.last_poll_time = current_time
            self.frontend.poll_count += 1
            self.frontend.poll_timestamps.append(current_time)
            return True  # Indicate that an event is available
        return False  # No event available yet

class DataSimulatorFrontend(midas.frontend.FrontendBase):
    def __init__(self):
        midas.frontend.FrontendBase.__init__(self, "DataSimulator-Python")
        
        # Data and zero buffer initialization
        self.data = []
        self.zero_buffer = []
        self.generator = random.Random()
        self.total_data_size = 1250000
        self.load_data_from_file("fake_data.txt")
        self.init_zero_buffer()

        # Polling variables
        self.poll_time = 0.001  # Poll time in seconds
        self.last_poll_time = time.time()
        self.poll_count = 0
        self.poll_timestamps = []

        self.add_equipment(DataSimulatorEquipment(self.client, self))

    def load_data_from_file(self, filename):
        try:
            with open(filename, 'r') as file:
                for line in file:
                    values = [int(value) for value in line.strip().split(',')]
                    self.data.extend(values)
            print(f"Loaded data from {filename}: {self.data[:10]}...")  # Display the first few values for verification
        except IOError as e:
            print(f"Error opening file: {e}")

    def init_zero_buffer(self):
        self.zero_buffer = [0] * self.total_data_size 
        print(f"Initialized zero buffer with {self.total_data_size } zeros.")

    def begin_of_run(self, run_number):
        self.set_all_equipment_status("Running", "greenLight")
        self.client.msg(f"Frontend has started run number {run_number}")
        return midas.status_codes["SUCCESS"]

    def end_of_run(self, run_number):
        self.set_all_equipment_status("Finished", "greenLight")
        self.client.msg(f"Frontend has ended run number {run_number}")
        
        # Print poll function statistics at the end of the run
        self.print_poll_stats()

        return midas.status_codes["SUCCESS"]

    def frontend_exit(self):
        print("Frontend is exiting.")

    def print_poll_stats(self):
        if len(self.poll_timestamps) > 1:
            intervals = [self.poll_timestamps[i] - self.poll_timestamps[i-1] for i in range(1, len(self.poll_timestamps))]
            avg_interval = sum(intervals) / len(intervals)
            print(f"Poll function was called {self.poll_count} times.")
            print(f"Average interval between poll calls: {avg_interval:.6f} seconds")
        else:
            print(f"Poll function was called {self.poll_count} times. Not enough data for interval calculation.")

if __name__ == "__main__":
    with DataSimulatorFrontend() as my_fe:
        my_fe.run()
Attachment 2: frontend.cxx
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include "midas.h"
#include "mfe.h"
#include <chrono> // Needed for std::chrono::steady_clock used by the polling timer
#include <random> // Include for random number generation


void trigger_update(INT, INT, void*);

/*-- Globals -------------------------------------------------------*/

/* The frontend name (client name) as seen by other MIDAS clients   */
const char *frontend_name = "DataSimulator";
/* The frontend file name, don't change it */
const char *frontend_file_name = __FILE__;

/* frontend_loop is called periodically if this variable is TRUE    */
BOOL frontend_call_loop = FALSE;

/* a frontend status page is displayed with this frequency in ms */
INT display_period = 1000;

/* maximum event size produced by this frontend */
INT max_event_size = 1024 * 1014;

/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * max_event_size;

/* buffer size to hold events */
INT event_buffer_size = 5 * max_event_size;

// Define a vector to store 16-bit words
std::vector<int16_t> data; // Define a global vector to store 16-bit signed integers

// Global variable to keep track of the last poll time
std::chrono::steady_clock::time_point last_poll_time;
const std::chrono::microseconds polling_interval(300); // Poll every 300 microsecond

// Random number generator for generating data
std::mt19937 generator;
std::uniform_int_distribution<short> distribution(-32768, 32767); // Define the range of random values (short range)

// Global variable to hold the zero buffer
std::vector<short> zero_buffer;


/*-- Function declarations -----------------------------------------*/

INT frontend_init(void);
INT frontend_exit(void);
INT begin_of_run(INT run_number, char *error);
INT end_of_run(INT run_number, char *error);
INT pause_run(INT run_number, char *error);
INT resume_run(INT run_number, char *error);
INT frontend_loop(void);

INT read_trigger_event(char *pevent, INT off);
INT read_periodic_event(char *pevent, INT off);

INT poll_event(INT source, INT count, BOOL test);
INT interrupt_configure(INT cmd, INT source, POINTER_T adr);

/*-- Equipment list ------------------------------------------------*/

BOOL equipment_common_overwrite = TRUE;

EQUIPMENT equipment[] = {
   {"Data Simulator",              /* equipment name */
      {2, 0,                 /* event ID, trigger mask */
         "SYSTEM",           /* event buffer */
         EQ_POLLED,        /* equipment type */
         0,                  /* event source */
         "MIDAS",            /* format */
         TRUE,               /* enabled */
         RO_RUNNING | RO_TRANSITIONS |   /* read when running and on transitions */
         RO_ODB,             /* and update ODB */
         10,               /* poll period in ms */
         0,                  /* stop run after this event limit */
         0,                  /* number of sub events */
         TRUE,               /* log history */
         "", "", "",},
      read_trigger_event   /* readout routine */
   },

   {""}
};

/*-- Trigger Update ------------------------------------------------*/

void trigger_update(INT hDB, INT hkey,void*)
{

}


/*-- Frontend Init -------------------------------------------------*/

int frontend_init() {
    // Open the file for reading
    std::ifstream inputFile("fake_data.txt");

    if (!inputFile) {
        std::cerr << "Error opening the file." << std::endl;
        return 1;
    }

    std::cout << "Reading and converting data:" << std::endl;

    std::string line;
    while (std::getline(inputFile, line)) {
        std::istringstream iss(line);
        std::string token;

        while (std::getline(iss, token, ',')) {
            int16_t value;
            std::istringstream(token) >> value;
            data.push_back(value);
        }
    }

    // Print the converted data
    for (int i = 0; i < data.size(); i++) {
        std::cout << " " << data[i];
    }

    // Close the file
    inputFile.close();

    if (data.empty()) {
        std::cerr << "No data was converted." << std::endl;
    } else {
        std::cout << std::endl << "Conversion completed." << std::endl;
    }

    // Initialize random number generator
    std::random_device rd; // Obtain a random number from hardware
    generator = std::mt19937(rd()); // Seed the generator

    // Define the total number of zero data points
    const int total_data_size = 50000; // Adjust size as needed

    // Create and initialize the buffer of zeros
    zero_buffer.resize(total_data_size, 0);

    return SUCCESS;
}




/*-- Frontend Exit -------------------------------------------------*/

INT frontend_exit()
{
   return SUCCESS;
}

/*-- Begin of Run --------------------------------------------------*/

INT begin_of_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- End of Run ----------------------------------------------------*/

INT end_of_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- Pause Run -----------------------------------------------------*/

INT pause_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- Resume Run ----------------------------------------------------*/

INT resume_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- Frontend Loop -------------------------------------------------*/

INT frontend_loop()
{
   /* if frontend_call_loop is true, this routine gets called when
      the frontend is idle or once between every event */
   return SUCCESS;
}

/*------------------------------------------------------------------*/

/********************************************************************\

  Readout routines for different events

\********************************************************************/

/*-- Trigger event routines ----------------------------------------*/

INT poll_event(INT source, INT count, BOOL test) {
    // In test mode the framework only times the polling loop;
    // never report an event in that case
    if (test) {
        for (int i = 0; i < count; i++)
            ;
        return FALSE;
    }

    // Get the current time
    auto now = std::chrono::steady_clock::now();

    // Check if enough time has passed since the last poll
    if (now - last_poll_time >= polling_interval) {
        // Update the last poll time
        last_poll_time = now;

        // Return TRUE to indicate that an event is available
        return TRUE;
    }

    // Otherwise, return FALSE to indicate no event available
    return FALSE;
}

/*-- Interrupt configuration ---------------------------------------*/

INT interrupt_configure(INT cmd, INT source, POINTER_T adr)
{
   switch (cmd) {
   case CMD_INTERRUPT_ENABLE:
      break;
   case CMD_INTERRUPT_DISABLE:
      break;
   case CMD_INTERRUPT_ATTACH:
      break;
   case CMD_INTERRUPT_DETACH:
      break;
   }
   return SUCCESS;
}

/*-- Event readout -------------------------------------------------*/

INT read_trigger_event(char *pevent, INT off)
{
    short *pdata;

    // Init bank structure
    bk_init32(pevent);

    // Create a bank named "CR00" and specify the data type as TID_SHORT
    bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);

    // Use memcpy to copy the buffer of zeros into the MIDAS bank
    memcpy(pdata, zero_buffer.data(), zero_buffer.size() * sizeof(short));

    // Adjust pdata pointer
    pdata += zero_buffer.size();  // Move the pointer past the copied data

    // Close the bank
    bk_close(pevent, pdata);

    return bk_size(pevent);
}

/*-- Periodic event ------------------------------------------------*/

INT read_periodic_event(char *pevent, INT off)
{
   short *pdata; // Change the data type to short

   // Init bank structure
   bk_init32(pevent);

   // Create a bank named "CR00" and specify the data type as TID_SHORT
   bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);

    // Repeat the data block 400 times
    for (int repeat = 0; repeat < 400; repeat++) {
        for (int i = 0; i < data.size(); i++) {
            *pdata++ = data[i];
        }
    }

   // Close the bank
   bk_close(pevent, pdata);

   return bk_size(pevent);
}
2824 | 04 Sep 2024 | Stefan Ritt | Info | News MSCB++ API
I had two free afternoons and took the opportunity to write a new API for the MSCB 
system. I'm not sure if anybody else actually uses MSCB (MIDAS slow control bus), 
but anyhow. 

The new API is contained in a single header file mscbxx.h, and it's extremely 
simple to use. Here is some example code:

#include "mscbxx.h"

...
   // connect to node 10 at submaster mscb123
   midas::mscb m("mscb123", 10);

   // print node info and all variables
   std::cout << m << std::endl;

   // refresh all variables (read from MSCB device)
   m.read_range();
   
   // access individual variables
   float f = m[5];   // index access
   f = m["In0"];     // name access

   // write value to MSCB device
   m["In0"] = 1.234;
...


Any feedback is welcome.

Stefan
2823 | 04 Sep 2024 | Zaher Salman | Bug Report | Params not initialized when starting sequencer
The problem here was that the JS code did not wait for msequencer to finish preparing "/Sequencer/Param" in the ODB, so I had to change the code to wait for "/Sequencer/Command/Load new file" to be false before proceeding.

As for your problem I recommend that you handle in the following way:

mjsonrpc_db_paste(paths, values).then(function(rpc) {
   if (rpc.result.status.every(status => status === 1)) {
      // do something
   } else {
      // failed to set values, do something else
   }
}).catch(function(error) {
   console.error(error);
});

alternatively (for a single ODB value) you can use the checkODBValue() function in sequencer.js. This function monitors a specific ODB path until it reaches a specific value and then calls funcCall with args.

var NcheckValue = 0;
// Wait for the ODB in path to have value
// If the value is not reached, give up after 10 s
function checkODBValue(path, value, funcCall, args) {
   /* Arguments:
      path     - ODB path to monitor for value
      value    - the value to be reached and return success
      funcCall - function name to call when value is reached
      args     - argument to pass to funcCall
   */
   // Call the mjsonrpc_db_get_values function
   mjsonrpc_db_get_values([path]).then(function(rpc) {
      if (rpc.result.status[0] === 1 && rpc.result.data[0] !== value) {
         console.log("Value not reached yet", NcheckValue);
         NcheckValue++;
         if (NcheckValue < 100) {
            // Wait 0.1 second and then call checkODBValue again
            // Time out after 10 s
            setTimeout(() => {
               checkODBValue(path, value, funcCall, args);
            }, 100);
         }
      } else {
         if (funcCall) funcCall(args);
         console.log("Value reached, proceeding...");
         // reset counter
         NcheckValue = 0;
      }
   }).catch(function(error) {
      console.error(error);
   });
}
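
For example (illustrative values, not taken from the sequencer code), to proceed only after the sequencer has finished loading a file one could call checkODBValue("/Sequencer/Command/Load new file", false, doStart, null), where doStart is whatever function should run next.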



Lukas Gerritzen wrote:
I think I have had similar issues in a custom page, where I wrote values to the ODB and they were not ready when I needed them. If you found a fix to such race conditions, could you maybe share how to properly treat this issue? If the solution reliably works, we could also consider including it in the documentation (midaswiki or example.html).


Zaher Salman wrote:
The issue with the parameters should be fixed now. Please test and let me know if it still happens.
2822 | 04 Sep 2024 | Lukas Gerritzen | Bug Report | Params not initialized when starting sequencer
I think I have had similar issues in a custom page, where I wrote values to the ODB and they were not ready when I needed them. If you found a fix to such race conditions, could you maybe share how to properly treat this issue? If the solution reliably works, we could also consider including it in the documentation (midaswiki or example.html).


Zaher Salman wrote:
The issue with the parameters should be fixed now. Please test and let me know if it still happens.
2821 | 04 Sep 2024 | Lukas Gerritzen | Bug Report | Multiple issues with mhist
Hi,

I am having some trouble with mhist. I suppose that the problems are at least partially due to our specific needs which might exceed what has been tested. For context, in MEG II we have some 10^4 history variables in ~30 different events.

1. mhist -l crashes. After displaying around 7000 lines, I get the following error message:
[mhist,ERROR] [midas.cxx:5949:bm_validate_client_index,ERROR] My client index 10 in buffer 'SYSMSG' 
is invalid: client name '', pid 0 should be my pid 3773321
[mhist,ERROR] [midas.cxx:5952:bm_validate_client_index,ERROR] Maybe this client was removed by a 
timeout. See midas.log. Cannot continue, aborting...
Aborted (core dumped)
Timing the execution shows around 33 seconds before the process is aborted.

I'm not sure if this would actually fix the problem, but while trying to circumvent the issue, I tried the
following:
mhist -e "Xenon" -l
This doesn't seem to be implemented. Listing only the variables of a single event would be nice
regardless of our specific issue.

2. mhist and history files.
We have a directory with about 2500 history files (mhf_...dat) for the past 1.5 years. Older
history files are archived in other directories with similar numbers of files. When trying to access them, I
encountered two issues:
It seems like it is not possible to pass a "history directory" as an argument. To dump the history for a full
year in the archive directory, I would need to run mhist many times with -f and then combine all the dumps.

If it really does not work, please consider this a feature request.
Also, even using single files does not work at the moment:
$ mhist -e "Xenon" -v "Det XeTmp 0-0" -t 100000 -s 200101 -p 250101 -f 
/data2/history/2022/mhf_1644698398_20220212_xenon.dat
ID 980316009, Aug 13 19:10:56, size 1851749486
This command was supposed to show me the rough time frame covered in this particular history file. I was
informed that the history files are in the new "FILE" format and mhist might not work with them properly.

tl;dr
  • Bug: mhist -l crashes
  • Bug: mhist -f does not work with "FILE" history format
  • Feature request: mhist -e "Name" -l to only show variables of event "Name"
  • Feature request: Set temporary history dir with a flag

Lukas