ID   Date   Author   Topic   Subject
  Draft   04 Jul 2024   Nick Hastings   Forum   mfe.cxx with RO_STOPPED and EQ_POLLED
I just discovered https://bitbucket.org/tmidas/midas/issues/338/mfec-ro_stopped-is-now-forbidden

> Dear Midas experts,
> 
> I noticed that a check was added to mfe.cxx in 1961af0d6:
> 
> +      /* check for consistent common settings */
> +      if ((eq_info->read_on & RO_STOPPED) &&
> +          (eq_info->eq_type == EQ_POLLED ||
> +           eq_info->eq_type == EQ_INTERRUPT ||
> +           eq_info->eq_type == EQ_MULTITHREAD ||
> +           eq_info->eq_type == EQ_USER)) {
> +         cm_msg(MERROR, "register_equipment", "Events \"%s\" cannot be read when run is stopped (RO_STOPPED flag)", equipment[idx].name);
> +         return 0;
> +      }
> 
> This commit was by Stefan in May 2022.
> 
> A commit few days later, 28d9c96bd, removed the "return 0;", and updated the
> error message to:
> 
> "Equipment \"%s\" contains RO_STOPPED or RO_ALWAYS. This can lead to undesired side-effect and should be removed."
> 
> So such FEs can run, but there is still an error at startup. The 
> documentation at https://daq00.triumf.ca/MidasWiki/index.php/ReadOn_Flags
> states that with RO_STOPPED, "Readout Occurs" "Before stopping run",
> which seems to indicate that removing the RO_STOPPED bit from an SC FE
> would just mean that one additional read does not happen just prior to a run
> stop. However, reading scheduler() in mfe.cxx, I see in the main loop:
> 
>  if (run_state == STATE_STOPPED && (eq_info->read_on & RO_STOPPED) == 0) 
>     continue;
> 
> So it seems to me that an EQ_PERIODIC equipment needs RO_STOPPED to be set,
> otherwise it will not read out data while there is no DAQ run.
> 
> Can someone explain the purpose of this check and error message? Perhaps it
> was put in place with only DAQ FEs, not SC FEs in mind? And should the 
> documentation in the wiki actually be "s/Before stopping run/While run is stopped/"?
> 
> Thanks,
> 
> Nick.
  2796   06 Aug 2024   Stefan Ritt   Forum   mfe.cxx with RO_STOPPED and EQ_POLLED
> I noticed that a check was added to mfe.cxx in 1961af0d6:
> 
> +      /* check for consistent common settings */
> +      if ((eq_info->read_on & RO_STOPPED) &&
> +          (eq_info->eq_type == EQ_POLLED ||
> +           eq_info->eq_type == EQ_INTERRUPT ||
> +           eq_info->eq_type == EQ_MULTITHREAD ||
> +           eq_info->eq_type == EQ_USER)) {
> +         cm_msg(MERROR, "register_equipment", "Events \"%s\" cannot be read when run is stopped (RO_STOPPED flag)", equipment[idx].name);
> +         return 0;
> +      }
> 
> 
> Can someone explain the purpose of this check and error message? Perhaps it
> was put in place with only DAQ FEs, not SC FEs in mind? And should the 
> documentation in the wiki actually be "s/Before stopping run/While run is stopped/"?

Indeed you have two types of events handled by mfe.cxx: slow control events (EQ_SLOW or EQ_PERIODIC) and triggered events (EQ_POLLED, 
EQ_INTERRUPT, EQ_MULTITHREAD or EQ_USER). For slow control events it can make sense to read them also when the run is stopped; that's why you 
can specify RO_STOPPED or RO_ALWAYS. This does not, however, make sense for triggered events: reading triggered events when the run is stopped 
invalidates the concept of runs (= read triggered events only during a run). We had cases where people mixed this up, so the warning was added. 
If you have a slow control event you want to read while the run is stopped, make sure it is of type EQ_SLOW or EQ_PERIODIC.
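
For illustration, a minimal sketch of such an equipment entry (the equipment name "Environment" and the readout routine read_env_event() are hypothetical; the field layout follows the standard mfe.cxx EQUIPMENT list, like the frontend attached further down this page):

#include "midas.h"
#include "mfe.h"

INT read_env_event(char *pevent, INT off);   /* hypothetical slow-control readout routine */

EQUIPMENT equipment[] = {
   {"Environment",                 /* hypothetical equipment name */
      {3, 0,                       /* event ID, trigger mask */
         "SYSTEM",                 /* event buffer */
         EQ_PERIODIC,              /* periodic slow-control equipment */
         0,                        /* event source */
         "MIDAS",                  /* format */
         TRUE,                     /* enabled */
         RO_RUNNING | RO_STOPPED,  /* read during the run and while stopped */
         10000,                    /* read every 10 s (period in ms) */
         0,                        /* stop run after this event limit */
         0,                        /* number of sub events */
         TRUE,                     /* log history */
         "", "", "",},
      read_env_event               /* readout routine */
   },
   {""}
};

With EQ_PERIODIC and RO_STOPPED set like this, the startup error from register_equipment() does not apply, since the check quoted above only covers the triggered equipment types.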

Stefan
  2804   15 Aug 2024   Scott Oser   Forum   "Safe" abort of sequencer scripts
We often use the MIDAS sequencer to temporarily control detector settings, such as:

* <change some setting>
* WAIT 60 seconds
* <revert setting to original value>

The question arises of what happens if the sequencer scripts gets aborted during that wait, preventing the value from being reset.  Depending on the setting, this could be undesirable or even damage something if left uncorrected for too long.

Is there any way to have a "safe abort" from the sequencer so that the "Stop immediately" button will call some cleanup script to leave things in a safe state?  Or what about if the sequencer process itself gets killed in the middle of a script?

How have other experiments using MIDAS protected themselves from unplanned terminations of sequencer scripts?
  2805   19 Aug 2024   Stefan Ritt   Forum   "Safe" abort of sequencer scripts
This request came more than once in the past. One thing I could implement is an "atexit" function, similar to the C function atexit().

Then we would have a function in the script which gets called whenever one does "stop immediately". This function can then restore
some ODB values or do whatever is necessary. 

If the sequencer gets killed in the middle, it can safely be restarted since the complete sequencer state is kept in the ODB under
/Sequencer/State. After the restart, the sequencer continues exactly where it has been killed before.

Would that solve your problem?

Stefan
  2809   22 Aug 2024   Scott Oser   Forum   "Safe" abort of sequencer scripts
> This request came more than once in the past. One thing I could implement is an "atexit" function, similar to the C function atexit().
> 
> Then we would have a function in the script which gets called whenever one does "stop immediately". This function can then restore
> some ODB values or do whatever is necessary. 
> 
> If the sequencer gets killed in the middle, it can safely be restarted since the complete sequencer state is kept in the ODB under
> /Sequencer/State. After the restart, the sequencer continues exactly where it has been killed before.
> 
> Would that solve your problem?
> 
> Stefan

Yes, an "atexit" functionality within the Midas Sequencer Language would be useful for us with this issue.  Is this easy for you to implement?

Thanks,
Scott Oser
  2825   05 Sep 2024   Jack Carlton   Forum   Python frontend rate limitations?
I'm trying to get a sense of the rate limitations of a python frontend. I 
understand this will vary from system to system.

I adapted two frontends from the example templates, one in C++ and one in python. 
Both simply fill a midas bank with a fixed length array of zeros at a given polled 
rate. However, the C++ frontend is about 100 times faster in both data and event 
rates. This seems slow, even for an interpreted language like python. Furthermore, 
I can effectively increase the maximum rate by concurrently running a second 
python frontend (this is not the case for the C++ frontend). In short, there is 
some limitation with using python here unrelated to hardware.

In my case, poll_func appears to be called at 100Hz at best. What limits the rate 
that poll_func is called in a python frontend? Is there a more appropriate 
solution for increasing the python frontend data/event rate than simply launching 
more frontends?

I've attached my C++ and python frontend files for reference.

Thanks,
Jack
Attachment 1: frontend.py
import midas
import midas.frontend
import midas.event
import numpy as np
import random
import time

class DataSimulatorEquipment(midas.frontend.EquipmentBase):
    def __init__(self, client, frontend):
        equip_name = "Python Data Simulator"
        default_common = midas.frontend.InitialEquipmentCommon()
        default_common.equip_type = midas.EQ_POLLED
        default_common.buffer_name = "SYSTEM"
        default_common.trigger_mask = 0
        default_common.event_id = 2
        default_common.period_ms = 100
        default_common.read_when = midas.RO_RUNNING
        default_common.log_history = 1
        
        midas.frontend.EquipmentBase.__init__(self, client, equip_name, default_common)
        print("Initialization complete")
        self.set_status("Initialized")

        self.frontend = frontend

    def readout_func(self):
        event = midas.event.Event()
        
        # Create a bank for zero buffer
        event.create_bank("CR00", midas.TID_SHORT, self.frontend.zero_buffer)
        
        # Simulate the addition of `data` in the periodic event
        '''
        data_block = []
        data_block.extend(self.frontend.data)
        
        
        # Append the simulated data to the event
        event.create_bank("CR00", midas.TID_SHORT, data_block)
        '''

        return event
    
    def poll_func(self):
        current_time = time.time()
        if current_time - self.frontend.last_poll_time >= self.frontend.poll_time:
            self.frontend.last_poll_time = current_time
            self.frontend.poll_count += 1
            self.frontend.poll_timestamps.append(current_time)
            return True  # Indicate that an event is available
        return False  # No event available yet

class DataSimulatorFrontend(midas.frontend.FrontendBase):
    def __init__(self):
        midas.frontend.FrontendBase.__init__(self, "DataSimulator-Python")
        
        # Data and zero buffer initialization
        self.data = []
        self.zero_buffer = []
        self.generator = random.Random()
        self.total_data_size = 1250000
        self.load_data_from_file("fake_data.txt")
        self.init_zero_buffer()

        # Polling variables
        self.poll_time = 0.001  # Poll time in seconds
        self.last_poll_time = time.time()
        self.poll_count = 0
        self.poll_timestamps = []

        self.add_equipment(DataSimulatorEquipment(self.client, self))

    def load_data_from_file(self, filename):
        try:
            with open(filename, 'r') as file:
                for line in file:
                    values = [int(value) for value in line.strip().split(',')]
                    self.data.extend(values)
            print(f"Loaded data from {filename}: {self.data[:10]}...")  # Display the first few values for verification
        except IOError as e:
            print(f"Error opening file: {e}")

    def init_zero_buffer(self):
        self.zero_buffer = [0] * self.total_data_size 
        print(f"Initialized zero buffer with {self.total_data_size } zeros.")

    def begin_of_run(self, run_number):
        self.set_all_equipment_status("Running", "greenLight")
        self.client.msg(f"Frontend has started run number {run_number}")
        return midas.status_codes["SUCCESS"]

    def end_of_run(self, run_number):
        self.set_all_equipment_status("Finished", "greenLight")
        self.client.msg(f"Frontend has ended run number {run_number}")
        
        # Print poll function statistics at the end of the run
        self.print_poll_stats()

        return midas.status_codes["SUCCESS"]

    def frontend_exit(self):
        print("Frontend is exiting.")

    def print_poll_stats(self):
        if len(self.poll_timestamps) > 1:
            intervals = [self.poll_timestamps[i] - self.poll_timestamps[i-1] for i in range(1, len(self.poll_timestamps))]
            avg_interval = sum(intervals) / len(intervals)
            print(f"Poll function was called {self.poll_count} times.")
            print(f"Average interval between poll calls: {avg_interval:.6f} seconds")
        else:
            print(f"Poll function was called {self.poll_count} times. Not enough data for interval calculation.")

if __name__ == "__main__":
    with DataSimulatorFrontend() as my_fe:
        my_fe.run()
Attachment 2: frontend.cxx
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include "midas.h"
#include "mfe.h"
#include <random> // Include for random number generation
#include <chrono> // Needed for std::chrono used in poll_event()


void trigger_update(INT, INT, void*);

/*-- Globals -------------------------------------------------------*/

/* The frontend name (client name) as seen by other MIDAS clients   */
const char *frontend_name = "DataSimulator";
/* The frontend file name, don't change it */
const char *frontend_file_name = __FILE__;

/* frontend_loop is called periodically if this variable is TRUE    */
BOOL frontend_call_loop = FALSE;

/* a frontend status page is displayed with this frequency in ms */
INT display_period = 1000;

/* maximum event size produced by this frontend */
INT max_event_size = 1024 * 1014;

/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * max_event_size;

/* buffer size to hold events */
INT event_buffer_size = 5 * max_event_size;

// Define a vector to store 16-bit words
std::vector<int16_t> data; // Define a global vector to store 16-bit signed integers

// Global variable to keep track of the last poll time
std::chrono::steady_clock::time_point last_poll_time;
const std::chrono::microseconds polling_interval(300); // Poll every 300 microsecond

// Random number generator for generating data
std::mt19937 generator;
std::uniform_int_distribution<short> distribution(-32768, 32767); // Define the range of random values (short range)

// Global variable to hold the zero buffer
std::vector<short> zero_buffer;


/*-- Function declarations -----------------------------------------*/

INT frontend_init(void);
INT frontend_exit(void);
INT begin_of_run(INT run_number, char *error);
INT end_of_run(INT run_number, char *error);
INT pause_run(INT run_number, char *error);
INT resume_run(INT run_number, char *error);
INT frontend_loop(void);

INT read_trigger_event(char *pevent, INT off);
INT read_periodic_event(char *pevent, INT off);

INT poll_event(INT source, INT count, BOOL test);
INT interrupt_configure(INT cmd, INT source, POINTER_T adr);

/*-- Equipment list ------------------------------------------------*/

BOOL equipment_common_overwrite = TRUE;

EQUIPMENT equipment[] = {
   {"Data Simulator",              /* equipment name */
      {2, 0,                 /* event ID, trigger mask */
         "SYSTEM",           /* event buffer */
         EQ_POLLED,        /* equipment type */
         0,                  /* event source */
         "MIDAS",            /* format */
         TRUE,               /* enabled */
         RO_RUNNING | RO_TRANSITIONS |   /* read when running and on transitions */
         RO_ODB,             /* and update ODB */
         10,               /* read every sec */
         0,                  /* stop run after this event limit */
         0,                  /* number of sub events */
         TRUE,               /* log history */
         "", "", "",},
      read_trigger_event   /* readout routine */
   },

   {""}
};

/*-- Trigger Update ------------------------------------------------*/

void trigger_update(INT hDB, INT hkey,void*)
{

}


/*-- Frontend Init -------------------------------------------------*/

int frontend_init() {
    // Open the file for reading
    std::ifstream inputFile("fake_data.txt");

    if (!inputFile) {
        std::cerr << "Error opening the file." << std::endl;
        return 1;
    }

    std::cout << "Reading and converting data:" << std::endl;

    std::string line;
    while (std::getline(inputFile, line)) {
        std::istringstream iss(line);
        std::string token;

        while (std::getline(iss, token, ',')) {
            int16_t value;
            std::istringstream(token) >> value;
            data.push_back(value);
        }
    }

    // Print the converted data
    for (int i = 0; i < data.size(); i++) {
        std::cout << " " << data[i];
    }

    // Close the file
    inputFile.close();

    if (data.empty()) {
        std::cerr << "No data was converted." << std::endl;
    } else {
        std::cout << std::endl << "Conversion completed." << std::endl;
    }

    // Initialize random number generator
    std::random_device rd; // Obtain a random number from hardware
    generator = std::mt19937(rd()); // Seed the generator

    // Define the total number of zero data points
    const int total_data_size = 50000; // Adjust size as needed

    // Create and initialize the buffer of zeros
    zero_buffer.resize(total_data_size, 0);

    return SUCCESS;
}




/*-- Frontend Exit -------------------------------------------------*/

INT frontend_exit()
{
   return SUCCESS;
}

/*-- Begin of Run --------------------------------------------------*/

INT begin_of_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- End of Run ----------------------------------------------------*/

INT end_of_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- Pause Run -----------------------------------------------------*/

INT pause_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- Resume Run ----------------------------------------------------*/

INT resume_run(INT run_number, char *error)
{
   return SUCCESS;
}

/*-- Frontend Loop -------------------------------------------------*/

INT frontend_loop()
{
   /* if frontend_call_loop is true, this routine gets called when
      the frontend is idle or once between every event */
   return SUCCESS;
}

/*------------------------------------------------------------------*/

/********************************************************************\

  Readout routines for different events

\********************************************************************/

/*-- Trigger event routines ----------------------------------------*/

INT poll_event(INT source, INT count, BOOL test) {
    // Get the current time
    auto now = std::chrono::steady_clock::now();
    
    // Check if enough time has passed since the last poll
    if (now - last_poll_time >= polling_interval) {
        // Update the last poll time
        last_poll_time = now;
        
        // Return TRUE to indicate that an event is available
        return TRUE;
    }
    
    // If test is TRUE, don't return anything
    if (test) {
        return FALSE;
    }
    
    // Otherwise, return FALSE to indicate no event available
    return FALSE;
}

/*-- Interrupt configuration ---------------------------------------*/

INT interrupt_configure(INT cmd, INT source, POINTER_T adr)
{
   switch (cmd) {
   case CMD_INTERRUPT_ENABLE:
      break;
   case CMD_INTERRUPT_DISABLE:
      break;
   case CMD_INTERRUPT_ATTACH:
      break;
   case CMD_INTERRUPT_DETACH:
      break;
   }
   return SUCCESS;
}

/*-- Event readout -------------------------------------------------*/

INT read_trigger_event(char *pevent, INT off)
{
    short *pdata;

    // Init bank structure
    bk_init32(pevent);

    // Create a bank named "CR00" and specify the data type as TID_SHORT
    bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);

    // Use memcpy to copy the buffer of zeros into the MIDAS bank
    memcpy(pdata, zero_buffer.data(), zero_buffer.size() * sizeof(short));

    // Adjust pdata pointer
    pdata += zero_buffer.size();  // Move the pointer past the copied data

    // Close the bank
    bk_close(pevent, pdata);

    return bk_size(pevent);
}

/*-- Periodic event ------------------------------------------------*/

INT read_periodic_event(char *pevent, INT off)
{
   short *pdata; // Change the data type to short

   // Init bank structure
   bk_init32(pevent);

   // Create a bank named "CR00" and specify the data type as TID_SHORT
   bk_create(pevent, "CR00", TID_SHORT, (void **)&pdata);

    // Repeat the loop 400 times
    for (int repeat = 0; repeat < 400; repeat++) {
        for (int i = 0; i < data.size(); i++) {
            *pdata++ = data[i];
        }
    }

   // Close the bank
   bk_close(pevent, pdata);

   return bk_size(pevent);
}
  2826   05 Sep 2024   Ben Smith   Forum   Python frontend rate limitations?
> What limits the rate that poll_func is called in a python frontend? 

First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.


However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.

I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.


Cheers,
Ben


P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.


P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:

python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
20 loops, best of 5: 15.3 msec per loop
  Draft   05 Sep 2024   Jack Carlton   Forum   Python frontend rate limitations?
Thank you, this was very helpful.

> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

Thanks, I thought that was just for periodic triggering (or at least that's how I've used it in C++ frontends). Changing this allowed me to get past the 100Hz event rate cap I described.

> If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.
> 
> 
> However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.
> 
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.
> 
> 
> Cheers,
> Ben


> P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.
I had tested it the way you described at first, but later changed it.

> P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:
> 
> python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
> 20 loops, best of 5: 15.3 msec per loop
  2828   05 Sep 2024   Stefan Ritt   Forum   Python frontend rate limitations?
> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. 
> You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

Just for your general understanding: the "period" in the C framework works differently. It calls the poll function with a number, 
and then that number is used in the poll function like this (simplified):

poll(INT count) {
   for (i=0 ; i<count ; i++)
      if (new_event())
         return TRUE;
   return FALSE;
}

This ensures that polling is done as quickly as possible, even staying in the same function (poll) rather than being called from the 
framework in a loop (which would require a function call to poll each time). The "count" is determined by the framework 
during startup such that the execution time of the poll() routine equals the "period". For example, if the period 
is 0.1 s, the count might be a few million, so that the poll routine returns immediately when a new event occurs, or when 
100 ms have expired. During the polling the frontend is "dead", meaning it cannot react to run transitions, for example. That's 
why most experiments use 0.1-0.5 seconds. But this does NOT mean that you can only have 2-10 events per second; it means that 
the reaction time of the frontend is at most 0.1-0.5 seconds, which is acceptable in most cases.
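
A user-side poll routine following this scheme then looks roughly like the sketch below (new_event_available() is a placeholder for the actual hardware check):

INT poll_event(INT source, INT count, BOOL test)
{
   for (INT i = 0; i < count; i++) {
      BOOL event_available = new_event_available();   // placeholder, e.g. reading a status register
      if (event_available && !test)
         return TRUE;    // tell the framework to call the readout routine
   }
   return FALSE;         // no event seen within 'count' iterations (about one "period")
}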

Due to this design, the C frontend is capable of producing millions of events per second. It took me a while in the early 1990's
to work out that scheme, sitting in the "R" trailer at TRIUMF (old guys will remember...).

Best,
Stefan
  2829   06 Sep 2024   Jack Carlton   Forum   Python frontend rate limitations?
Thanks for the responses, they were very helpful.

>First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll 
call it as often as possible.

Thanks, this solves the event rate limitation I described. I didn't think to change this because the "period" did not affect the observed rate in C (and now 
I know why thanks to Stefan).

A couple more questions:

1. 
For me, 
python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, 
buf, *arr)"
10 loops, best of 3: 43.7 msec per loop

which suggests my maximum data rate is about 1.25 MB * 1000/43.7 Hz = 23 MB/s (?). But I see data rates up to 60 MB/s with a python frontend. Am I 
misinterpreting the meaning of this result?


2. I can effectively bypass the rate limitations in python by running two concurrent frontends. For example, with one python frontend at best I can generate 
60 MB/s of data (setting "period" to 0 now); but with two frontends I can double this to 120 MB/s. This implies one python frontend is not bottlenecked by 
hardware limitations in my case.

Am I doing something wrong to artificially bottleneck my frontends? Perhaps there's a multi-threading solution I can implement to avoid needing multiple 
frontends?


Thanks,
Jack
  2831   11 Sep 2024   Konstantin Olchanski   Forum   Python frontend rate limitations?
> I'm trying to get a sense of the rate limitations of a python frontend.

1) python is single-threaded; for ultimate performance, a MIDAS frontend (or any DAQ 
application) has to be multithreaded (see the sketch after this list):
a) a thread with a busy loop that reads the data and places it into a FIFO
b) a thread that reads data from the FIFO and sends it to the SYSTEM buffer shared memory or to 
the mserver
c) a thread to respond to begin-run, end-run, etc. RPCs
d) probably a thread to recycle memory from thread (b) back to thread (a), if per-event 
malloc()/free() adds too much overhead

2) data readout. A C++ AXI bus access is compiled into one instruction and results in one AXI 
bus operation; the comparable python code likely has much more overhead, which slows you down.

3) event bank filling. A C++ for() loop is compiled into very compact machine code; a 
python loop cannot be, because each array element can be of arbitrary type, which slows you down.
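
As a rough, generic sketch of point 1 (plain C++ threads, not the actual tmfe code; read_hardware() and send_to_midas() are placeholders for the real hardware access and for bm_send_event()/mserver calls):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Stand-ins for the real work: reading the hardware FIFO and shipping an
// event to the SYSTEM buffer (bm_send_event()) or to the mserver.
std::vector<char> read_hardware() {
   std::this_thread::sleep_for(std::chrono::microseconds(100)); // pretend to wait for data
   return std::vector<char>(64, 0);                             // fake 64-byte event
}
void send_to_midas(const std::vector<char>&) { /* would call bm_send_event() here */ }

std::deque<std::vector<char>> fifo;   // events queued between the two threads
std::mutex fifo_mutex;
std::condition_variable fifo_cv;
std::atomic<bool> running{true};

// (a) busy loop: read the data and place it into the FIFO
void reader_thread() {
   while (running) {
      std::vector<char> event = read_hardware();
      {
         std::lock_guard<std::mutex> lock(fifo_mutex);
         fifo.push_back(std::move(event));
      }
      fifo_cv.notify_one();
   }
}

// (b) drain the FIFO and send the events out
void sender_thread() {
   while (running) {
      std::unique_lock<std::mutex> lock(fifo_mutex);
      fifo_cv.wait(lock, [] { return !fifo.empty() || !running; });
      while (!fifo.empty()) {
         std::vector<char> event = std::move(fifo.front());
         fifo.pop_front();
         lock.unlock();
         send_to_midas(event);   // the slow part happens outside the lock
         lock.lock();
      }
   }
}

int main() {
   std::thread reader(reader_thread);
   std::thread sender(sender_thread);
   std::this_thread::sleep_for(std::chrono::seconds(1));   // run briefly for demonstration
   running = false;
   fifo_cv.notify_all();
   reader.join();
   sender.join();
   return 0;
}

The point of the split is that the hardware-reading loop never blocks on the network send.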

Bottom line: there is a reason high-speed data acquisition systems are written in C/C++, not 
in shell, perl, tcl/tk, or (today's favourite) python.

> The C++ frontend is about 100 times faster in both data and event rates.

This is as expected. You can probably improve the python code to get it to within about a factor 
of 10 of C++. But consider:

a) will it be "fast enough" for the task?
b) learning C++ and optimizing python to within "2-3-10x slower than C++" may involve a 
similar amount of time and effort.

And you have not looked at the real-time properties of your frontend. You may discover 
that it's actually faster than you think, but occasionally stops for a millisecond (or 
two, or a hundred). Some applications are notorious for running memory garbage collection 
just at the wrong time.

I am working right now on exactly this problem: I have a 1 GHz ARM CPU (Cyclone-V FPGA) 
and I need to push data out at 100 Mbytes/sec while avoiding any bad real-time dropouts 
that would cause the FPGA data FIFO to overflow. And I only have 2 CPU cores, 1 to read the 
FPGA FIFO and 1 to run the TCP/IP stack and the ethernet driver. None of this could be done in 
python.

K.O.
  2832   11 Sep 2024   Konstantin Olchanski   Forum   Python frontend rate limitations?
> 
> poll(INT count) {
>    for (i=0 ; i<count ; i++)
>       if (new_event())
>          return TRUE;
>    return FALSE;
> }

In the C++ frontend (tmfe.h) this loop usually runs in a separate thread, and I am now working on the Linux magic to assign this thread maximum 
uninterruptible priority. Otherwise, on my Cyclone-V FPGA SoC, I see 1-10 msec dropouts, I think from taking ethernet interrupts.
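
The usual Linux recipe looks roughly like the generic POSIX sketch below (not the actual tmfe change; it needs CAP_SYS_NICE, root, or a suitable rtprio limit to succeed):

#include <pthread.h>
#include <sched.h>
#include <cstdio>

// Put the calling thread on the real-time FIFO scheduler at maximum priority.
static bool make_thread_realtime()
{
   sched_param sp{};
   sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
   int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
   if (rc != 0) {
      fprintf(stderr, "pthread_setschedparam() failed, error %d\n", rc);
      return false;
   }
   return true;
}

int main()
{
   if (make_thread_realtime())
      printf("this thread now runs with SCHED_FIFO priority\n");
   return 0;
}

(chrt -f does the same thing for a whole process from the command line.)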

K.O.
  2833   11 Sep 2024   Konstantin Olchanski   Forum   Python frontend rate limitations?
> > I'm trying to get a sense of the rate limitations of a python frontend.

forgot one more:

The C++ toolchain comes with extensive profiler tools aimed at answering the question "why is my 
program so slow, where is it spending all the time?". Some of these tools go all the way down to 
the hardware level and report CPU cache misses, TLB flushes, context switches and any other 
hardware events that interrupt or slow down computations. The programmer then uses this 
information to restructure the code to avoid the worst slowdowns (e.g. avoid branch mis-
predictions, avoid cache misses, etc.).

I doubt the python toolchain will ever have profiler tools as good.

K.O.
  2838   11 Sep 2024   Konstantin Olchanski   Forum   "Safe" abort of sequencer scripts
> We often use the MIDAS sequencer to temporarily control detector settings, such as:
> 
> * <change some setting>
> * WAIT 60 seconds
> * <revert setting to original value>
> 
> The question arises of what happens if the sequencer scripts gets aborted during that wait, preventing the value from being reset.

Common problem. Go has an elegant solution using the "defer" keyword.

https://go.dev/tour/flowcontrol/12

K.O.
  2860   27 Sep 2024   Ben Smith   Forum   Python frontend rate limitations?
> in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer.
> 
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.

I've now added better support for numpy arrays in the python code that encodes a `midas.event.Event` object. If you use the "correct" numpy data type then you can get vastly improved performance as numpy already stores the data in memory in the format that we need.

In your example, if you change
        self.zero_buffer = [0] * self.total_data_size
to 
        self.zero_buffer = np.ndarray(self.total_data_size, np.int16)

then the max data rate of the frontend goes from 330MB/s to 7600MB/s on my laptop (a factor 20 improvement from one line of code!) 

To ensure you're using the optimal numpy dtype for your bank, you can reference a dict called `midas.tid_np_formats`. For example `midas.tid_np_formats[midas.TID_SHORT]` is equivalent to `np.int16`. If you use an int16 array and write it as a TID_SHORT bank, then we'll use the fast path. If there is a mismatch, we'll have to do type conversions and will end up on the slow path.
  2885   05 Nov 2024   Jack Carlton   Forum   How to properly write a client listens for events on a given buffer?
If there's some template for writing a client to access event data, that would be 
very useful (and you can probably just ignore the context I gave below in that 
case).


Some context:

Quite a while ago, I wrote the attached "data pipeline" client whose job was to 
listen for events, copy their data, and pipe them to a python script. I believe I 
just stole bits and pieces from mdump.cxx to accomplish this. Later I wrote the 
attached wrapper class "MidasConnector.cpp" and a main.cpp to generalize
data_pipeline.cxx a bit. There were a lot of iterations to the code where I had the 
below problems; so don't take the logic in the attached code as the exact code that 
caused the issues below.

However, I'm unable to resolve a couple issues:

1. If a timeout is set, everything will work until that timeout is reached. Then 
regardless of what kind of logic I tried to implement (retry receiving event, 
disconnect and reconnect client, etc.) the client would refuse to receive more data.

2. When I ctrl-C main, it hangs; this is expected because it's stuck in a while 
loop. But because I can't set a timeout I have to ctrl-C twice; this would 
occasionally corrupt the ODB which was not ideal. I was able to get around this with 
some impractical solution involving ncurses I believe.


Thanks,
Jack
Attachment 1: data_pipeline_(2).cxx
#include "midas.h"
#include "msystem.h"
#include "mrpc.h"
#include "mdsupport.h"
#include <iostream>
#include <unistd.h>
#include <stdio.h>  // Added for popen
#include <stdlib.h> // Added for malloc and free

INT hBufEvent;

void process_event(EVENT_HEADER *pheader) {
    printf("Received event #%d\n", pheader->serial_number);
    printf("Event ID: %d\n", pheader->event_id);
    printf("Data Size: %d bytes\n", pheader->data_size);
    printf("Timestamp: %d\n", pheader->time_stamp);
    printf("Trigger mask: %d\n", pheader->trigger_mask);

    // Print a marker to indicate the start of serialized data
    printf("EVENT_DATA_START\n");

    // Serialize and print the event data
    int* eventData = (int*)((char*)pheader + sizeof(EVENT_HEADER));
    int numIntegers = (pheader->data_size - sizeof(EVENT_HEADER)) / sizeof(int);
    for (int i = 0; i < 8; ++i) {
        printf("%d ", eventData[i]);
    }
    printf("\n");

    // Process the event here
}

int main() {
    HNDLE hDB, hKey;
    char host_name[HOST_NAME_LENGTH], expt_name[NAME_LENGTH], str[80];
    char buf_name[32] = EVENT_BUFFER_NAME, rep_file[128];
    unsigned int status, start_time, stop_time;
    INT ch, request_id, size, get_flag, action, single, i;

    // Define the maximum event size you expect to receive
    INT max_event_size = 4000;

    // Allocate memory for storing event data dynamically
    void* event_data = malloc(max_event_size);

    printf("1\n");
    /* Get if existing the pre-defined experiment */
    cm_get_environment(host_name, sizeof(host_name), expt_name, sizeof(expt_name));
    // Print host_name
    printf("host_name = %s\n", host_name);

    // Print expt_name
    printf("expt_name = %s\n", expt_name);

    printf("2\n");
    /* connect to the experiment */
    status = cm_connect_experiment(host_name, expt_name, "data_pipeline", 0);
    if (status != CM_SUCCESS) {
        return 1;
    }
    printf("3\n");

    status = bm_open_buffer(buf_name, DEFAULT_BUFFER_SIZE, &hBufEvent);
    if (status != BM_SUCCESS && status != BM_CREATED) {
        cm_msg(MERROR, "data_pipeline", "Cannot open buffer \"%s\", bm_open_buffer() status %d", buf_name, status);
        return 1;
    }
    printf("4\n");
    /* set the buffer cache size if requested */
    bm_set_cache_size(hBufEvent, 100000, 0);
    printf("5\n");

    /* place a request for a specific event id */
    status = bm_request_event(hBufEvent, EVENTID_ALL, TRIGGER_ALL, GET_ALL, &request_id, NULL); // Use NULL as the callback routine
    printf("6\n");
    printf("status = %d\n",status);
    // Open a pipe to a Python script for data transfer
    FILE* pipe = popen("python3 data_pipeline.py", "w");
    if (pipe == NULL) {
        perror("popen");
        return 1;
    }

    // Enter the event processing loop
    while (1) {
        // Use the address of max_event_size in bm_receive_event
        status = bm_receive_event(hBufEvent, event_data, &max_event_size, BM_WAIT); // Wait for new data indefinitely

        if (status == BM_SUCCESS) {
            //process_event((EVENT_HEADER*)((char*)event_data + sizeof(EVENT_HEADER)));
            // Send the event data to the Python script via the pipe
            fprintf(pipe, "EVENT_DATA_START\n");
            int* eventData = (int*)((char*)event_data + sizeof(EVENT_HEADER));
            int numIntegers = (max_event_size - sizeof(EVENT_HEADER)) / sizeof(int);
            for (int i = 4; i < 12; ++i) {
                fprintf(pipe, "%d ", eventData[i]);
            }
            fprintf(pipe, "\n");
            fflush(pipe); // Flush the buffer to ensure data is sent immediately
        } else {
            printf("Error receiving event: %d\n", status);
            break; // Exit the loop if an error occurs
        }
    }

    // Close the pipe
    pclose(pipe);

    // Free the dynamically allocated memory
    free(event_data);

    cm_disconnect_experiment();

    printf("7\n");
    return 1;
}
Attachment 2: MidasConnector.cpp
#include "MidasConnector.h"

MidasConnector::MidasConnector(const char* clientName) {
    // Initialize client name
    strncpy(client_name_, clientName, NAME_LENGTH);

    // Get host name and experiment name from environment
    cm_get_environment(host_name_, sizeof(host_name_), experiment_name_, sizeof(experiment_name_));

    // Initialize other private variables if needed
    event_id = EVENTID_ALL;  // Initialize with default value
    trigger_mask = TRIGGER_ALL;  // Initialize with default value
    sampling_type = GET_ALL;  // Initialize with default value (renamed from get_flags)
    buffer_size = DEFAULT_BUFFER_SIZE;  // Initialize with default value
    timeout_millis = BM_WAIT;
    strncpy(buffer_name, EVENT_BUFFER_NAME, sizeof(buffer_name));  // Initialize with default value
}

// Getters for the private variables
short MidasConnector::getEventId() const {
    return event_id;
}

short MidasConnector::getTriggerMask() const {
    return trigger_mask;
}

int MidasConnector::getSamplingType() const {
    return sampling_type;  
}

int MidasConnector::getBufferSize() const {
    return buffer_size;
}

const char* MidasConnector::getBufferName() const {
    return buffer_name;
}

int MidasConnector::getTimeout() const {
    return timeout_millis;
}

HNDLE MidasConnector::getEventBufferHandle() const {
    return hBufEvent;
}

// Setters for the private variables
void MidasConnector::setEventId(short eventId) {
    event_id = eventId;
}

void MidasConnector::setTriggerMask(short triggerMask) {
    trigger_mask = triggerMask;
}

void MidasConnector::setSamplingType(int samplingType) {
    sampling_type = samplingType;  
}

void MidasConnector::setBufferSize(int bufferSize) {
    buffer_size = bufferSize;
}

void MidasConnector::setBufferName(const char* bufferName) {
    strncpy(buffer_name, bufferName, sizeof(buffer_name));
}

void MidasConnector::setTimeout(int timeoutMillis) {
    timeout_millis = timeoutMillis;
}

void MidasConnector::setEventBufferHandle(HNDLE eventBufferHandle) {
    hBufEvent = eventBufferHandle;
}


bool MidasConnector::ConnectToExperiment() {
    // Connect to the experiment
    int status = cm_connect_experiment(host_name_, experiment_name_, client_name_, NULL);
    if (status != CM_SUCCESS) {
        // Handle connection error
        return false;
    }
    return true;
}

void MidasConnector::DisconnectFromExperiment() {
    // Disconnect from the experiment
    cm_disconnect_experiment();
}

bool MidasConnector::OpenEventBuffer() {
    int status = bm_open_buffer(buffer_name, buffer_size, &hBufEvent);
    if (status != BM_SUCCESS && status != BM_CREATED) {
        cm_msg(MERROR, client_name_, "Cannot open buffer \"%s\", bm_open_buffer() status %d", buffer_name, status);
        return false;
    }
    return true;
}

bool MidasConnector::SetCacheSize(int cacheSize) {
    bm_set_cache_size(hBufEvent, cacheSize, 0);
    return true;
}

bool MidasConnector::RequestEvent() {
    int request_id;
    int status = bm_request_event(hBufEvent, event_id, trigger_mask, sampling_type, &request_id, NULL);
    return status == BM_SUCCESS;
}

bool MidasConnector::ReceiveEvent(void* eventBuffer, int& maxEventSize) {
    int status = bm_receive_event(hBufEvent, eventBuffer, &maxEventSize, timeout_millis);
    return status == BM_SUCCESS;
}

Attachment 3: main.cpp
#include "event_processor/EventProcessor.h"
#include "data_transmitter/DataTransmitter.h"
#include "midas_connector/MidasConnector.h"
#include "json.hpp"
#include <fstream>

INT hBufEvent1;
INT hBufEvent2;

// Function to initialize MIDAS and open an event buffer
bool initializeMidas(MidasConnector& midasConnector, const nlohmann::json& config) {
    // Set the MidasConnector properties based on the config
    midasConnector.setEventId(config["eventId"].get<short>());
    midasConnector.setTriggerMask(config["triggerMask"].get<short>());
    midasConnector.setSamplingType(config["samplingType"].get<int>());
    midasConnector.setBufferSize(config["bufferSize"].get<int>());
    midasConnector.setBufferName(config["bufferName"].get<std::string>().c_str());
    midasConnector.setBufferSize(config["bufferSize"].get<int>());

    // Call the ConnectToExperiment method
    if (!midasConnector.ConnectToExperiment()) {
        return false;
    }

    // Call the OpenEventBuffer method
    if (!midasConnector.OpenEventBuffer()) {
        return false;
    }

    // Set the buffer cache size if requested
    midasConnector.SetCacheSize(config["cacheSize"].get<int>());

    // Place a request for a specific event id
    if (!midasConnector.RequestEvent()) {
        return false;
    }

    return true;
}

int main() {
    // Read configuration from a JSON file
    nlohmann::json config;
    std::ifstream configFile("config.json");
    configFile >> config;
    configFile.close();

    // Initialize MidasConnector and connect to the MIDAS experiment
    MidasConnector midasConnector(config["clientName"].get<std::string>().c_str());
    if (!initializeMidas(midasConnector, config)) {
        printf("Error: Failed to initialize MIDAS.\n");
        return 1;
    }

    // Read the maximum event size from the JSON configuration
    INT max_event_size = config["maxEventSize"].get<int>();

    // Allocate memory for storing event data dynamically
    void* event_data = malloc(max_event_size);

    // Initialize EventProcessor with detector mapping file and verbosity flag
    EventProcessor eventProcessor(config["detectorMappingFile"].get<std::string>(), config["verbose"].get<bool>());

    // Initialize DataTransmitter with the ZeroMQ address
    DataTransmitter dataPublisher(config["zmqAddress"].get<std::string>());

    // Connect to the ZeroMQ server
    if (!dataPublisher.bind()) {
        // Handle connection error
        printf("Error: Failed to bind to port %s.\n", config["zmqAddress"].get<std::string>().c_str());
        return 1;
    } else {
        printf("Connected to the ZeroMQ server.\n");
    }


    // Event processing loop
    while (true) {

        midasConnector.ReceiveEvent(event_data, max_event_size);

        // Process the data once we have it
        eventProcessor.processEvent(event_data, max_event_size);

        // Serialize the event data with EventProcessor and store it in serializedData
        std::string serializedData = eventProcessor.getSerializedData();

        // Send the serialized data to the ZeroMQ server with DataTransmitter
        if (!dataPublisher.publish(serializedData)) {
            // Handle send error
            printf("Error: Failed to send serialized data.\n");
        }
    }

    // Cleanup and finalize your application
    midasConnector.DisconnectFromExperiment(); // Disconnect from the MIDAS experiment

    return 0;
}
  2886   05 Nov 2024   Maia Henriksson-Ward   Forum   How to properly write a client listens for events on a given buffer?
> If there's some template for writing a client to access event data, that would be 
> very useful (and you can probably just ignore the context I gave below in that 
> case).
> 
> 
> Some context:
> 
> Quite a while ago, I wrote the attached "data pipeline" client whose job was to 
> listen for events, copy their data, and pipe them to a python script. I believe I 
> just stole bits and pieces from mdump.cxx to accomplish this. Later I wrote the 
> attached wrapper class "MidasConnector.cpp" and a main.cpp to generalize
> data_pipeline.cxx a bit. There were a lot of iterations to the code where I had the 
> below problems; so don't take the logic in the attached code as the exact code that 
> caused the issues below.
> 
> However, I'm unable to resolve a couple issues:
> 
> 1. If a timeout is set, everything will work until that timeout is reached. Then 
> regardless of what kind of logic I tried to implement (retry receiving event, 
> disconnect and reconnect client, etc.) the client would refuse to receive more data.
> 
> 2. When I ctrl-C main, it hangs; this is expected because it's stuck in a while 
> loop. But because I can't set a timeout I have to ctrl-C twice; this would 
> occasionally corrupt the ODB which was not ideal. I was able to get around this with 
> some impractical solution involving ncurses I believe.
> 
> 
> Thanks,
> Jack

midas/examples/lowlevel/consume.cxx might be what you're looking for, but I think all 
you're missing is a call to cm_yield() in your loop, so your midas client doesn't get 
killed when the timeout is reached (and also so you can act on shutdown requests from 
midas)

Something like 
      int status = cm_yield(100);
      if (status == SS_ABORT || status == RPC_SHUTDOWN)
         break;

There might be a recommended way to handle the ctrl-c and disconnect from the ODB, but 
off the top of my head I don't remember it. 

Also check out Ben's new(ish) python library, midas/python/examples/event_receiver.py 
might be a much easier solution. And you can use the context manager, which will take 
care of safely disconnecting from midas after you ctrl-C.
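
Putting that together, a minimal sketch of such a receive loop, based on the bm_receive_event() call from the attached client plus the cm_yield() check above (this assumes the connect / open-buffer / request-event setup from the attachment has already been done, and uses BM_NO_WAIT / BM_ASYNC_RETURN to poll the buffer instead of blocking inside bm_receive_event()):

#include <stdio.h>
#include "midas.h"

// Sketch only: hBufEvent, event_data and max_event_size are set up as in the
// attached data_pipeline client (connect, open buffer, request events).
void event_loop(HNDLE hBufEvent, void *event_data, INT max_event_size)
{
   while (true) {
      INT size = max_event_size;   // reset each time: bm_receive_event() overwrites it
                                   // with the actual event size
      INT status = bm_receive_event(hBufEvent, event_data, &size, BM_NO_WAIT);
      if (status == BM_SUCCESS) {
         // process the event ('size' bytes, starting with an EVENT_HEADER) here
      } else if (status != BM_ASYNC_RETURN) {
         printf("bm_receive_event() returned %d\n", status);
         break;
      }

      status = cm_yield(100);      // let midas handle RPCs, watchdog, shutdown requests
      if (status == SS_ABORT || status == RPC_SHUTDOWN)
         break;
   }
}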
  2914   02 Dec 2024   Stefan Ritt   Forum   "Safe" abort of sequencer scripts
The atexit() function has been implemented in the current develop branch of midas, see

  https://daq00.triumf.ca/MidasWiki/index.php/Sequencer#ATEXIT_subroutine


Stefan
  2926   29 Dec 2024   Pavel Murat   Forum   time ordering of run transition calls to TMFeEquipment things
Dear MIDAS experts, 

I have a question about "tmfe approach" to implementing MIDAS frontends. If I read the code correctly, 
within this approach it is the TMFeEquipment things, not the TMFrontend's themselves, 
which handle the run transitions - the TMFrontend class 

https://bitbucket.org/tmidas/midas/src/423082fb67c7711813fcda61f7cd03784c398f49/include/tmfe.h#lines-306:378

simply doesn't have methods to handle those directly. 

So how does a user control the sequence in which TMFeEquipment::HandleBeginRun functions of different 
TMFeEquipment pieces are called at begin run? - there are two cases to consider: TMFeEquipment things 
defined by the same TMFrontend and by different TMFrontend's.

Many thanks and happy holidays to everyone! 

-- regards, Pasha
 
  2927   01 Jan 2025   Konstantin Olchanski   Forum   time ordering of run transition calls to TMFeEquipment things
> I have a question about "tmfe approach" to implementing MIDAS frontends. If I read the code correctly, 
> within this approach it is the TMFeEquipment things, not the TMFrontend's themselves, 
> which handle the run transitions - the TMFrontend class

that's correct and it is documented so in https://bitbucket.org/tmidas/midas/src/develop/tmfe.md

> So how does a user control the sequence in which TMFeEquipment::HandleBeginRun functions of different 
> TMFeEquipment pieces are called at begin run? - there are two cases to consider: TMFeEquipment things 
> defined by the same TMFrontend and by different TMFrontend's.

I am not sure what you are trying to do. It is always easier to suggest a solution to a specific problem.

But I will try to answer anyway:

1) "time ordering of run transitions" - of course midas transitions are ordered by transition sequence numbers 
and the tmfe class provides methods to control this. ditto for the mfe.cxx frontends.

2) for one TMFrontend, the order of calling HandleBeginRun() is the order in which the equipments were added to the 
frontend using FeAddEquipment(). HandleEndRun() is called in reverse order. (I had better check this.)

3) to have multiple TMFrontends in one program would be unusual (mfe.cxx frontends do not support 
this at all), but it should work. Everything was coded to support this, but it was never tested in practice because we 
could not come up with any useful use-case for it. HandleBeginRun() handlers are likely to be called in the order the frontends were 
created. (I could check this and confirm it works, as long as you have a valid use-case for this configuration.)

4) Frontend X has EquipmentA and EquipmentB, you want EqA::HandleBeginRun() to be called at run transition 200 
and EqB::HandleBeginRun() to be called at run transition 400.

This is not directly supported by mfe.cxx frontends (the begin_run() handler is a global function) and I did not 
directly implement it in the TMFE frontend.

But I think this would be a useful improvement. I will look into this.

Likely I will add per-equipment data members fEqConfBeginRunSeqNo, fEqConfEndRunSeqno, etc. Value 0 would 
unregister the corresponding run transition handler. This would cleanup the code quite a bit, a bunch
of RegisterTranstionXXX functions could go away.

K.O.