Message ID: 2826     Entry time: 05 Sep 2024     In reply to: 2825     Reply to this: 2827   2828
Author: Ben Smith 
Topic: Forum 
Subject: Python frontend rate limitations? 
> What limits the rate that poll_func is called in a python frontend? 

First, the general advice: if you reduce the "Period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period".
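
If you want to change it programmatically, a minimal sketch using the python client would look something like this (I'm assuming the usual midas.client.MidasClient / odb_set calls here; editing the value through the mhttpd ODB page works just as well):

import midas.client

# Connect to the experiment and set the equipment period to 0 ms,
# i.e. call the readout function as often as possible.
client = midas.client.MidasClient("period_setter")
client.odb_set("/Equipment/Python Data Simulator/Common/Period", 0)
client.disconnect()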

If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.
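
As a rough sketch of what I mean (create_bank and midas.TID_DOUBLE are the standard python wrapper calls; the __init__ / equipment common settings are omitted here, adapt the rest to your frontend):

import midas
import midas.frontend
import midas.event

class FastEquipment(midas.frontend.EquipmentBase):
    # __init__ / common settings omitted; same as in the standard python
    # frontend example.

    def readout_func(self):
        # Build several midas events per call and return them as a list.
        # Each one gets sent, so you're no longer limited to one event
        # per period/poll.
        events = []
        for _ in range(100):
            event = midas.event.Event()
            event.create_bank("DATA", midas.TID_DOUBLE, [1.0, 2.0, 3.0])
            events.append(event)
        return events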


However, in your case the limitation is likely that you're sending 1.25 MB per event, and we have a lot of data marshalling to do between the python and C++ layers. In particular it takes 15 ms on my machine just to pack the data into a memory buffer (see the timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.

I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.


Cheers,
Ben


P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.
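
I don't have your exact code in front of me, but the pattern I mean is roughly this (names are hypothetical, adapt to your poll_func):

class PolledEquipmentStats:
    # Stand-in for the relevant bits of your polled equipment class.
    def __init__(self):
        self.poll_count = 0   # number of times poll_func was called
        self.event_count = 0  # number of times data was actually ready

    def poll_func(self):
        # Update the poll counter on *every* call, not only when we
        # return True, otherwise the computed rates come out wrong.
        self.poll_count += 1
        data_ready = self.check_hardware()  # placeholder for your readiness check
        if data_ready:
            self.event_count += 1
        return data_ready

    def check_hardware(self):
        # Placeholder; your real check goes here.
        return False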


P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:

python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
20 loops, best of 5: 15.3 msec per loop
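
For comparison (untested here, but it should give an idea of the headroom), packing straight from a numpy array by converting to big-endian bytes avoids going through 1.25 million python objects:

python -m timeit -s "import numpy as np;arr = np.zeros(1250000)" "buf = arr.astype('>f8').tobytes()"

The resulting bytes have the same layout as the ">1250000d" struct format, so in principle the python/C++ packing layer could special-case numpy arrays this way.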