Message ID: 2827     Entry time: 05 Sep 2024     In reply to: 2826
Author: Jack Carlton 
Topic: Forum 
Subject: Python frontend rate limitations? 
Thank you, this was very helpful.

> First the general advice: if you reduce the "period" of your equipment, then your function will get called more frequently. You can set it to 0 and we'll call it as often as possible. You can set this in the ODB at "/Equipment/Python Data Simulator/Common/Period"

Thanks, I thought that was just for periodic triggering (or at least that's how I've used it in C++ frontends). Changing this allowed me to get past the 100Hz event rate cap I described.
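
In case it helps anyone else who hits the same cap: the period can also be changed from the shell with odbedit's "set" command (the path needs quoting because of the spaces), e.g.

odbedit -c 'set "/Equipment/Python Data Simulator/Common/Period" 0'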

> If that's still not fast enough, then you can return a *list* of events from your readout_func. I've seen real-world cases of 25kHz+ of midas events generated in this fashion.
> 
> 
> However in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer. In particular it takes 15ms on my machine to just pack the data into a memory buffer (see timeit command below). I am sure there must be a faster way to do this packing, especially in the case where the bank contains a numpy array rather than a python list.
> 
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.
> 
> 
> Cheers,
> Ben
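
To make the "return a list of events" suggestion concrete for anyone else reading: below is a rough sketch written against the class and constant names used in the midas python examples (midas.frontend.FrontendBase, EquipmentBase, InitialEquipmentCommon, midas.event.Event.create_bank). The equipment name, bank name, batch size and zero-filled data are placeholders, not my actual frontend code.

import numpy as np
import midas
import midas.frontend
import midas.event

class DataSimulator(midas.frontend.EquipmentBase):
    def __init__(self, client):
        default_common = midas.frontend.InitialEquipmentCommon()
        default_common.equip_type = midas.EQ_POLLED
        default_common.buffer_name = "SYSTEM"
        default_common.trigger_mask = 0
        default_common.event_id = 1
        default_common.period_ms = 0           # 0 -> readout_func called as often as possible
        default_common.read_when = midas.RO_RUNNING
        default_common.log_history = 0
        midas.frontend.EquipmentBase.__init__(self, client, "Python Data Simulator", default_common)

    def poll_func(self):
        # Placeholder readiness check; the real frontend would test its data source here.
        return True

    def readout_func(self):
        # Return a batch of events per call to reduce the number of
        # python/C++ crossings; the batch size of 10 is arbitrary.
        events = []
        for _ in range(10):
            event = midas.event.Event()
            data = np.zeros(1250000, dtype=np.float64)   # stand-in for real waveform data
            event.create_bank("SIMD", midas.TID_DOUBLE, data)
            events.append(event)
        return events

class MyFrontend(midas.frontend.FrontendBase):
    def __init__(self):
        midas.frontend.FrontendBase.__init__(self, "data_simulator")
        self.add_equipment(DataSimulator(self.client))

if __name__ == "__main__":
    fe = MyFrontend()
    fe.run()

The batching mainly helps with per-call overhead; going by the timing quoted above, at 1.25 MB per event the packing itself is still the dominant cost.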


> P.S. You may have a bug in your calculations (depending on how you did your testing). In poll_func I think you should be updating the stats every time the function is called, not just the times when you return True.

I had tested it the way you described at first, then later changed it.
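
In case it's useful to spell out the difference, here is a minimal stand-alone sketch of the counting you describe, where every call updates the stats and only the calls that return True are counted as events (check_for_data() is a placeholder, not part of the midas API):

import time

class RateCountingPoller:
    def __init__(self):
        self.n_calls = 0
        self.n_events = 0
        self.t0 = time.time()

    def check_for_data(self):
        # Placeholder: pretend data is always ready.
        return True

    def poll_func(self):
        self.n_calls += 1               # update stats on every call...
        ready = self.check_for_data()
        if ready:
            self.n_events += 1          # ...and separately count the calls that produce an event
        dt = time.time() - self.t0
        if dt >= 1.0:
            print(f"{self.n_calls/dt:.0f} polls/s, {self.n_events/dt:.0f} events/s")
            self.n_calls = 0
            self.n_events = 0
            self.t0 = time.time()
        return ready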

> P.P.S. Command I used to test how slow it is to pack the data. One-time setup of creating the buffers, then multiple tests of the pack_into function:
> 
> python -m timeit -s "import struct;import ctypes;arr = [0]*1250001;buf = ctypes.create_string_buffer(10000000);fmt = \">1250000d\"" "struct.pack_into(fmt, buf, *arr)"
> 20 loops, best of 5: 15.3 msec per loop
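
For comparison, the bulk conversion of the same 1.25M doubles with numpy can be timed the same way. This isn't what the midas python code does internally, just a rough way to see how much of the 15 ms is the per-element struct packing:

python -m timeit -s "import numpy as np; arr = np.zeros(1250000); buf = bytearray(10000000)" "buf[:] = arr.astype('>f8').tobytes()"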