ID | Date | Author | Topic | Subject
  2863 | 07 Oct 2024 | Konstantin Olchanski | Bug Report | Difficulty running MIDAS on Rocky 9.4
> We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual 
> Machine and are getting a persistent error when we run mserver.
>
> [mserver,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore, 
> timeout 10000 ms, exiting...
> db_lock_database: Detected recursive call to db_{lock,unlock}_database() while 
> already inside db_{lock,unlock}_database(). Maybe this is a call from a signal 
> handler. Cannot continue, aborting...
> Aborted (core dumped)

This is very bad. Since you have a core dump, please post the stack trace here (or email it to me).

I probably cannot debug your private version of midas, so I recommend that you install and run the vanilla midas 
mserver (just while we debug this problem).

Let's look at the core dump stack trace first. Most likely we will see a problem with System-V semaphores, and hopefully it 
is not some breakage due to Red Hat bogosity or something specific to running on a virtual machine.

If this is indeed Linux-kernel-level breakage of System-V semaphores, the solution would be to start using Posix 
semaphores, something I have wanted to do for a long time. We already switched MIDAS shared memory from System-V to Posix 
shared memory.
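
For reference, a minimal sketch (mine, not MIDAS code) of the Posix named-semaphore API that such a switch would use in place of the System-V calls:

#include <fcntl.h>       /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>

int main()
{
   /* one named semaphore per experiment, initial value 1 (mutex-style) */
   sem_t *sem = sem_open("/example_odb_sem", O_CREAT, 0600, 1);
   if (sem == SEM_FAILED) { perror("sem_open"); return 1; }
   sem_wait(sem);    /* lock the ODB */
   /* ... access the shared memory ... */
   sem_post(sem);    /* unlock */
   sem_close(sem);
   return 0;
}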

If we are lucky it is just one more crasher bug in ODB. Let's see that core dump stack trace.

K.O.
  2862 | 07 Oct 2024 | Ben Smith | Bug Report | Difficulty running MIDAS on Rocky 9.4
> We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual 
> Machine and are getting a persistent error when we run mserver.  As far as I 
> know there are minimal changes between this and the MIDAS branch, but Ben Smith 
> may have more to say on this.

For reference, "the SuperCDMS version of MIDAS" is just a fork that no longer has any meaningful differences from the main MIDAS repo; we only pull updates infrequently, after testing thoroughly. We last pulled from the develop branch in November 2023. But that should be irrelevant here, as the semaphore code hasn't been touched for a very long time.

We're running Alma 9.4 on a machine at TRIUMF and the same version of midas works fine there (Amy, you may already have access to scdms-zeus). I believe Alma and Rocky should be basically identical for this.

So the questions are:
* Have you tried other midas programs, or only mserver? E.g. did odbedit and mhttpd work?
* If other programs work, have you been running them all as the same user? In particular, if you ran one program as root and another as an unprivileged user, then you will likely get odd permissions issues.
* What do you see if you run `ls -l /dev/shm` and `ls -l ~/packages/SuperCDMS_DAQ/MidasDAQ/online/.*SHM`? (Or wherever your online dir is for the 2nd one).
* Did you follow the full instructions for recovering from a corrupt ODB? https://daq00.triumf.ca/MidasWiki/index.php/FAQ#How_to_recover_from_a_corrupted_ODB   In particular the bit about running odbinit with the --cleanup flag?
  2861 | 07 Oct 2024 | Amy Roberts | Bug Report | Difficulty running MIDAS on Rocky 9.4
We're trying to install the SuperCDMS version of MIDAS on a Rocky 9.4 Virtual 
Machine and are getting a persistent error when we run mserver.  As far as I 
know there are minimal changes between this and the MIDAS branch, but Ben Smith 
may have more to say on this.

[lekhraj@sdfcdmsdaq online]$ mserver
mserver started interactively
[mserver,INFO] Client 'ODBEdit' on buffer 'SYSMSG' removed by bm_open_buffer 
because process pid 481051 does not exist
mserver will listen on TCP port 1175
[mserver,ERROR] [odb.cxx:2498:db_lock_database,ERROR] cannot lock ODB semaphore, 
timeout 10000 ms, exiting...
[mserver,ERROR] [midas.cxx:2205:cm_check_connect,ERROR] cm_disconnect_experiment 
not called at end of program
db_lock_database: Detected recursive call to db_{lock,unlock}_database() while 
already inside db_{lock,unlock}_database(). Maybe this is a call from a signal 
handler. Cannot continue, aborting...
Aborted (core dumped)

We thought perhaps we had a corrupted ODB file, so we removed the ODB file and 
tried to create a new one (sized correctly for our experiment):

[lekhraj@sdfcdmsdaq online]$ odbedit -s 50000000
[ODBEdit,ERROR] [odb.cxx:2052:db_open_database,ERROR] Removed ODB client 
'mserver', index 0 because process pid 481326 does not exists
[ODBEdit,INFO] Removed open record flag from "/Experiment/Security/RPC 
hosts/Allowed hosts"
[ODBEdit,INFO] Removed exclusive access mode from "/Experiment/Security/RPC 
hosts/Allowed hosts"
[ODBEdit,INFO] Corrected 1 ODB entries
[ODBEdit,INFO] Deleted entry '/System/Clients/481326' for client 'mserver' 
because it is not connected to ODB
[ODBEdit,INFO] Client 'mserver' on buffer 'SYSMSG' removed by bm_open_buffer 
because process pid 481326 does not exist
[local:test:S]/>Bus error (core dumped)
  2860 | 27 Sep 2024 | Ben Smith | Forum | Python frontend rate limitations?
> in your case the limitation is likely that you're sending 1.25MB per event and we have a lot of data marshalling to do between the python and C++ layer.
> 
> I'll add it to my to-do list to investigate improving the performance of medium-to-large events in the python code.

I've now added better support for numpy arrays in the python code that encodes a `midas.event.Event` object. If you use the "correct" numpy data type then you can get vastly improved performance as numpy already stores the data in memory in the format that we need.

In your example, if you change
        self.zero_buffer = [0] * self.total_data_size
to 
        self.zero_buffer = np.zeros(self.total_data_size, np.int16)

then the max data rate of the frontend goes from 330MB/s to 7600MB/s on my laptop (a factor 20 improvement from one line of code!) 

To ensure you're using the optimal numpy dtype for your bank, you can reference a dict called `midas.tid_np_formats`. For example `midas.tid_np_formats[midas.TID_SHORT]` is equivalent to `np.int16`. If you use an int16 array and write it as a TID_SHORT bank, then we'll use the fast path. If there is a mismatch, we'll have to do type conversions and will end up on the slow path.
  2859 | 24 Sep 2024 | Konstantin Olchanski | Suggestion | Clean up compiler warning in manalyzer
> > I like the look of std::format, looks cleaner than string streams
> 
> I fully agree. String streams are a pain if you want to do zero-leading hex output mixed with decimal output. Yes, it's easier to read if you don't know printf syntax,
> but it's 10-20 times more characters to write and not necessarily cleaner.
>

IMO c++ string stream formatting is optimized for "hello world" and is useless for printing hex numbers, table-formatted data and generally anything real-world.

plus std::to_string() is borked (it takes a global lock for the "C" locale); it was "fixed" by introducing std::to_chars() in C++17,
with the "ultimate fix" of redefining std::to_string() in terms of std::format coming in C++26.

no wonder C++ has a bad reputation. for a "done right" example, take a look at the Go standard library.
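
A tiny comparison (my illustration, not from the thread; std::format requires C++20):

#include <cstdio>
#include <format>     // C++20
#include <iomanip>
#include <iostream>
#include <sstream>

int main()
{
   unsigned addr = 0xbeef;
   // string streams: sticky manipulators, lots of typing for one hex field
   std::ostringstream ss;
   ss << "addr 0x" << std::hex << std::setw(8) << std::setfill('0') << addr
      << std::dec << " len " << 1024;
   std::cout << ss.str() << "\n";
   // printf: compact, but not type-safe
   std::printf("addr 0x%08x len %d\n", addr, 1024);
   // std::format: compact, and the format string is checked at compile time
   std::cout << std::format("addr 0x{:08x} len {}\n", addr, 1024);
   return 0;
}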

> 
> Problem is that we would have to convert a few thousand sprintf()'s in midas.
> 

surprisingly few bare sprintf() calls remain in MIDAS, most of them overflow-safe and most of them to be converted to msprintf().

K.O.
  2858 | 24 Sep 2024 | Konstantin Olchanski | Bug Report | Can we convert the .mid file into .root file
"Can we convert the .mid file into .root file".

yes, you can, but the operation is under-defined. it's like asking "can I convert these stones into houses". the answer is "yes", but it involves 
more than running a universal conversion program.

For this reason, I recommend against converting midas files "to root". for some types of midas data such a conversion makes no sense (e.g. alpha-g 
streamed udp packets with chopped compressed waveforms).

I recommend that you analyze your data in the midas analyzer. You can start with manalyzer_example_root.cxx,
it shows how to create a ROOT histogram, how to access midas event bank data and call the TH1 "Fill" method.

Instead of filling histograms in the analyzer, you can create a ROOT TTree and fill it with data from midas data banks,
effectively you will create your own custom converter from midas to root.

The key thing is that it has to be a custom converter, because only you know the meaning of midas bank data
and how it should be best stored in a root tree.
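
As a rough sketch of such a custom converter (the module skeleton follows the manalyzer TARunObject interface used in manalyzer_example_root.cxx; the bank name "ADC0" and its int16 layout are hypothetical, and the factory boilerplate is omitted):

#include <cstdint>
#include <vector>
#include "manalyzer.h"
#include "midasio.h"
#include "TTree.h"

class ExampleTreeModule: public TARunObject
{
public:
   TTree* fTree = nullptr;
   int fNhits = 0;
   std::vector<int> fAdc;

   ExampleTreeModule(TARunInfo* runinfo) : TARunObject(runinfo) { }

   void BeginRun(TARunInfo* runinfo)
   {
      // manalyzer has already opened the output ROOT file for this run,
      // so the new TTree attaches to it automatically
      fTree = new TTree("adc", "unpacked ADC0 bank");
      fTree->Branch("nhits", &fNhits);
      fTree->Branch("adc", &fAdc);
   }

   TAFlowEvent* Analyze(TARunInfo* runinfo, TMEvent* event, TAFlags* flags, TAFlowEvent* flow)
   {
      TMBank* b = event->FindBank("ADC0");   // hypothetical bank name
      if (b) {
         const int16_t* p = (const int16_t*)event->GetBankData(b);
         fNhits = b->data_size / sizeof(int16_t);
         fAdc.assign(p, p + fNhits);
         fTree->Fill();   // one tree entry per midas event
      }
      return flow;
   }
};

// TAFactory registration omitted; see manalyzer_example_root.cxx for the full skeleton.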

K.O.
  2857 | 24 Sep 2024 | Stefan Ritt | Info | News MSCB++ API
> Where is the example of error handling?

#include "mscbxx.h"
#include "mexcept.h"

...
   try {
   
      // connect to node 10 at submaster mscb123
      midas::mscb m("mscb123", 10);

      // print a variable
      std::cout << m["Input0"] << std::endl;
   
   } catch (const mexception &e) {
      std::cout << e << std::endl; // simply print exception
   }
...
  2856 | 22 Sep 2024 | Tam Kai Chung | Bug Report | Can we convert the .mid file into .root file
Dear experts, 
I am a new user of MIDAS. I have just created some banks by a frontend.cxx code.
Now, I would like to do some analysis from the data.

I have an analyzer.cxx code (A very simple one without complicated routine).

I tried to link analyzer.o with rmana.o and libmidas.a to create analyzer.exe.

I am not sure whether I can do the analysis offline in the following way:

analyzer.exe -i run00001.mid -o run00001.root

When I run this command, I get the following error:

Error in <TClass::LoadClassInfo>: no interpreter information for class TSocket is available even though it has a TClass initialization routine.

I am using ROOT 6.30.

Any suggestion about this issue? Thank you.

Best,
Terry
  2855 | 20 Sep 2024 | Stefan Ritt | Suggestion | Clean up compiler warning in manalyzer
> I like the look of std::format, looks cleaner than string streams

I fully agree. String streams are a pain if you want to do zero-leading hex output mixed with decimal output. Yes, it's easier to read if you don't know printf syntax,
but it's 10-20 times more characters to write and not necessarily cleaner.

Problem is that we would have to convert a few thousand sprintf()'s in midas.

Stefan
  2854 | 20 Sep 2024 | Joseph McKenna | Suggestion | Clean up compiler warning in manalyzer
> > This is a super small pull request, simple replace deprecated sprintf with snprintf
> > https://bitbucket.org/tmidas/manalyzer/pull-requests/9
> 
> sprintf() is not deprecated and "char buf[256]; sprintf(buf, "%05d", 64-bit-int);" is safe, will never overflow.
> 
> we could bulk-convert all these sprintf() to snprintf() but I would rather wait for this:
> 
> https://en.cppreference.com/w/cpp/utility/format/format
> 
> let me think on this for a bit.
> 
> K.O.

I completely agree that the 64-bit int is safe and will never overflow. Doing a little digging, neither clang nor gcc raises warnings on x86_64 (even with -Wall -Wextra -Wpedantic), even when I give them an impossibly small buffer (two bytes). However, I've narrowed down where the deprecation warning comes from: macOS.

https://developer.apple.com/documentation/kernel/1441083-sprintf

I like the look of std::format, looks cleaner than string streams
  2853 | 20 Sep 2024 | Stefan Ritt | Bug Report | Crash using ODB watch
The problem has been fixed in the current version. Here is my analysis:

- the midas::odb object *can* go out of scope in the function, since the odb::watch() function creates a deep copy of the object. 
This does not cause a memory leak if one calls odb::unwatch_all() at the end of a program (see the sketch after this list).

- The creation from XML had a flaw: the ODB key handle ("hKey") was not initialized, since it is not passed by the db_copy_xml() function.
I added code to db_copy_xml() to also fetch the key handle from the XML file, which now fixes the issue. Please note that you have to
update both the server and client side of midas to get this functionality if you are using it from a remote client.

- I saw the flag MK added in his pull request to the constructor of odb::odb(). This is a way to fight the symptoms (by creating the
object the "old" way if needed), but now we have the cause cured. Nevertheless I added that parameter, but set it to true by default:

   odb::odb(const std::string &str, bool init_via_xml = true);

since this should be fully working now and should always be faster than the old method. I only keep it for debugging, should we observe
another flaw in odb_from_xml().
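
A short usage sketch of what is described above (the callback name my_watch is hypothetical; the rest follows the API as quoted):

void my_watch(midas::odb &o);   // hypothetical callback

{
   midas::odb settings("/Equipment/Test FE/Settings");       // new default: initialized via XML
   midas::odb slow("/Equipment/Test FE/Settings", false);    // old key-by-key method, for debugging
   settings.watch(my_watch);   // watch() keeps a deep copy, so leaving this scope is safe
}
// ... program runs ...
midas::odb::unwatch_all();     // at end of program: frees the deep copies, no leak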

Best regards,
Stefan
  2852 | 18 Sep 2024 | Marius Koeppel | Bug Report | Crash using ODB watch
I created a PR to fix this issue https://bitbucket.org/tmidas/midas/pull-requests/42.
The crash happened because, since the change in commit 3ad98c5, the ODB was always fetched via XML.
However, the creation from XML should only be used when a user wants to read fast (and when we are on a remote machine), so I added the flag use_from_xml to explicitly specify this.


> > {
> > odb new_settings("/Equipment/Test FE/Settings");
> > new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > }
> 
> this code has a bug. "watch" is attached to object "new_settings" that is deleted
> after the closing curly bracket.

> I would say Stefan's odb API should not allow you to write code like this. an API defect.

As pointed out in the thread this feature is explicitly supported by odbxx.cxx:

void odb::watch(std::function<void(midas::odb &)> f) {
      if (m_hKey == 0 || m_hKey == -1)
         mthrow("watch() called for ODB key \"" + m_name +
                "\" which is not connected to ODB");

      // create a deep copy of current object in case it
      // goes out of scope
      midas::odb* ow = new midas::odb(*this);

      ow->m_watch_callback = f;
      db_watch(s_hDB, m_hKey, midas::odb::watch_callback, ow);

      // put object into watchlist
      g_watchlist.push_back(ow);
}

Also in the old way (see for example https://bitbucket.org/tmidas/midas/src/191d13f98626fae533cbca17b00df7ee361edf16/examples/crfe/crfe.cxx#lines-126) it was possible to create a watch in a scope without the user having to make sure that the "object" does not go out of scope.
I think this feature should be supported by the framework.

Best,
Marius
  2851 | 17 Sep 2024 | Konstantin Olchanski | Bug Report | Crash using ODB watch
> {
> odb new_settings("/Equipment/Test FE/Settings");
> new_settings.watch(watch); // <-- here I am getting a segmentation fault
> }

this code has a bug. "watch" is attached to the object "new_settings", which is deleted
after the closing curly bracket.

I would say Stefan's odb API should not allow you to write code like this. an API defect.

K.O.
  2850 | 16 Sep 2024 | Mark Grimes | Bug Report | Crash using ODB watch
Hi,
Maybe I've misunderstood the code, but odb::watch() creates a deep copy of itself to set the watch on.  The comment where this happens says that this is in case the current object goes out of scope.  See https://bitbucket.org/tmidas/midas/src/2878647fb73648474b35223ce53a125180f751b3/src/odbxx.cxx#lines-1393:1395
So as far as I can tell, allowing the current odb instance to go out of scope is supported.

Thanks,

Mark.


> Okay, but this is then a big issue IMO. For Mu3e we do this in every frontend and I also checked again all of these watches are broken at the moment (with commit 3ad98c5 they worked).
>  
> In the old style we did for example (see https://bitbucket.org/tmidas/midas/src/develop/examples/crfe/crfe.cxx):
> 
> INT frontend_init()
> {
>    HNDLE hKey;
> 
>    // create Settings structure in ODB
>    db_create_record(hDB, 0, "Equipment/Clock Reset/Settings", strcomb1(cr_settings_str).c_str());
>    db_find_key(hDB, 0, "/Equipment/Clock Reset", &hKey);
>    assert(hKey);
> 
>    db_watch(hDB, hKey, cr_settings_changed, NULL);
> 
>    /*
>     * Set our transition sequence. The default is 500. Setting it
>     * to 600 means we are called AFTER most other clients.
>     */
>    cm_set_transition_sequence(TR_START, 600);
> 
>    return CM_SUCCESS;
> }
> 
> I thought this will be the same (under the hood) in the current odbxx way via:
> 
> odb settings("Equipment/Clock Reset/Settings");
> settings.watch(cr_settings_changed);
> 
> Best,
> Marius
> 
> 
> > Well, the object *went* out of scope. For my code it‘s hard to realize this, so the error reporting is poor. Also the first object should have the same
> > problem. Just by accident that it does not crash.
> > 
> > Stefan 
> > 
> > > This is not the case here. Note that the error message: "Callback received for a midas::odb object which went out of scope" is not called! The segmentation fault happens later line 96.
> > > 
> > > > The answer is in the error message: „Object went out of scope“. When your frontent_init() exits, the odb objects are destroyed. When you get a callback, it‘s linked to the
> > > > destroyed object. This is like if you have a local string and pass a reference to that string in the return of the function.
> > > > 
> > > > Use a global object (bad) or use „new“ (potential memory leak). I would use a global structure which holds all odb objects.
> > > > 
> > > > Stefan
> > > >  
> > > > > 
> > > > > last week I was running MIDAS with the commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached I have a minimal example frontend of the problem.
> > > > > 
> > > > > In our software we have two functions one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect two time to the ODB during fronend_init one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> > > > > 
> > > > > INT frontend_init() {
> > > > > 
> > > > >   cm_msg(MINFO, "frontend_init() setup", "Test FE");
> > > > > 
> > > > >   odb settings = {
> > > > >     {"Test", 123},
> > > > >     {"sub", {}}
> > > > >   };
> > > > >   settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
> > > > >   // settings.watch(watch); <-- this works without segmentation fault
> > > > > 
> > > > >   odb new_settings("/Equipment/Test FE/Settings");
> > > > >   new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > > > > 
> > > > >   return CM_SUCCESS;
> > > > > }
> > > > > 
> > > > > When I directly set the watch everything runs fine however, when I create a new ODB object and use this one to set a watch I am getting the following segmentation fault:
> > > > > 
> > > > > Process 18474 stopped
> > > > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
> > > > >     frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
> > > > >    93  	      if (po->m_data == nullptr)
> > > > >    94  	         mthrow("Callback received for a midas::odb object which went out of scope");
> > > > >    95  	      midas::odb *poh = search_hkey(po, hKey);
> > > > > -> 96  	      poh->m_last_index = index;
> > > > >    97  	      po->m_watch_callback(*poh);
> > > > >    98  	      poh->m_last_index = -1;
> > > > >    99  	   }
> > > > > 
> > > > > Best,
> > > > > Marius
  2849 | 16 Sep 2024 | Marius Koeppel | Bug Report | Crash using ODB watch
Okay, but this is then a big issue IMO. For Mu3e we do this in every frontend, and I checked again: all of these watches are broken at the moment (with commit 3ad98c5 they worked).
 
In the old style we did for example (see https://bitbucket.org/tmidas/midas/src/develop/examples/crfe/crfe.cxx):

INT frontend_init()
{
   HNDLE hKey;

   // create Settings structure in ODB
   db_create_record(hDB, 0, "Equipment/Clock Reset/Settings", strcomb1(cr_settings_str).c_str());
   db_find_key(hDB, 0, "/Equipment/Clock Reset", &hKey);
   assert(hKey);

   db_watch(hDB, hKey, cr_settings_changed, NULL);

   /*
    * Set our transition sequence. The default is 500. Setting it
    * to 600 means we are called AFTER most other clients.
    */
   cm_set_transition_sequence(TR_START, 600);

   return CM_SUCCESS;
}

I thought this would be the same (under the hood) in the current odbxx way via:

odb settings("Equipment/Clock Reset/Settings");
settings.watch(cr_settings_changed);

Best,
Marius


> Well, the object *went* out of scope. For my code it‘s hard to realize this, so the error reporting is poor. Also the first object should have the same
> problem. Just by accident that it does not crash.
> 
> Stefan 
> 
> > This is not the case here. Note that the error message: "Callback received for a midas::odb object which went out of scope" is not called! The segmentation fault happens later line 96.
> > 
> > > The answer is in the error message: „Object went out of scope“. When your frontent_init() exits, the odb objects are destroyed. When you get a callback, it‘s linked to the
> > > destroyed object. This is like if you have a local string and pass a reference to that string in the return of the function.
> > > 
> > > Use a global object (bad) or use „new“ (potential memory leak). I would use a global structure which holds all odb objects.
> > > 
> > > Stefan
> > >  
> > > > 
> > > > last week I was running MIDAS with the commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached I have a minimal example frontend of the problem.
> > > > 
> > > > In our software we have two functions one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect two time to the ODB during fronend_init one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> > > > 
> > > > INT frontend_init() {
> > > > 
> > > >   cm_msg(MINFO, "frontend_init() setup", "Test FE");
> > > > 
> > > >   odb settings = {
> > > >     {"Test", 123},
> > > >     {"sub", {}}
> > > >   };
> > > >   settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
> > > >   // settings.watch(watch); <-- this works without segmentation fault
> > > > 
> > > >   odb new_settings("/Equipment/Test FE/Settings");
> > > >   new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > > > 
> > > >   return CM_SUCCESS;
> > > > }
> > > > 
> > > > When I directly set the watch everything runs fine however, when I create a new ODB object and use this one to set a watch I am getting the following segmentation fault:
> > > > 
> > > > Process 18474 stopped
> > > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
> > > >     frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
> > > >    93  	      if (po->m_data == nullptr)
> > > >    94  	         mthrow("Callback received for a midas::odb object which went out of scope");
> > > >    95  	      midas::odb *poh = search_hkey(po, hKey);
> > > > -> 96  	      poh->m_last_index = index;
> > > >    97  	      po->m_watch_callback(*poh);
> > > >    98  	      poh->m_last_index = -1;
> > > >    99  	   }
> > > > 
> > > > Best,
> > > > Marius
  2848 | 16 Sep 2024 | Stefan Ritt | Bug Report | Crash using ODB watch
Well, the object *went* out of scope. It is hard for my code to detect this, so the error reporting is poor. Also, the first object should have the same
problem; it is just by accident that it does not crash.

Stefan 

> This is not the case here. Note that the error message: "Callback received for a midas::odb object which went out of scope" is not called! The segmentation fault happens later line 96.
> 
> > The answer is in the error message: „Object went out of scope“. When your frontent_init() exits, the odb objects are destroyed. When you get a callback, it‘s linked to the
> > destroyed object. This is like if you have a local string and pass a reference to that string in the return of the function.
> > 
> > Use a global object (bad) or use „new“ (potential memory leak). I would use a global structure which holds all odb objects.
> > 
> > Stefan
> >  
> > > 
> > > last week I was running MIDAS with the commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached I have a minimal example frontend of the problem.
> > > 
> > > In our software we have two functions one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect two time to the ODB during fronend_init one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> > > 
> > > INT frontend_init() {
> > > 
> > >   cm_msg(MINFO, "frontend_init() setup", "Test FE");
> > > 
> > >   odb settings = {
> > >     {"Test", 123},
> > >     {"sub", {}}
> > >   };
> > >   settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
> > >   // settings.watch(watch); <-- this works without segmentation fault
> > > 
> > >   odb new_settings("/Equipment/Test FE/Settings");
> > >   new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > > 
> > >   return CM_SUCCESS;
> > > }
> > > 
> > > When I directly set the watch everything runs fine however, when I create a new ODB object and use this one to set a watch I am getting the following segmentation fault:
> > > 
> > > Process 18474 stopped
> > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
> > >     frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
> > >    93  	      if (po->m_data == nullptr)
> > >    94  	         mthrow("Callback received for a midas::odb object which went out of scope");
> > >    95  	      midas::odb *poh = search_hkey(po, hKey);
> > > -> 96  	      poh->m_last_index = index;
> > >    97  	      po->m_watch_callback(*poh);
> > >    98  	      poh->m_last_index = -1;
> > >    99  	   }
> > > 
> > > Best,
> > > Marius
  2847 | 16 Sep 2024 | Marius Koeppel | Bug Report | Crash using ODB watch
This is not the case here. Note that the error message "Callback received for a midas::odb object which went out of scope" is not triggered! The segmentation fault happens later, at line 96.

> The answer is in the error message: „Object went out of scope“. When your frontent_init() exits, the odb objects are destroyed. When you get a callback, it‘s linked to the
> destroyed object. This is like if you have a local string and pass a reference to that string in the return of the function.
> 
> Use a global object (bad) or use „new“ (potential memory leak). I would use a global structure which holds all odb objects.
> 
> Stefan
>  
> > 
> > last week I was running MIDAS with the commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached I have a minimal example frontend of the problem.
> > 
> > In our software we have two functions one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect two time to the ODB during fronend_init one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> > 
> > INT frontend_init() {
> > 
> >   cm_msg(MINFO, "frontend_init() setup", "Test FE");
> > 
> >   odb settings = {
> >     {"Test", 123},
> >     {"sub", {}}
> >   };
> >   settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
> >   // settings.watch(watch); <-- this works without segmentation fault
> > 
> >   odb new_settings("/Equipment/Test FE/Settings");
> >   new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > 
> >   return CM_SUCCESS;
> > }
> > 
> > When I directly set the watch everything runs fine however, when I create a new ODB object and use this one to set a watch I am getting the following segmentation fault:
> > 
> > Process 18474 stopped
> > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
> >     frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
> >    93  	      if (po->m_data == nullptr)
> >    94  	         mthrow("Callback received for a midas::odb object which went out of scope");
> >    95  	      midas::odb *poh = search_hkey(po, hKey);
> > -> 96  	      poh->m_last_index = index;
> >    97  	      po->m_watch_callback(*poh);
> >    98  	      poh->m_last_index = -1;
> >    99  	   }
> > 
> > Best,
> > Marius
  2846 | 16 Sep 2024 | Stefan Ritt | Bug Report | Crash using ODB watch
The answer is in the error message: "Object went out of scope". When your frontend_init() exits, the odb objects are destroyed. When you get a callback, it is linked to the
destroyed object. This is like having a local string and returning a reference to it from the function.

Use a global object (bad) or use "new" (potential memory leak). I would use a global structure which holds all odb objects.
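
A minimal sketch of the "new" variant mentioned above (a global structure holding several such pointers works the same way; the names are mine, not from the thread):

static midas::odb *gSettings = nullptr;   // lives until the program exits

INT frontend_init()
{
   gSettings = new midas::odb("/Equipment/Test FE/Settings");   // never deleted
   gSettings->watch(watch);   // the watched object can no longer go out of scope
   return CM_SUCCESS;
}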

Stefan
 
> 
> last week I was running MIDAS with the commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached I have a minimal example frontend of the problem.
> 
> In our software we have two functions one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect two time to the ODB during fronend_init one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> 
> INT frontend_init() {
> 
>   cm_msg(MINFO, "frontend_init() setup", "Test FE");
> 
>   odb settings = {
>     {"Test", 123},
>     {"sub", {}}
>   };
>   settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
>   // settings.watch(watch); <-- this works without segmentation fault
> 
>   odb new_settings("/Equipment/Test FE/Settings");
>   new_settings.watch(watch); // <-- here I am getting a segmentation fault
> 
>   return CM_SUCCESS;
> }
> 
> When I directly set the watch everything runs fine however, when I create a new ODB object and use this one to set a watch I am getting the following segmentation fault:
> 
> Process 18474 stopped
> * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
>     frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
>    93  	      if (po->m_data == nullptr)
>    94  	         mthrow("Callback received for a midas::odb object which went out of scope");
>    95  	      midas::odb *poh = search_hkey(po, hKey);
> -> 96  	      poh->m_last_index = index;
>    97  	      po->m_watch_callback(*poh);
>    98  	      poh->m_last_index = -1;
>    99  	   }
> 
> Best,
> Marius
  2845 | 16 Sep 2024 | Marius Köppel | Bug Report | Crash using ODB watch
Hi all,

last week I was running MIDAS with commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached is a minimal example frontend demonstrating the problem.

In our software we have two functions: one which sets up the ODB values of the frontend and another which sets up all watch functions. So overall we connect to the ODB two times during frontend_init: one time to create the values and one time to create the watch. The example code shows a simple version of this setup:

INT frontend_init() {

  cm_msg(MINFO, "frontend_init() setup", "Test FE");

  odb settings = {
    {"Test", 123},
    {"sub", {}}
  };
  settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
  // settings.watch(watch); <-- this works without segmentation fault

  odb new_settings("/Equipment/Test FE/Settings");
  new_settings.watch(watch); // <-- here I am getting a segmentation fault

  return CM_SUCCESS;
}

When I directly set the watch everything runs fine; however, when I create a new ODB object and use this one to set a watch, I get the following segmentation fault:

Process 18474 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
    frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
   93  	      if (po->m_data == nullptr)
   94  	         mthrow("Callback received for a midas::odb object which went out of scope");
   95  	      midas::odb *poh = search_hkey(po, hKey);
-> 96  	      poh->m_last_index = index;
   97  	      po->m_watch_callback(*poh);
   98  	      poh->m_last_index = -1;
   99  	   }

Best,
Marius
Attachment 1: test_fe.cpp
#include <algorithm>
#include <math.h>
#include <random>
#include <array>
#include <string>
#include <stdio.h>
#include <stdlib.h>

#include "midas.h"
#include "msystem.h"
#include "odbxx.h"
#include "mfe.h"

using namespace std;
using midas::odb;


/*-- Globals -------------------------------------------------------*/
/* The frontend name (client name) as seen by other MIDAS clients   */
const char *frontend_name = "Test FE";
/* The frontend file name, don't change it */
const char *frontend_file_name = __FILE__;

/* frontend_loop is called periodically if this variable is TRUE    */
BOOL frontend_call_loop = FALSE;

/* a frontend status page is displayed with this frequency in ms    */
INT display_period = 0;

/* maximum event size produced by this frontend */
INT max_event_size = 32 * (1024 * 1024); // 32MiB

/* maximum event size for fragmented events (EQ_FRAGMENTED) */
INT max_event_size_frag = 5 * 1024 * 1024;

/* buffer size to hold events */
INT event_buffer_size = 4 * max_event_size;

BOOL equipment_common_overwrite = TRUE; // TRUE overwrites the Common section in the ODB

/*-- Function declarations -----------------------------------------*/
INT read_odb(char * pevent, INT);
void watch(odb o);
/*-- Equipment list ------------------------------------------------*/

EQUIPMENT equipment[] = {

  {
    "Test FE",      /* equipment name */
    {1, 0,          /* event ID, trigger mask */
    "SYSTEM",      /* event buffer */
    EQ_PERIODIC,   /* equipment type */
    0,             /* event source */
    "MIDAS",       /* format */
    TRUE,          /* enabled */
    RO_ALWAYS | RO_ODB,  /* read always and update ODB */
    1000,          /* read every 1 sec */
    0,             /* stop run after this event limit */
    0,             /* number of sub events */
    0,             /* log history every event */
    "", "", ""},
    read_odb,       /* readout routine */
  },

  {""}};

/*-- Dummy routines ------------------------------------------------*/

INT poll_event(INT, INT count, BOOL test)  {
    return 0;
}

INT interrupt_configure(INT, INT, POINTER_T) {
    return 1;
}
/*-- Frontend Init -------------------------------------------------*/

INT frontend_init() {

  cm_msg(MINFO, "frontend_init() setup", "Test FE");

  odb settings = {
    {"Test", 123},
    {"sub", {}}
  };
  settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
  // settings.watch(watch); <-- this works without segmentation fault

  odb new_settings("/Equipment/Test FE/Settings");
  new_settings.watch(watch); // <-- here I am getting a segmentation fault

  return CM_SUCCESS;
}

void watch(odb o) {
    std::string name = o.get_name();

    if (name == "Test") {
      printf("I am a watch on Test\n");
    }
}

/*-- Frontend Exit -------------------------------------------------*/

INT frontend_exit() {
  return CM_SUCCESS;
}

/*-- Frontend Loop -------------------------------------------------*/

INT frontend_loop() {
  return CM_SUCCESS;
}

/*-- Begin of Run --------------------------------------------------*/

INT begin_of_run(INT run_number, char *error) {
  return CM_SUCCESS;
}

/*-- End of Run ----------------------------------------------------*/

INT end_of_run(INT run_number, char *error) {
  return CM_SUCCESS;
}

/*-- Pause Run -----------------------------------------------------*/

INT pause_run(INT run_number, char *error) {
  return CM_SUCCESS;
}

/*-- Resume Run ----------------------------------------------------*/

INT resume_run(INT run_number, char *error) {
  return CM_SUCCESS;
}

/* -- Readout --*/
INT read_odb(char *pevent, INT){

    return 0;

}
  2844 | 13 Sep 2024 | Konstantin Olchanski | Suggestion | Clean up compiler warning in manalyzer
> This is a super small pull request, simple replace deprecated sprintf with snprintf
> https://bitbucket.org/tmidas/manalyzer/pull-requests/9

sprintf() is not deprecated and "char buf[256]; sprintf(buf, "%05d", 64-bit-int);" is safe, will never overflow.
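
for illustration (mine, not from the PR), the difference being debated:

#include <stdio.h>

int main(void)
{
   char buf[8];
   // sprintf writes as many characters as the value needs; it is safe only
   // if the caller has proven that the worst case fits, as argued above
   sprintf(buf, "%05d", 42);                  // "00042", 6 bytes incl. NUL
   // snprintf never writes past sizeof(buf); it truncates instead
   int n = snprintf(buf, sizeof(buf), "%05d", 123456789);
   printf("%s (value needed %d chars)\n", buf, n);   // "1234567 (value needed 9 chars)"
   return 0;
}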

we could bulk-convert all these sprintf() to snprintf() but I would rather wait for this:

https://en.cppreference.com/w/cpp/utility/format/format

let me think on this for a bit.

K.O.