Entry  22 Sep 2024, Tam Kai Chung, Bug Report, Can we convert the .mid file into .root file 
Dear experts, 
I am a new user of MIDAS. I have just created some banks with a frontend.cxx code.
Now, I would like to do some analysis from the data.

I have an analyzer.cxx code (A very simple one without complicated routine).

I link analyzer.o with rmana.o and libmidas.a to create analyzer.exe.

I am not sure whether I can do the analysis offline in the following way:

analyzer.exe -i run00001.mid -o run00001.root

When I run this command, I get the following error:

Error in <TClass::LoadClassInfo>: no interpreter information for class TSocket is available even though it has a TClass initialization routine.

I am using root 6.30

Any suggestion about this issue? Thank you.

Best,
Terry
    Reply  24 Sep 2024, Konstantin Olchanski, Bug Report, Can we convert the .mid file into .root file 
"Can we convert the .mid file into .root file".

yes, you can, but the operation is under-defined. it's like asking "can I convert these stones into houses". the answer is "yes", but it involves 
more than running a universal conversion program.

For this reason, I recommend against converting midas files "to root". for some types of midas data such a conversion makes no sense (e.g. alpha-g 
streamed udp packets with chopped compressed waveforms).

I recommend that you analyze your data in the midas analyzer. You can start with manalyzer_example_root.cxx,
it shows how to create a ROOT histogram, how to access midas event bank data and call the TH1 "Fill" method.

Instead of filling histograms in the analyzer, you can create a ROOT TTree and fill it with data from midas data banks;
effectively you will create your own custom converter from midas to root.

The key thing is that it has to be a custom converter, because only you know the meaning of midas bank data
and how it should be best stored in a root tree.
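
For illustration, here is a minimal sketch of such a custom converter, assuming the midasio reader API
(TMNewReader/TMReadEvent from midasio.h, included with MIDAS) and a hypothetical bank "ADC0" holding
16-bit words; bank names, types and the TTree layout are yours to adapt:

#include "midasio.h"
#include "TFile.h"
#include "TTree.h"
#include <cstdint>
#include <vector>

int main()
{
   TMReaderInterface* reader = TMNewReader("run00001.mid");
   TFile fout("run00001.root", "RECREATE");
   TTree tree("midas", "data from midas banks");
   std::vector<int> adc;
   tree.Branch("adc", &adc);

   while (true) {
      TMEvent* e = TMReadEvent(reader);       // NULL or an event flagged "error" at end of file
      if (!e || e->error) { delete e; break; }
      e->FindAllBanks();                      // parse the bank structure of this event
      TMBank* b = e->FindBank("ADC0");        // hypothetical bank name
      if (b) {
         const uint16_t* p = (const uint16_t*) e->GetBankData(b);
         adc.assign(p, p + b->data_size / sizeof(uint16_t));
         tree.Fill();
      }
      delete e;
   }
   tree.Write();
   return 0;
}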

K.O.
Entry  04 Sep 2024, Stefan Ritt, Info, News MSCB++ API 
I had two free afternoons and took the opportunity to write a new API for the MSCB 
system. I'm not sure if anybody else actually uses MSCB (MIDAS slow control bus), 
but anyhow. 

The new API is contained in a single header file mscbxx.h, and it's extremely 
simple to use. Here is some example code:

#include "mscbxx.h"

...
   // connect to node 10 at submaster mscb123
   midas::mscb m("mscb123", 10);

   // print node info and all variables
   std::cout << m << std::endl;

   // refresh all variables (read from MSCB device)
   m.read_range();
   
   // access individual variables
   float f = m[5];   // index access
   f = m["In0"];     // name access

   // write value to MSCB device
   m["In0"] = 1.234;
...


Any feedback is welcome.

Stefan
    Reply  11 Sep 2024, Konstantin Olchanski, Info, News MSCB++ API 
> Here is some example code:
> 
> #include "mscbxx.h"
>    f = m["In0"];     // name access
>    m["In0"] = 1.234;
> Any feedback is welcome.

Where is the example of error handling?

K.O.
       Reply  24 Sep 2024, Stefan Ritt, Info, News MSCB++ API 
> Where is the example of error handling?

#include "mscbxx.h"
#include "mexcept.h"

...
   try {
   
      // connect to node 10 at submaster mscb123
      midas::mscb m("mscb123", 10);

      // print a variable
      std::cout << m["Input0"] << std::endl;
   
   } catch (const mexception &e) {  // catch by reference to avoid copying the exception
      std::cout << e << std::endl; // simply print exception
   }
...
Entry  16 Sep 2024, Marius Köppel, Bug Report, Crash using ODB watch test_fe.cpp
Hi all,

last week I was running MIDAS with commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached is a minimal example frontend showing the problem.

In our software we have two functions: one sets up the ODB values of the frontend and another sets up all watch functions. So overall we connect to the ODB twice during frontend_init: once to create the values and once to create the watch. The example code shows a simple version of this setup:

INT frontend_init() {

  cm_msg(MINFO, "frontend_init() setup", "Test FE");

  odb settings = {
    {"Test", 123},
    {"sub", {}}
  };
  settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
  // settings.watch(watch); <-- this works without segmentation fault

  odb new_settings("/Equipment/Test FE/Settings");
  new_settings.watch(watch); // <-- here I am getting a segmentation fault

  return CM_SUCCESS;
}

When I directly set the watch, everything runs fine. However, when I create a new ODB object and use it to set the watch, I get the following segmentation fault:

Process 18474 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
    frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
   93  	      if (po->m_data == nullptr)
   94  	         mthrow("Callback received for a midas::odb object which went out of scope");
   95  	      midas::odb *poh = search_hkey(po, hKey);
-> 96  	      poh->m_last_index = index;
   97  	      po->m_watch_callback(*poh);
   98  	      poh->m_last_index = -1;
   99  	   }

Best,
Marius
    Reply  16 Sep 2024, Stefan Ritt, Bug Report, Crash using ODB watch 
The answer is in the error message: "Object went out of scope". When your frontend_init() exits, the odb objects are destroyed. When you get a callback, it is linked to the
destroyed object. This is like having a local string and returning a reference to it from the function.

Use a global object (bad) or use "new" (potential memory leak). I would use a global structure which holds all odb objects.
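
A minimal sketch of that pattern, using the odbxx calls quoted later in this thread (the connect() call
and the frontend_exit() cleanup are assumptions, adapt to your frontend):

#include "odbxx.h"

// global structure holding all odb objects, so they live as long as the frontend
struct {
   midas::odb settings;
} g_odb;

void watch(midas::odb &o) { /* react to changed settings */ }

INT frontend_init()
{
   g_odb.settings.connect("/Equipment/Test FE/Settings");
   g_odb.settings.watch(watch);   // object never goes out of scope
   return CM_SUCCESS;
}

INT frontend_exit()
{
   midas::odb::unwatch_all();     // release the copies kept in the watchlist
   return CM_SUCCESS;
}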

Stefan
 
       Reply  16 Sep 2024, Marius Koeppel, Bug Report, Crash using ODB watch 
This is not the case here. Note that the error message "Callback received for a midas::odb object which went out of scope" is not triggered! The segmentation fault happens later, at line 96.

          Reply  16 Sep 2024, Stefan Ritt, Bug Report, Crash using ODB watch 
Well, the object *went* out of scope. For my code it's hard to detect this, so the error reporting is poor. The first object should have the same
problem; it is just by accident that it does not crash.

Stefan 

             Reply  16 Sep 2024, Marius Koeppel, Bug Report, Crash using ODB watch 
Okay, but this is then a big issue IMO. For Mu3e we do this in every frontend, and I also checked again: all of these watches are broken at the moment (with commit 3ad98c5 they worked).
 
In the old style we did for example (see https://bitbucket.org/tmidas/midas/src/develop/examples/crfe/crfe.cxx):

INT frontend_init()
{
   HNDLE hKey;

   // create Settings structure in ODB
   db_create_record(hDB, 0, "Equipment/Clock Reset/Settings", strcomb1(cr_settings_str).c_str());
   db_find_key(hDB, 0, "/Equipment/Clock Reset", &hKey);
   assert(hKey);

   db_watch(hDB, hKey, cr_settings_changed, NULL);

   /*
    * Set our transition sequence. The default is 500. Setting it
    * to 600 means we are called AFTER most other clients.
    */
   cm_set_transition_sequence(TR_START, 600);

   return CM_SUCCESS;
}

I thought this would be the same (under the hood) in the current odbxx way via:

odb settings("Equipment/Clock Reset/Settings");
settings.watch(cr_settings_changed);

Best,
Marius


                Reply  16 Sep 2024, Mark Grimes, Bug Report, Crash using ODB watch 
Hi,
Maybe I've misunderstood the code, but odb::watch() creates a deep copy of itself to set the watch on. The comment where this happens specifies that this is in case the current one goes out of scope.  See https://bitbucket.org/tmidas/midas/src/2878647fb73648474b35223ce53a125180f751b3/src/odbxx.cxx#lines-1393:1395
So, as far as I can tell, allowing the current odb instance to go out of scope is supported.

Thanks,

Mark.


       Reply  17 Sep 2024, Konstantin Olchanski, Bug Report, Crash using ODB watch 
> {
> odb new_settings("/Equipment/Test FE/Settings");
> new_settings.watch(watch); // <-- here I am getting a segmentation fault
> }

this code has a bug. "watch" is attached to the object "new_settings", which is deleted
at the closing curly bracket.

I would say Stefan's odb API should not allow you to write code like this. An API defect.

K.O.
          Reply  18 Sep 2024, Marius Koeppel, Bug Report, Crash using ODB watch 
I created a PR to fix this issue: https://bitbucket.org/tmidas/midas/pull-requests/42.
The crash happened because, since the change in commit 3ad98c5, the ODB was always fetched via XML.
However, the creation from XML should only be used when a user wants to read fast (and when we are on a remote machine), so I added the flag use_from_xml to explicitly specify this.


> > {
> > odb new_settings("/Equipment/Test FE/Settings");
> > new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > }
> 
> this code has a bug. "watch" is attached to object "new_settings" that is deleted
> after the closing curly bracket.

> I would say Stefan's odb API should not allow you to write code like this. an API defect.

As pointed out in the thread, this feature is explicitly supported by odbxx.cxx:

void odb::watch(std::function<void(midas::odb &)> f) {
      if (m_hKey == 0 || m_hKey == -1)
         mthrow("watch() called for ODB key \"" + m_name +
                "\" which is not connected to ODB");

      // create a deep copy of current object in case it
      // goes out of scope
      midas::odb* ow = new midas::odb(*this);

      ow->m_watch_callback = f;
      db_watch(s_hDB, m_hKey, midas::odb::watch_callback, ow);

      // put object into watchlist
      g_watchlist.push_back(ow);
}

Also in the old way (see for example https://bitbucket.org/tmidas/midas/src/191d13f98626fae533cbca17b00df7ee361edf16/examples/crfe/crfe.cxx#lines-126) it was possible to create a watch in a scope without the user having to ensure that the "object" does not go out of scope.
I think this feature should be supported by the framework.

Best,
Marius
             Reply  20 Sep 2024, Stefan Ritt, Bug Report, Crash using ODB watch 
The problem has been fixed in the current version. Here is my analysis:

- the midas::odb object *can* go out of scope in the function, since the odb::watch() function creates a deep copy of the object. 
This does not cause a memory leak if one calls odb::unwatch_all() at the end of a program.

- The creation from XML had a flaw where the ODB key handle ("hKey") was not initialized, since it is not passed by the db_copy_xml() function.
I added code to db_copy_xml() to also fetch the key handle in the XML file, which now fixes the issue. Please note that you have to
update both the server and the client side of midas to get this functionality if you are using it from a remote client.

- I saw the flag MK added in his pull request to the constructor of odb::odb(). This is a way to fight the symptoms (by creating the
object the "old" way if not otherwise needed), but now we have the cause cured. Nevertheless I added that parameter, but set it to true by default:

   odb::odb(const std::string &str, bool init_via_xml = true);

since this should be fully working now and should always be faster than the old method. I only keep it for debugging, should we observe
another flaw in odb_from_xml(). 
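
For illustration, the two initialization modes of the constructor above would be selected like this (the
ODB path is a placeholder):

   midas::odb settings("/Equipment/Test FE/Settings");         // default: fast init via XML
   midas::odb settings2("/Equipment/Test FE/Settings", false); // "old" key-by-key init, for debugging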

Best regards,
Stefan
Entry  26 Jul 2024, Lukas Gerritzen, Bug Fix, strlcpy and strlcat added to glibc 2.38 
A year ago, these two functions were included in glibc. If trying to compile midas with a recent version of
Ubuntu or Fedora, one gets errors like this:
/usr/include/string.h:506:15: error: declaration of ‘size_t strlcpy(char*, const char*, size_t) noexcept’ has a 
different exception specifier
  506 | extern size_t strlcpy (char *__restrict __dest,
      |               ^~~~~~~
In file included from /home/luk/midas/src/midas.cxx:14:
/home/luk/midas/include/midas.h:2190:17: note: from previous declaration ‘size_t strlcpy(char*, const 
char*, size_t)’

My proposed solution is a check in midas.h around line 248:
#if (__GLIBC__ > 2) || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 38)
#ifndef HAVE_STRLCPY
#define HAVE_STRLCPY 1
#endif
#endif
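
For context, this works because midas.h declares its own strlcpy()/strlcat() only when HAVE_STRLCPY is
undefined, roughly like this (a sketch, not the verbatim header):

#ifndef HAVE_STRLCPY
size_t strlcpy(char *dst, const char *src, size_t size);
size_t strlcat(char *dst, const char *src, size_t size);
#endif

Defining HAVE_STRLCPY on new glibc therefore suppresses the conflicting midas declaration, and the
system version is used instead.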
    Reply  26 Jul 2024, Stefan Ritt, Bug Fix, strlcpy and strlcat added to glibc 2.38 
Good catch. I added your code to the current develop branch of MIDAS.

Stefan
       Reply  13 Sep 2024, Konstantin Olchanski, Bug Fix, mstrcpy, was: strlcpy and strlcat added to glibc 2.38 
for the record, as the ultimate solution, strlcpy() and strlcat() were wholesale 
replaced by mstrlcpy() and mstrlcat(). this should fix the "missing strlcpy()" 
problem for good and make midas more consistent across all platforms (including 
non-linux, non-unix). on my side, I continue replacing these functions with proper 
std::string operations. K.O.
Entry  04 Jul 2024, Nick Hastings, Forum, mfe.cxx with RO_STOPPED and EQ_POLLED 
Dear Midas experts,

I noticed that a check was added to mfe.cxx in 1961af0d6:

+      /* check for consistent common settings */
+      if ((eq_info->read_on & RO_STOPPED) &&
+          (eq_info->eq_type == EQ_POLLED ||
+           eq_info->eq_type == EQ_INTERRUPT ||
+           eq_info->eq_type == EQ_MULTITHREAD ||
+           eq_info->eq_type == EQ_USER)) {
+         cm_msg(MERROR, "register_equipment", "Events \"%s\" cannot be read when run is stopped (RO_STOPPED flag)", equipment[idx].name);
+         return 0;
+      }

This commit was by Stefan in May 2022.

A commit a few days later, 28d9c96bd, removed the "return 0;" and updated the
error message to:

"Equipment \"%s\" contains RO_STOPPED or RO_ALWAYS. This can lead to undesired side-effect and should be removed."

So such FEs can run but there is still an error at startup. The 
documentation at https://daq00.triumf.ca/MidasWiki/index.php/ReadOn_Flags
states that with RO_STOPPED the "Readout Occurs" "Before stopping run",
which seems to indicate that removing the RO_STOPPED bit from a SC FE
would just result in one additional read not happening just prior to a run
stop. However, reading scheduler() in mfe.cxx, I see in the main loop:

 if (run_state == STATE_STOPPED && (eq_info->read_on & RO_STOPPED) == 0) 
    continue;

So it seems to me that an EQ_PERIODIC equipment needs RO_STOPPED to be set,
otherwise it will not read out data while there is no DAQ run.

Can someone explain the purpose of this check and error message? Perhaps it
was put in place with only DAQ FEs, not SC FEs in mind? And should the 
documentation in the wiki actually be "s/Before stopping run/While run is stopped/"?

Thanks,

Nick.
    Reply  06 Aug 2024, Stefan Ritt, Forum, mfe.cxx with RO_STOPPED and EQ_POLLED 
> I noticed that a check was added to mfe.cxx in 1961af0d6:
> 
> +      /* check for consistent common settings */
> +      if ((eq_info->read_on & RO_STOPPED) &&
> +          (eq_info->eq_type == EQ_POLLED ||
> +           eq_info->eq_type == EQ_INTERRUPT ||
> +           eq_info->eq_type == EQ_MULTITHREAD ||
> +           eq_info->eq_type == EQ_USER)) {
> +         cm_msg(MERROR, "register_equipment", "Events \"%s\" cannot be read when run is stopped (RO_STOPPED flag)", equipment[idx].name);
> +         return 0;
> +      }
> 
> 
> Can someone explain the purpose of this check and error message? Perhaps it
> was put in place with only DAQ FEs, not SC FEs in mind? And should the 
> documentation in the wiki actually be "s/Before stopping run/While run is stopped/"?

Indeed you have two types of events handled by mfe.cxx: slow control events (EQ_SLOW or EQ_PERIODIC) and triggered events (EQ_POLLED, 
EQ_INTERRUPT, EQ_MULTITHREAD or EQ_USER). For slow control events it can make sense to read them also when the run is stopped, which is why you 
can specify RO_STOPPED or RO_ALWAYS. This does, however, not make sense for triggered events. Reading triggered events when the run is stopped 
invalidates the concept of runs (= read triggered events only during a run). We had cases where people mixed this up, so the warning was added. 
If you have a slow control event you want to read when the run is stopped, make sure it is of type EQ_SLOW or EQ_PERIODIC.
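
For illustration, a slow control equipment that is also read between runs would be declared along these
lines (a sketch modeled on examples/experiment/frontend.cxx; names and numbers are placeholders):

EQUIPMENT equipment[] = {
   {"SlowControl",            /* equipment name */
    {10, 0,                   /* event ID, trigger mask */
     "SYSTEM",                /* event buffer */
     EQ_PERIODIC,             /* periodic slow control readout */
     0,                       /* event source */
     "MIDAS",                 /* format */
     TRUE,                    /* enabled */
     RO_ALWAYS,               /* read when running, stopped and paused */
     10000,                   /* readout interval 10 s */
     0,                       /* stop run after this event limit */
     0,                       /* number of sub events */
     1,                       /* log history */
     "", "", ""},
    read_sc_event,            /* readout routine */
   },
   {""}
};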

Stefan
       Reply  13 Sep 2024, Konstantin Olchanski, Bug Report, mfe.cxx with RO_STOPPED and EQ_POLLED 
> > I noticed that a check was added to mfe.cxx in 1961af0d6:

This is the reason I recommend against using mfe.c based frontends. There was never any
proper documentation on how they work and what the different settings in ODB common
and elsewhere do. My attempts to document it by reverse-engineering were only partially
successful. Since then a number of changes were made that were also hard-to-impossible
to document.

I recommend that everybody use the new C++ tmfe frontend, which was designed for easy documentation
and explanation. See tmfe.md for full documentation.

(A pending improvement is to integrate TMEvent support and add the data-transmit thread and event fifo.)

K.O.
Entry  13 Sep 2024, Konstantin Olchanski, Bug Fix, rootana bitbucket build fixed 
rootana bitbucket build is fixed, only a few minor build problems. I am using the 
root official docker image (which turned out to not work right out of the box 
because of a missing libvdt-dev package). K.O.
Entry  12 Sep 2024, Konstantin Olchanski, Bug Fix, bitbucket builds repaired 
bitbucket builds work again, also added ubuntu-24 and almalinux-9.

two problems fixed:
- cmake file in examples/experiment was replaced by a non-working version
- unannounced change of strlcpy() to mstrlcpy() broke "make remoteonly"

P.S. I should also fix the rootana and the roody bitbucket builds.

K.O.
Entry  08 Aug 2024, Stefan Ritt, Info, mana.cxx 
We are considering removing the analyzer framework mana.cxx from MIDAS. It 
currently has some compiler warnings and we wonder if we should fix them, which 
would take some time, or just remove the file. We now have the two much more modern 
analyzer frameworks "manalyzer" and "ROOTANA" which should be used instead.

Is anybody still using the mana.cxx framework?

/Stefan
    Reply  23 Aug 2024, Stefan Ritt, Info, mana.cxx 
Ok, no relevant complaints so far, so I removed mana and rmana from the CMake build 
process, but left the file mana.cxx in the repository for educational 
purposes ;-)

Stefan
       Reply  11 Sep 2024, Konstantin Olchanski, Info, mana.cxx 
> Ok, no relevant complains so far, so I removed mana and rmana from the CMake build 
> process, but left the file mana.cxx still in the repository for educational 
> purposes ;-)

+1

K.O.
Entry  25 Aug 2024, Adrian Fisher, Info, Help parsing scdms_v1 data? 
Hi! I'm working on creating a ksy file to help with parsing some data, but I'm having trouble finding some information. Right now I have it set up very rudimentarily: it grabs the event header and then uses the data bank size to grab the size of the data, but then I need additional padding after the data bank to reach the next event.
However, there is some irregularity in the "padding" between data banks that I haven't been able to find any documentation for. For some reason, after the data banks, there are sections of data of either 168 or 192 bytes, and it's seemingly arbitrary which size is used.
I'm just wondering if anyone has any information about this so that I'd be able to make some more progress in parsing the data.
The data I'm working with can be found at https://github.com/det-lab/dataReaderWriter/blob/master/data/07180808_1735_F0001.mid.gz
And the ksy file that I've created so far is at https://github.com/det-lab/dataReaderWriter/blob/master/kaitai/ksy/scdms_v1.ksy

There's also a block of data after the odb that runs for 384 bytes whose purpose I'm unsure of; perhaps someone could point me to some information about that.

Thank you!
    Reply  26 Aug 2024, Stefan Ritt, Info, Help parsing scdms_v1 data? 
The MIDAS event format is described here:

https://daq00.triumf.ca/MidasWiki/index.php/Event_Structure

All banks are aligned on an 8-byte boundary, so that one has efficient 64-bit CPU access.

If you have sections of 168 or 192 bytes, this must be something else, like another bank (scaler event, slow control event, ...).

The easiest for you is to check how these events got created using the bk_create() function.
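
For reference, a bank is typically created in the frontend readout routine like this (a sketch; bank
name, type and data are placeholders):

INT read_trigger_event(char *pevent, INT off)
{
   WORD *pdata;
   bk_init32a(pevent);                           /* BANK32A format: 64-bit aligned banks */
   bk_create(pevent, "ADC0", TID_WORD, (void **)&pdata);
   for (int i = 0; i < 8; i++)
      *pdata++ = (WORD) i;                       /* your hardware data goes here */
   bk_close(pevent, pdata);
   return bk_size(pevent);
}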

Best,
Stefan
       Reply  26 Aug 2024, Adrian Fisher, Info, Help parsing scdms_v1 data? 


Upon further investigation, the sections I'm looking at appear to be clusters of headers for empty banks.

Thank you!
    Reply  11 Sep 2024, Konstantin Olchanski, Info, Help parsing scdms_v1 data? 
Look at the C++ implementation of the MIDAS data file reader; the code is very 
simple to follow.

Depending on how old your data files are, you may run into a problem with 
misaligned 32-bit data banks. Latest MIDAS creates BANK32A events where all 
banks are aligned to 64 bits. The old BANK32 format had banks alternating between 
aligned and misaligned. Old 16-bit BANK format data you hopefully do not have.
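
For the ksy file, the fixed-size headers look like this (a sketch with explicit-width types, following 
the definitions in midas.h; please verify against your midas.h version):

#include <cstdint>

struct EventHeader {       // EVENT_HEADER in midas.h, 16 bytes
   int16_t  event_id;
   int16_t  trigger_mask;
   uint32_t serial_number;
   uint32_t time_stamp;    // UNIX time when the event was created
   uint32_t data_size;     // size of the bank area that follows, in bytes
};

struct BankHeader {        // BANK_HEADER: starts the bank area of each event
   uint32_t data_size;     // size of all banks in bytes
   uint32_t flags;         // bank format flags (32-bit, 64-bit aligned, ...)
};

struct Bank32a {           // BANK32A: one per bank, data follows 64-bit aligned
   char     name[4];
   uint32_t type;          // TID_WORD, TID_DWORD, ...
   uint32_t data_size;
   uint32_t reserved;      // pads the bank header to a multiple of 8 bytes
};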

If you successfully make a data format description file for MIDAS, please post 
it here for the next user.

K.O.



Entry  30 Aug 2024, Marius Koeppel, Suggestion, Improve Event Documentation 
Hi,

I am writing a Rust based midas file reader, however it was kind of hard to understand the full midas file 
structure from the documentation.

Only at the end of the page 
https://daq00.triumf.ca/MidasWiki/index.php/Event_Structure#MIDAS_Format_Event does one find, under the 
headline "tape format", that there are special events which mark the start and the end of the run. It would 
be better to place this information more prominently, maybe with a headline "Special Events". Maybe a link to 
this section at the top of the page could help. Also, the mlogger page has no information about this.

Best,
Marius
    Reply  01 Sep 2024, Stefan Ritt, Suggestion, Improve Event Documentation 
Ben was so kind to update the event documentation:

  https://daq00.triumf.ca/MidasWiki/index.php/Event_Structure

Please have a look and let us know if that's better now.

Best,
Stefan
       Reply  01 Sep 2024, Marius Koeppel, Suggestion, Improve Event Documentation 
Thank you Ben! Now it's super clear!
    Reply  02 Sep 2024, Daniel Duque, Suggestion, Improve Event Documentation 
> I am writing a Rust based midas file reader

You might find this library I wrote useful: https://crates.io/crates/midasio

It should "just work", and if it doesn't, I would be interested to know.
       Reply  02 Sep 2024, Marius Koeppel, Suggestion, Improve Event Documentation 

Nice! I did not know about this. I now also have a simple reader, but yours looks much more advanced. My 
overall idea here is to connect directly to midas, adding some frontend features to analyze the data etc. Do 
you already have a library for this? I can also extend your stuff.

Best,
Marius
          Reply  02 Sep 2024, Daniel Duque, Suggestion, Improve Event Documentation 
> My overall idea here is to connect directly to midas so having some frontend features to analyze the data etc. do 
> you also have already a library for this? I can also extend your stuff.

No, sadly I don't have something like this yet. It has been on my "fun things to do at some point" list for too
long, but I haven't had the time.

If you start working on something like this, please keep me in the loop/link a repo here. I would be interested
in keeping an eye on/contributing to something like this :)
    Reply  11 Sep 2024, Konstantin Olchanski, Suggestion, Improve Event Documentation 
> I am writing a Rust based midas file reader however it was kind of hard to understand the full midas file 
> structure from the documentation.

MIDAS is old-school, when the code was the documentation.

This is very noticeable when you try to document things in MIDAS (as I have done many times).

For MIDAS data format, file level and bank level, best if you look at my midasio library (included with MIDAS 
git clone) and translate it to Rust directly. I think a Rust version of C++ midasio would be very welcome.

Many data fields in MIDAS files are mysterious and I reverse-engineered them as best I could.

The main problems were:
- data padding
- "length" fields include padding or not?
- identification of big-endian vs little-endian data
- probably something I forget

K.O.
Entry  04 Sep 2024, Lukas Gerritzen, Bug Report, Multiple issues with mhist 
Hi,

I am having some trouble with mhist. I suppose that the problems are at least partially due to our specific needs, which might exceed what has been tested. For context, in MEG II we have some 10^4 history variables in ~30 different events.

1. mhist -l crashes. After displaying around 7000 lines, I get the following error message:
[mhist,ERROR] [midas.cxx:5949:bm_validate_client_index,ERROR] My client index 10 in buffer 'SYSMSG' 
is invalid: client name '', pid 0 should be my pid 3773321
[mhist,ERROR] [midas.cxx:5952:bm_validate_client_index,ERROR] Maybe this client was removed by a 
timeout. See midas.log. Cannot continue, aborting...
Aborted (core dumped)
Timing the execution shows around 33 seconds before the process is aborted.

I'm not sure if this would actually fix the problem, but while trying to circumvent the issue, I tried the
following:
mhist -e "Xenon" -l
This doesn't seem to be implemented. Listing only the variables of a single event would be nice
regardless of our specific issue.

2. mhist and history files.
We have a directory with about 2500 history files (mhf_...dat) for the past 1.5 years. Older
history files are archived in other directories with similar numbers of files. When trying to access them, I
encountered two issues:
It seems like it is not possible to pass a "history directory" as an argument. To dump the history for a full
year in the archive directory, I would need to run mhist many times with -f and then combine all the dumps.

If it really does not work, please consider this a feature request.
Also, even using single files does not work at the moment:
$ mhist -e "Xenon" -v "Det XeTmp 0-0" -t 100000 -s 200101 -p 250101 -f 
/data2/history/2022/mhf_1644698398_20220212_xenon.dat
ID 980316009, Aug 13 19:10:56, size 1851749486
This command was supposed to show me the rough time frame covered in this particular history file. I was
informed that the history files are in the new "FILE" format and mhist might not work with them properly.

tl;dr
  • Bug: mhist -l crashes
  • Bug: mhist -f does not work with "FILE" history format
  • Feature request: mhist -e "Name" -l to only show variables of event "Name"
  • Feature request: Set temporary history dir with a flag

Lukas
    Reply  11 Sep 2024, Konstantin Olchanski, Bug Report, Multiple issues with mhist 
I think I can offer some insight into your problems:

1) your mhist crash is due to the ODB timeout. it is probably set to 30 seconds in ODB /Programs/mhist. you will 
have to make it bigger.

2) 1.5 years of files. yes. I have 10 years of files for ALPHA at CERN. and the number of files is a problem. 
But it should be better than the old system with 3 files per day (1000 files per year).

One solution you can try is symlinks. Assuming you have 10 years of history files in 10 per-year directories, you 
symlink as many of them as you need into the "current" directory, then remove the symlinks.

Why remove the symlinks? I use "ls" to read the list of history files and Unix/Linux does not have a syscall to 
"give me the 100 files with the newest mtime". I have to read the whole directory and that takes forever (with ZFS 
on HDD); it is quick with ZFS on SSD if the ZFS cache is hot (you can have a cron job do "ls" every 5 minutes to 
keep the ZFS cache hot).

Now that I wrote the above, I think I see a way to make it "automatic", let me ponder this. (plus I always 
wanted to implement compressed history files (using "free" lz4)).

K.O.



Entry  30 Apr 2024, Luigi Vigani, Bug Report, Params not initialized when starting sequencer midas_sequencer_ok.pngmidas_sequencer_buggy2.png
Good afternoon,

After updating Midas to the latest develop commit 
(0f5436d901a1dfaf6da2b94e2d87f870e3611cf1) we found a bug when starting the 
sequencer. If we have a simple loop from a start value to a stop value with a 
step size, just printing the value at each iteration, everything looks good 
(see first attachment). But when we include another script, which contains 
several subroutines we defined for our detector, and try to run the same 
script, the parameters seem uninitialized and the value at each iteration 
does not make sense (see second attachment). Also, sometimes when pressing 
run the set-parameter window would pop up, but sometimes not.

The script is this one:

>>>
COMMENT Test script to check for a specific bug

INCLUDE global_basic_functions

#CALL setup_paths
#CALL generate_DUT_params

PARAM lv_start, "Start of LV", 1.8
PARAM lv_stop, "Stop of LV", 2.1
PARAM lv_step, "Step of LV", 0.02

n_iterations = (($lv_stop - $lv_start)/$lv_step)

MSG "Parameters:"
MSG $lv_start
MSG $lv_stop
MSG $lv_step
MSG $n_iterations

MSG "Start of looping"

LOOP n, $n_iterations
   lv_now = $lv_start + $n * $lv_step
   MSG $lv_now
   WAIT SECONDS, 1
ENDLOOP
<<<

and the only difference comes from commenting out the line:

>>>
INCLUDE global_basic_functions
<<<

as global_basic_functions is defined as a LIBRARY and it includes 75 (!) 
subroutines...

Is it possible that loading a large script messes up the loading of 
parameters?

Thank you very much,
Regards,
Luigi.
    Reply  03 May 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
Could you please export and send me the /Sequencer ODB tree (or just /Sequencer/Param and /Sequencer/Variables) in both cases while the sequence is running?

thanks,
Zaher


       Reply  03 May 2024, Stefan Ritt, Bug Report, Params not initialized when starting sequencer param_test.mslfunctions.mslSequencer.jsonScreenshot_2024-05-03_at_09.19.29.pngScreenshot_2024-05-03_at_09.20.47.png
Ok, here is the complete code to reproduce the problem. Load param_test.msl, which includes functions.msl. From the screenshot you see the variables containing 
garbage, and you also see that from the ODB screenshot. For completeness, I added Sequencer.json which contains the whole sequencer tree.

The interesting thing is that this works sometimes, and sometimes not. I'm not sure if this is in the GUI or in the sequencer program, so we have to sort out who can 
fix it ;-)

Best,
Stefan
       Reply  03 May 2024, Luigi Vigani, Bug Report, Params not initialized when starting sequencer seq1.PNGseq2.PNGseq3.PNG
It is pretty much the same as for Stefan; I attach the screenshots here. Also in my case it works sometimes, and sometimes partially (one or two params, like in 
attachment 3).

          Reply  03 May 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
I have been able to reproduce the problem only once. From what I see, it seems that the Variables ODB tree is not initialized properly from the Param tree. Below are the messages from the failed run compared to a successful one. As far as I can see, the javascript code does not change anything in the Variables ODB tree (it only monitors it). The actual changes are done by the sequencer program, or am I wrong?

Failed run:
16:14:25.849 2024/05/03 [Sequencer,INFO]  + 3 * 
16:14:24.722 2024/05/03 [Sequencer,INFO]  + 2 * 
16:14:23.594 2024/05/03 [Sequencer,INFO]  + 1 * 
16:14:23.592 2024/05/03 [Sequencer,INFO] Start of looping
16:14:23.591 2024/05/03 [Sequencer,INFO] (( - )/)
16:14:23.591 2024/05/03 [Sequencer,INFO] 
16:14:23.590 2024/05/03 [Sequencer,INFO] 
16:14:23.590 2024/05/03 [Sequencer,INFO] 
16:14:23.589 2024/05/03 [Sequencer,INFO] Parameters:
16:14:23.562 2024/05/03 [Sequencer,TALK] Sequencer started with script "testpars.msl".


Successful run:
16:15:37.472 2024/05/03 [Sequencer,INFO] 1.820000
16:15:37.471 2024/05/03 [Sequencer,INFO] Start of looping
16:15:37.471 2024/05/03 [Sequencer,INFO] 15
16:15:37.470 2024/05/03 [Sequencer,INFO] 0.020000
16:15:37.470 2024/05/03 [Sequencer,INFO] 2.100000
16:15:37.469 2024/05/03 [Sequencer,INFO] 1.800000
16:15:37.469 2024/05/03 [Sequencer,INFO] Parameters:
16:15:37.450 2024/05/03 [Sequencer,TALK] Sequencer started with script "testpars.msl".
             Reply  03 May 2024, Stefan Ritt, Bug Report, Params not initialized when starting sequencer 
Ahh, that rings a bell:

1) JS opens start dialog box
2) User enters parameters and presses start
3) JS writes parameters
4) JS starts sequencer
5) Sequencer copies parameters to variables

Now how do you handle 3) and 4)? Do you just issue two mjsonrpc commands together? What could then happen is that 4) is executed before 3) and we get the garbage.
You have to do 3) and WAIT for the return ("then" in the JS promise), and only then issue 4) from there.

Stefan
                Reply  03 May 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
Thanks for the hint, Stefan. I pushed a possible fix, but I cannot test it since I cannot reproduce the issue.

                   Reply  03 May 2024, Stefan Ritt, Bug Report, Params not initialized when starting sequencer Screenshot_2024-05-03_at_18.19.52.png
It seems to me like the problem happens less frequently, but I still see it (1 out of 5 or so). The fact that /Sequencer/Params/Value is empty tells me that the GUI 
has the problem and not the sequencer side.

Stefan
                      Reply  10 May 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
I think that I finally managed to fix the problem. The default values of the parameters are now written first in one go, then the sequencer waits for confirmation that everything is completed before proceeding. Please test and let me know if there are still any issues.

Zaher
                         Reply  13 May 2024, Luigi Vigani, Bug Report, Params not initialized when starting sequencer 

Zaher Salman wrote:
I think that I finally managed to fix the problem. The default values of the parameters are now written first in one go, then the sequencer waits for confirmation that everything is completed before proceeding. Please test and let me know if there are still any issues.

Zaher


Hi Zaher,

It seems fixed to me as well! Thanks a lot!

Luigi.
                            Reply  21 May 2024, Thomas Senger, Bug Report, Params not initialized when starting sequencer 
Hi all,
On develop, the issue seems to still be there and is not fixed.
The parameters are currently never initialized correctly, only as "empty". I tried several times.
Thomas
                               Reply  21 May 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
I traced the problem to an mjsonrpc_db_ls call where I read /Sequencer/Param... . It seems that this sometimes returns status 312 (DB_NO_KEY) although I am sure all the keys are there in the ODB.
I am still trying to solve this, but I may need some help with the mjsonrpc.cxx code.

Zaher


Thomas Senger wrote:
Hi all,
On develop, the issue seems to still be there and is not fixed.
The parameters are currently never initialized correctly, only as "empty". I tried several times.
Thomas
                                  Reply  21 May 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
Hi Thomas,
I have a fix for the issue and I would be happy to have testers, if you are willing. Simply "git checkout newfeature_ZS" and give it a go. No need to recompile anything.

A change in /Sequencer/Param triggers a save of the values, which are then used to produce the parameter dialog. This allows us to bypass the slow response of the mjsonrpc calls issued just before the dialog.

Zaher


Thomas Senger wrote:
Hi all,
On develop, the issue seems to still be there and is not fixed.
The parameters are currently never initialized correctly, only as "empty". I tried several times.
Thomas
                                     Reply  22 May 2024, Thomas Senger, Bug Report, Params not initialized when starting sequencer 
Hi Zaher,
thanks for your help.
I just tried the bug fix, but it still does not seem to work properly.
It seems that if the script is short, it will work, but if many SUBROUTINES are integrated, it does not work and the parameters are initialized empty.
Best regards,
Thomas
                                        Reply  30 Aug 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
The issue with the parameters should be fixed now. Please test and let me know if it still happens.


Thomas Senger wrote:
Hi Zaher,
thanks for your help.
I just tried the bug fix, but it still does not seem to work properly.
It seems that if the script is short, it will work, but if many SUBROUTINES are integrated, it does not work and the parameters are initialized empty.
Best regards,
Thomas
                                           Reply  04 Sep 2024, Lukas Gerritzen, Bug Report, Params not initialized when starting sequencer 
I think I have had similar issues in a custom page, where I wrote values to the ODB and they were not ready when I needed them. If you found a fix to such race conditions, could you maybe share how to properly treat this issue? If the solution reliably works, we could also consider including it in the documentation (midaswiki or example.html).


Zaher Salman wrote:
The issue with the parameters should be fixed now. Please test and let me know if it still happens.
                                              Reply  04 Sep 2024, Zaher Salman, Bug Report, Params not initialized when starting sequencer 
The problem here was that the JS code did not wait for msequencer to finish preparing "/Sequencer/Param" in the ODB, so I had to change the code to wait for "/Sequencer/Command/Load new file" to become false before proceeding.

As for your problem, I recommend that you handle it in the following way:

mjsonrpc_db_paste(paths, values).then(function (rpc) {
   if (rpc.result.status.every(status => status === 1)) {
      // do something
   } else {
      // failed to set values, do something else
   }
}).catch(function (error) {
   console.error(error);
});

Alternatively (for a single ODB value) you can use the checkODBValue() function from sequencer.js. This function monitors a specific ODB path until it reaches a specific value and then calls funcCall with args.

var NcheckValue = 0;
// Wait for the ODB in path to reach value
// If the value is not reached, give up after 10 s
function checkODBValue(path, value, funcCall, args) {
   /* Arguments:
      path     - ODB path to monitor for value
      value    - the value to be reached to report success
      funcCall - function to call when value is reached
      args     - argument to pass to funcCall
   */
   // Call the mjsonrpc_db_get_values function
   mjsonrpc_db_get_values([path]).then(function(rpc) {
      if (rpc.result.status[0] === 1 && rpc.result.data[0] !== value) {
         console.log("Value not reached yet", NcheckValue);
         NcheckValue++;
         if (NcheckValue < 100) {
            // Wait 0.1 second and then call checkODBValue again
            // Time out after 10 s (100 tries)
            setTimeout(() => {
               checkODBValue(path, value, funcCall, args);
            }, 100);
         }
      } else {
         if (funcCall) funcCall(args);
         console.log("Value reached, proceeding...");
         // reset counter
         NcheckValue = 0;
      }
   }).catch(function(error) {
      console.error(error);
   });
}
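
For example, a hypothetical call that mirrors the sequencer fix described above (doStart stands for your own callback):

checkODBValue("/Sequencer/Command/Load new file", false, doStart, null);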



Lukas Gerritzen wrote:
I think I have had similar issues in a custom page, where I wrote values to the ODB and they were not ready when I needed them. If you found a fix to such race conditions, could you maybe share how to properly treat this issue? If the solution reliably works, we could also consider including it in the documentation (midaswiki or example.html).


Zaher Salman wrote:
The issue with the parameters should be fixed now. Please test and let me know if it still happens.
Entry  19 Aug 2024, Konstantin Olchanski, Release, kernel-module-universe updated to -KO7 
The linux kernel driver for the Universe-II VME to PCI bridge is updated to 
version -KO7. It now builds and runs with Debian-12 stock kernel 6.1.0-22-686.

I pxe boot (isolinux/pxelinux) the linux kernel and NFS-mount the stock 32-bit 
Debian-12 userland. Userland tarball is available by request. PXE and NFS-Root 
configuration is written up on the wiki at daq.triumf.ca, example config files 
are available on request.

https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu#setup_diskless_network_booting
https://daq00.triumf.ca/DaqWiki/index.php/VME-CPU

The Debian-11 kernel also works (use the -KO6 driver if -KO7 bombs), but the Debian-11 
kernel with Debian-12 userland and an Ubuntu-22 NFS server fails with "file too 
big" errors; as best I can tell, this has to do with old 32-bit kernels getting 
unhappy about 64-bit NFS inode numbers.

Cross-compilation from 64-bit Ubuntu-22 to 32-bit VME processors running 32-bit 
Debian-12 is written up here:
https://daq00.triumf.ca/DaqWiki/index.php/Ubuntu#32-bit_intel_cross-compiler

To cross-build 32-bit MIDAS for a 32-bit VME processor use "make linux32", or build 
natively (pretty slow on a 1 GHz Pentium-III).

K.O.
    Reply  19 Aug 2024, Konstantin Olchanski, Release, kernel-module-universe updated to -KO7 
> The linux kernel driver for the Universe-II VME to PCI bridge is updated to 
> version -KO7. It now builds and runs with Debian-12 stock kernel 6.1.0-22-686.

I have a report that this driver might work on 64-bit VME CPUs (minus a bug in the 
MIDAS VME library). I do not have such hardware, cannot test, cannot confirm. (All our 
64-bit VME CPUs have the tsi148 bridge and run Ubuntu kernels and userland).

https://daq00.triumf.ca/elog-midas/Midas/2566

K.O.
       Reply  19 Aug 2024, Konstantin Olchanski, Release, kernel-module-universe updated to -KO7 
> > The linux kernel driver for the Universe-II VME to PCI bridge is updated to 
> > version -KO7. It now builds and runs with Debian-12 stock kernel 6.1.0-22-686.

Ahem, and the location is:

git clone https://daq00.triumf.ca/~olchansk/git/kernel-module-universe.git

K.O.
Entry  07 Aug 2024, Lukas Gerritzen, Bug Report, File name bug in csv export 
When I export data from a history plot, I get nonsensical filenames. For example, for data from today, I got "Xenon-Vacuum-20247107-152815-20247107-160032.csv".
The month shouldn't be 71 but rather 08. The problem is that in the code it's generated as
("0" + leftDate.getUTCMonth() + 1).slice(-2)
The first '+' is a string concatenation, and so is the second; the second should be an arithmetic addition, though. A possible fix is to add parentheses around the addition:
("0" + (leftDate.getUTCMonth() + 1)).slice(-2)
    Reply  07 Aug 2024, Stefan Ritt, Bug Report, File name bug in csv export 
Thanks. Fixed. Committed. Pulled on megon02.

Stefan
       Reply  07 Aug 2024, Lukas Gerritzen, Bug Report, File name bug in csv export 
Thanks. I think mplot.js:1844 should be changed as well, but I haven't tried it with mplot.
          Reply  07 Aug 2024, Stefan Ritt, Bug Report, File name bug in csv export 
Fixed as well.
Entry  04 Jul 2024, Pavel Murat, Suggestion, cmake-installing more files ? midas-spack.patch
Dear all, 

this posting results from the Fermilab move to a new packaging/build system called spack, 
which doesn't allow using the MIDAS install procedure described at

https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux#MIDAS_Package_Installation

as is. Spack specifics aside, building MIDAS under spack took 
a) adding cmake install rules for three directories: drivers, resources, and python/midas, and 
b) adding one more include file - include/tinyexpr.h - to the list of includes installed by cmake.

With those changes I was able to point MIDASSYS to the spack install area and successfully run mhttpd, 
build experiment-specific C++ frontends and drivers, use experiment-specific python frontends etc. 
I'm not using anything from MIDAS submodules though.

I'm wondering what the experts would think about accepting the changes above to the main tree. 

Installation procedures and changes to cmake files are always a sensitive area with a lot of boundary 
constraints coming from the existing use patterns, and even a minor change could have unexpected consequences,
so I wouldn't be surprised if the fairly minor changes outlined above had side effects.

The patch file is attached for consideration.

-- regards, Pasha
    Reply  06 Aug 2024, Stefan Ritt, Suggestion, cmake-installing more files ? 
I don't see any bad side effects at the moment, so I accepted the changes and committed them.

Stefan
Entry  31 Jul 2024, Lukas Gerritzen, Bug Report, New history plots: Zooming in on logarithmic y axis does not work as expected 
Using the mouse to click and drag on a logarithmic y axis triggers a zooming behaviour as if the user zoomed in on a linear axis. 

How to reproduce:
Take a plot that ranges from 1e-20 to 100, for example. Click around the middle of the axis and drag the mouse up to about 3/4.

Expected result:
Limit the y axis to the approximate range 1e-10 to 1e-4

Actual result:
The y axis limits are around 50 and 75.
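
For reference, a minimal sketch (mine, not the actual mhistory.js code) of mapping a drag selection to new limits by interpolating in log space:

function logZoom(ymin, ymax, f1, f2) {
   // f1, f2: drag start/end as fractions of the axis height (0 = bottom)
   const lmin = Math.log10(ymin);
   const lmax = Math.log10(ymax);
   return {
      ymin: Math.pow(10, lmin + f1 * (lmax - lmin)),
      ymax: Math.pow(10, lmin + f2 * (lmax - lmin))
   };
}
// logZoom(1e-20, 100, 0.5, 0.75) gives roughly 1e-9 .. 3e-4,
// close to the expected result above.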


P.S. Is there a way to configure the history plot in a way that values of 0.00 are ignored rather than showing up as 1e-20?
    Reply  31 Jul 2024, Stefan Ritt, Bug Report, New history plots: Zooming in on logarithmic y axis does not work as expected 
I fixed that and committed the change to megon02, just reload your browser. I also set ymin and ymax of the Vacuum plot to meaningful 
values (not to zero!).

Stefan
Entry  03 Jul 2024, Tam Kai Chung, Bug Report, Fail to build in the examples/experiment 
Dear experts,
I am a new user of MIDAS. I tried to follow the instructions from 
https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux
to install MIDAS on Fedora 39.

When I try the section "Clients run on Localhost only"
https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux#Clients_run_on_Localhost_only

I get "undefined reference to" errors for several variables in mfe.cxx, for example the variable "max_event_size_frag". Do you have any idea about this issue? Thank you.


Best,
Terry
    Reply  04 Jul 2024, Nick Hastings, Bug Report, Fail to build in the examples/experiment 
I think this may only be an issue on the development branch.
Can you confirm that that is what you are using?

If so, I suggest you try the most recent stable tag 2022-05-c.

> Dear experts,
> I am a new user of MIDAS. I tried to follow the instructions from 
> https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux
> to install MIDAS on Fedora 39.
> 
> When I try the section "Clients run on Localhost only"
> https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux#Clients_run_on_Localhost_only
> 
> I get "undefined reference to" errors for several variables in mfe.cxx, for example the variable "max_event_size_frag". Do you have any idea about this issue? Thank you.
> 
> 
> Best,
> Terry
       Reply  05 Jul 2024, Tam Kai Chung, Bug Report, Fail to build in the examples/experiment 
Hello Nick,
I am using the latest version: midas-2022-05-c-1284-g4a77127b.

Here are some examples of the errors:
/usr/bin/ld: /packages/midas/lib/mfe.o: in function
/usr/bin/ld: /packages/midas/src/mfe.cxx:2537: undefined reference to `event_buffer_size'
...
There are also several other undefined references. Any idea about it? Thank you.

Best,
Terry


> I think this may only be an issue on the development branch.
> Can you confirm that that is what you are using?
> 
> If so, I suggest you try the most recent stable tag 2022-05-c.
> 
> > Dear experts,
> > I am a new user of MIDAS. I tried to follow the instructions from 
> > https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux
> > to install MIDAS on Fedora 39.
> > 
> > When I try the section "Clients run on Localhost only"
> > https://daq00.triumf.ca/MidasWiki/index.php/Quickstart_Linux#Clients_run_on_Localhost_only
> > 
> > I get "undefined reference to" errors for several variables in mfe.cxx, for example the variable "max_event_size_frag". Do you have any idea about this issue? Thank you.
> > 
> > 
> > Best,
> > Terry
Entry  04 Apr 2024, Konstantin Olchanski, Info, MIDAS RPC data format 
I am not sure I have seen this documented before. MIDAS RPC data format.

1) RPC request (from client to mserver), in rpc_call_encode()

1.1) header:

4 bytes NET_COMMAND.header.routine_id is the RPC routine ID
4 bytes NET_COMMAND.header.param_size is the size of following data, aligned to 8 bytes

1.2) followed by values of RPC_IN parameters:

arg_size is the actual data size
param_size = ALIGN8(arg_size)

for TID_STRING||TID_LINK, arg_size = 1+strlen()
for TID_STRUCT||RPC_FIXARRAY, arg_size is taken from RPC_LIST.param[i].n
for RPC_VARARRAY|RPC_OUT, arg_size is pointed to by the next argument
for RPC_VARARRAY, arg_size is the value of the next argument
otherwise arg_size = rpc_tid_size()

data encoding:

RPC_VARARRAY:
4 bytes of ALIGN8(arg_size)
4 bytes of padding
param_size bytes of data

TID_STRING||TID_LINK:
param_size of string data, zero terminated

otherwise:
param_size of data
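
A worked example (mine, following the rules above): sending the string "run00001" as TID_STRING|RPC_IN gives arg_size = 1 + strlen("run00001") = 9 and param_size = ALIGN8(9) = 16, so 16 bytes (the string, its zero terminator and 7 bytes of padding) are placed in the buffer and counted in NET_COMMAND.header.param_size.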

2) RPC dispatch in rpc_execute

for each parameter, a pointer is placed into prpc_param[i]:

RPC_IN: points to the data inside the receive buffer
RPC_OUT: points to the data buffer allocated inside the send buffer
RPC_IN|RPC_OUT: data is copied from the receive buffer to the send buffer, prpc_param[i] is a pointer to the copy in the send buffer

prpc_param[] is passed to the user handler function.

user function reads RPC_IN parameters by using the CSTRING(i), etc. macros to dereference prpc_param[i]

user function modifies RPC_IN|RPC_OUT parameters pointed to by prpc_param[i] (directly in the send buffer)

user function places RPC_OUT data directly to the send buffer pointed to by prpc_param[i]

size of RPC_VARARRAY|RPC_OUT data should be written into the next/following parameter.

3) RPC reply

3.1) header:

4 bytes NET_COMMAND.header.routine_id contains the value returned by the user function (RPC_SUCCESS)
4 bytes NET_COMMAND.header.param_size is the size of following data aligned to 8 bytes

3.2) followed by data for RPC_OUT parameters:

data sizes and encodings are the same as for RPC_IN parameters.

for variable-size RPC_OUT parameters, space is allocated in the send buffer according to the maximum data size
that the user code expects to receive:

RPC_VARARRAY||TID_STRING: max_size is taken from the first 4 bytes of the *next* parameter
otherwise: max_size is same as arg_size and param_size.

when encoding and sending RPC_VARARRAY data, actual data size is taken from the next parameter, which is expected to be 
TID_INT32|RPC_IN|RPC_OUT.

4) Notes:

4.1) RPC_VARARRAY should always be sent using two parameters:

a) RPC_VARARRAY|RPC_IN is a pointer to the data we are sending, the next parameter must be TID_INT32|RPC_IN with the data size
b) RPC_VARARRAY|RPC_OUT is a pointer to the data buffer for received data, the next parameter must be TID_INT32|RPC_IN|RPC_OUT; before the call it should 
contain the maximum data size we expect to receive (size of the malloc() buffer), after the call it may contain the actual data size returned
c) RPC_VARARRAY|RPC_IN|RPC_OUT is a pointer to the data we are sending, the next parameter must be TID_INT32|RPC_IN|RPC_OUT containing the maximum 
data size we expect to receive.

4.2) during dispatching, RPC_VARARRAY|RPC_OUT and TID_STRING|RPC_OUT both have 8 bytes of special header preceding the actual data, 4 bytes of 
maximum data size and 4 bytes of padding. prpc_param[] points to the actual data and the user does not see this special header.

4.3) when encoding outgoing data, this special 8 byte header is removed from TID_STRING|RPC_OUT parameters using memmove().

4.4) TID_STRING parameters:

TID_STRING|RPC_IN can be sent using one parameter
TID_STRING|RPC_OUT must be sent using two parameters, the second parameter should be TID_INT32|RPC_IN to specify the maximum returned string length
TID_STRING|RPC_IN|RPC_OUT ditto, but not used anywhere inside MIDAS

4.5) TID_INT32|RPC_VARARRAY does not work, corrupts following parameters. MIDAS only uses TID_ARRAY|RPC_VARARRAY

4.6) TID_STRING|RPC_IN|RPC_OUT does not seem to work.

4.7) RPC_VARARRAY does not work if there is a preceding TID_STRING|RPC_OUT that returned a short string. memmove() moves stuff in the send buffer, 
this makes prpc_param[] pointers into the send buffer invalid. the subsequent RPC_VARARRAY parameter refers to the now-invalid prpc_param[i] pointer to 
get param_size and gets the wrong value. MIDAS does not use this sequence of RPC parameters.

4.8) same bug is in the processing of TID_STRING|RPC_OUT parameters, where it refers to invalid prpc_param[i] to get the string length.

K.O.
    Reply  24 Apr 2024, Konstantin Olchanski, Info, MIDAS RPC data format 
> 4.5) TID_INT32|RPC_VARARRAY does not work, corrupts following parameters. MIDAS only uses TID_ARRAY|RPC_VARARRAY

fixed in commit 0f5436d901a1dfaf6da2b94e2d87f870e3611cf1, TID_ARRAY|RPC_VARARRAY was okay (i.e. db_get_value()), the bug happened only if rpc_tid_size() 
is not zero.

> 
> 4.6) TID_STRING|RPC_IN|RPC_OUT does not seem to work.
> 
> 4.7) RPC_VARARRAY does not work if there is a preceding TID_STRING|RPC_OUT that returned a short string. memmove() moves stuff in the send buffer, 
> this makes prpc_param[] pointers into the send buffer invalid. the subsequent RPC_VARARRAY parameter refers to the now-invalid prpc_param[i] pointer to 
> get param_size and gets the wrong value. MIDAS does not use this sequence of RPC parameters.
> 
> 4.8) same bug is in the processing of TID_STRING|RPC_OUT parameters, where it refers to invalid prpc_param[i] to get the string length.

fixed in commits e45de5a8fa81c75e826a6a940f053c0794c962f5 and dc08fe8425c7d7bfea32540592b2c3aec5bead9f

K.O.
    Reply  02 Jun 2024, Konstantin Olchanski, Info, MIDAS RPC data format 
> MIDAS RPC data format.
> 3) RPC reply
> 3.1) header:
> 3.2) followed by data for RPC_OUT parameters:
> 
> data sizes and encodings are the same as for RPC_IN parameters.

Correction:

RPC_VARARRAY data encoding for data returned by RPC is different from data sent to RPC:

4 bytes of arg_size (before 8-byte alignment), (for data sent to RPC, it's 4 bytes of param_size, after 8-byte alignment)
4 bytes of padding
param_size of data

K.O.

P.S. bug/discrepancy caught by GCC/LLVM address sanitizer.
Entry  24 Apr 2024, Konstantin Olchanski, Info, MIDAS RPC add support for std::string and std::vector<char> 
I now fully understand the MIDAS RPC code, had to add some debugging printfs, 
write some test code (odbedit test_rpc), catch and fix a few bugs.

Fixes for the bugs are now committed.

A small refactor of rpc_execute() should be committed soon; this removes the 
"goto" in the memory allocation of the output buffer. Stefan's original code used a 
fixed-size buffer, I later added allocation "as-needed" but did not fully 
understand everything and implemented it as "if buffer too small, make it 
bigger, goto start over again".

After that, I can implement support for std::string and std::vector<char>.

The way it looks right now, the on-the-wire data format is flexible enough to 
make this change backward-compatible and allow MIDAS programs built with old 
MIDAS to continue connecting to the new MIDAS and vice-versa.

MIDAS RPC support for std::string should let us improve security by removing 
even more uses of fixed-size string buffers.

Support for std::vector<char> will allow removal of last places where 
MAX_EVENT_SIZE is used and simplify memory allocation in other "give me data" 
RPC calls, like RPC_JRPC and RPC_BRPC.

K.O.
    Reply  29 May 2024, Konstantin Olchanski, Info, MIDAS RPC add support for std::string and std::vector<char> 
This is moving slowly. I now have RPC caller side support for std::string and 
std::vector<char>. RPC server side is next. K.O.
Entry  24 May 2024, Konstantin Olchanski, Info, added ubuntu 22 arm64 cross-compilation 
Ubuntu 22 has almost everything necessary to cross-build arm64 MIDAS frontends:

# apt install g++-12-aarch64-linux-gnu gcc-12-aarch64-linux-gnu-base libstdc++-12-dev-arm64-cross
$ aarch64-linux-gnu-gcc-12 -o ttcp.aarch64 ttcp.c -static

to cross-build MIDAS:

make arm64_remoteonly -j

run programs from $MIDASSYS/linux-arm64-remoteonly/bin
link frontends to libraries in $MIDASSYS/linux-arm64-remoteonly/lib

Ubuntu 22 does not provide an arm64 libz.a; as a workaround, I build a fake one (we do not have HAVE_ZLIB anymore...). Or you 
can link to the libz.a from your arm64 linux image, assuming include/zlib.h is compatible.

K.O.