ID   Date   Author   Topic   Subject
  435   18 Feb 2008   Exaos Lee   Bug Report   Great! But I failed to run it. :(
I encountered the following error message:
Traceback (most recent call last):
  File "runtest.py", line 42, in <module>
    import midas
  File "/opt/MIDAS.PSI/Resources/PyMIDAS/pymidas/midas/__init__.py", line 140, in <module>
    cmidas = ctypes.cdll.LoadLibrary('libmidas.so')
  File "/usr/lib/python2.5/ctypes/__init__.py", line 431, in LoadLibrary
    
  File "/usr/lib/python2.5/ctypes/__init__.py", line 348, in __init__
    
OSError: /opt/MIDAS.PSI/Versions/Current/lib/libmidas.so: undefined symbol:
cam16i_rq

Compiling the MIDAS library with NEED_SHLIB=1 gives the same "undefined
reference" error at link time. That can be worked around by adding "-shared" to CFLAGS in the
Makefile; libmidas.so is then created successfully, but the load error above is still there.
Can anybody help me?

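A minimal way to reproduce the failure outside Python (a diagnostic sketch, not part of PyMIDAS or
MIDAS): dlopen() with RTLD_NOW forces all symbols to resolve at load time, so it reports the same
"undefined symbol" error that ctypes surfaces.

// check_libmidas.cxx -- diagnostic sketch, not part of PyMIDAS or MIDAS.
// Build with: g++ check_libmidas.cxx -ldl -o check_libmidas
#include <dlfcn.h>
#include <cstdio>

int main()
{
   // RTLD_NOW resolves every symbol at load time, surfacing errors
   // such as "undefined symbol: cam16i_rq" immediately
   void *handle = dlopen("libmidas.so", RTLD_NOW);
   if (!handle) {
      std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
      return 1;
   }
   std::puts("libmidas.so loaded cleanly");
   dlclose(handle);
   return 0;
}
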
Environment:
Platform: Ubuntu Linux 7.10 with gcc 4.1
MIDAS version: 2.0.0, svn-4106
Python version: 2.5.1
  1574   27 Jun 2019   Hassan   Bug Report   Getting an error when trying to compile a frontend file
When we run the following commands on the host machine (DAQ machine) and the remote
frontend (RPi):
cd $HOME/online
cp $MIDASSYS/examples/experiment/* .
make

We get errors such as
=================
On Rpi:
pi@raspberrypi:~/online/fe_test $ make
...
Missing definition of environment variable 'ROOTSYS' !

=================
On host machine
Linking CXX executable frontend
/usr/bin/ld: cannot find -lmfe
/usr/bin/ld: cannot find -lmidas
collect2: error: ld returned 1 exit status
make[2]: *** [frontend] Error 1
make[1]: *** [CMakeFiles/frontend.dir/all] Error 2
make: *** [all] Error 2


The RPi (32-bit) doesn't have ROOT installed but the host machine (64-bit) does.
What can we do to fix this?

Thank you, this forum has been of great help.
  1576   27 Jun 2019   Konstantin Olchanski   Bug Report   Getting an error when trying to compile a frontend file
If the latest midas does not work, try the previous release versions. "git tag" and "git branch
-a" will show you what exists. Look for branch and tag names of the form "midas-YYYY-MM".

As a shortcut, the latest release candidate is midas-2019-06, the latest release branch is
midas-2019-03, and the latest release tag is midas-2019-03-h.

Read the messages in this thread for more information:
https://midas.triumf.ca/elog/Midas/1513

>
> When we run the following commands ...
> make[1]: *** [CMakeFiles/frontend.dir/all] Error 2
>

I do not understand cmake well enough to debug this. Falling back to midas-2019-03 may help
you, as it uses normal make, and with luck you know how to debug normal Makefiles if you see
the same problem.

K.O.
  1579   27 Jun 2019   Stefan Ritt   Bug Report   Getting an error when trying to compile a frontend file
Note that the example experiment compiles a simple example frontend and a ROOT-based analyzer. If you don't have 
ROOT installed, you of course cannot compile the analyzer. If you don't need the analyzer, remove it from the 
Makefile/CMakeLists.txt.

It's not clear to me why the frontend did not compile on your server machine. You did not post the command with 
which you initiated the build. Note that there are now two parallel build schemes: the traditional Makefile and the new 
CMakeLists.txt. We try to maintain both of them, so you have to specify which one you use when you get an error.

I realize now that the CMakeLists.txt in the experiment example directory builds nicely under midas, but when you move 
it to another directory and extract it from the normal build scheme, it breaks. I rewrote the CMakeLists.txt so that it 
looks for MIDASSYS and also builds at different locations. Do

cd $HOME/online
cp $MIDASSYS/examples/experiment/* .
mkdir build
cd build
cmake ..
make

and it should work. Of course first pull the current develop version.

Stefan
  3074   17 Sep 2025   Mark Grimes   Suggestion   Get manalyzer to configure midas::odb when running offline
Hi,
Lots of users like the midas::odb interface for reading from the ODB in manalyzers. However, it currently 
doesn't work offline without a few manual lines telling midas::odb to read from the ODB copy in the run 
header. The code to work out the current filename and get midas::odb to reopen the file being processed 
also gets a bit messy. This would be much cleaner if manalyzer set this up automatically; user code could 
then be written that is completely ignorant of whether it is running online or offline.

The change I suggest is in the `set_offline_odb` branch, commit 4ffbda6, which is simply:

diff --git a/manalyzer.cxx b/manalyzer.cxx
index 371f135..725e1d2 100644
--- a/manalyzer.cxx
+++ b/manalyzer.cxx
@@ -15,6 +15,7 @@
 
 #include "manalyzer.h"
 #include "midasio.h"
+#include "odbxx.h"
 
 //////////////////////////////////////////////////////////
 
@@ -2075,6 +2076,8 @@ static int ProcessMidasFiles(const std::vector<std::string>& files, const std::v
                if (!run.fRunInfo) {
                   run.CreateRun(runno, filename.c_str());
                   run.fRunInfo->fOdb = MakeFileDumpOdb(event->GetEventData(), event->data_size);
+                  // Also set the source for midas::odb in case people prefer that interface
+                  midas::odb::set_odb_source(midas::odb::STRING, std::string(event->GetEventData(), event->data_size));
                   run.BeginRun();
                }


It happens at the point where the ODB record is already available, and it requires no effort from the user 
to be able to read the ODB offline.

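For illustration, user code in a module can then look like this (a sketch only; the ODB path and the 
module skeleton are examples, not part of the patch):

// Sketch of user module code once this patch is in: the same midas::odb
// lookup works online and offline, because manalyzer has already pointed
// midas::odb at the run-header ODB dump. Path and names are examples.
#include <cstdio>
#include <string>
#include "manalyzer.h"
#include "odbxx.h"

class ExampleModule: public TARunObject {
public:
   ExampleModule(TARunInfo* runinfo) : TARunObject(runinfo) {}

   void BeginRun(TARunInfo* runinfo) override
   {
      midas::odb exp("/Experiment");    // live ODB online, run-header dump offline
      std::string name = exp["Name"];   // identical value access either way
      std::printf("analyzing experiment \"%s\"\n", name.c_str());
   }
};
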
Thanks,

Mark.
  3075   17 Sep 2025   Konstantin Olchanski   Suggestion   Get manalyzer to configure midas::odb when running offline
> Lots of users like the midas::odb interface for reading from the ODB in manalyzers.
> +#include "odbxx.h"

This is a useful improvement. Before committing this patch, can you confirm that the RunInfo destructor
deletes this ODB stuff from odbxx? manalyzer takes object lifetimes very seriously.

There is also the issue that two different RunInfo objects would load two different ODB dumps
into odbxx (the inability to access more than one ODB dump at a time is a design feature of odbxx).

This is not an actual problem in manalyzer, because it only processes one run at a time
and only 1 or 0 RunInfo objects exist at any given time.

Of course, with this patch, extending manalyzer to process two or more runs at the same time becomes impossible.

K.O.
  3077   18 Sep 2025   Mark Grimes   Suggestion   Get manalyzer to configure midas::odb when running offline
> ....Before committing this patch, can you confirm that the RunInfo destructor
> deletes this ODB stuff from odbxx? manalyzer takes object lifetimes very seriously.

The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another 
call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.

> Of course, with this patch, extending manalyzer to process two or more runs at the same time becomes impossible.

Yes, I hadn't realised that was an option. For that to work I guess the aforementioned static members could be made thread local storage, and 
processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a 
class member or something.

Note that I missed doing the same for the end of run event, which should probably also be added.

Thanks,

Mark.
  3078   18 Sep 2025   Stefan Ritt   Suggestion   Get manalyzer to configure midas::odb when running offline
> > Of course, with this patch, extending manalyzer to process two or more runs at the same time becomes impossible.
> 
> Yes, I hadn't realised that was an option. For that to work I guess the aforementioned static members could be made thread local storage, and 
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a 
> class member or something.

If we want to analyze several runs, I can easily add code to make this possible. In a new call to set_odb_source(), the previously allocated memory in that function can be freed. We can also make the memory handling 
thread-specific, allowing several threads to analyze different runs at the same time. But I will only invest work there once it's really needed by someone.

Stefan
  3081   22 Sep 2025   Stefan Ritt   Suggestion   Get manalyzer to configure midas::odb when running offline
I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change to manalyzer.

Best,
Stefan
  3082   22 Sep 2025   Konstantin Olchanski   Suggestion   Get manalyzer to configure midas::odb when running offline
> I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change to manalyzer

That, and add a "clear()" method that resets the odbxx state to "empty". I will call odbxx.clear() everywhere I call "delete fOdb;" (TARunInfo::dtor and other places).

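A possible shape for that reset, with all names here illustrative (this sketches the proposal, not the current odbxx code):

// Sketch of the proposed reset (hypothetical, simplified class; not the
// actual odbxx implementation). set_odb_source() only remembers where
// future objects read from, so clear() just forgets that source.
#include <string>

class odb_demo {
public:
   static void set_odb_source(const std::string& dump) { m_dump = dump; }
   static void clear() { m_dump.clear(); }   // proposed: reset state to "empty"
private:
   static std::string m_dump;   // remembered source for future objects
};
std::string odb_demo::m_dump;
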
K.O.
  3083   22 Sep 2025   Konstantin Olchanski   Suggestion   Get manalyzer to configure midas::odb when running offline
> > ....Before committing this patch, can you confirm that the RunInfo destructor
> > deletes this ODB stuff from odbxx? manalyzer takes object lifetimes very seriously.
> 
> The call stores the ODB string in static members of the midas::odb class. So these will have a lifetime of the process or until they're replaced by another 
> call. When a midas::odb is instantiated it reads from these static members and then that data has the lifetime of that instance.

This is the behaviour we need to modify.

> > Of course, with this patch, extending manalyzer to process two or more runs at the same time becomes impossible.
> Yes, I hadn't realised that was an option.

It is an option I would like to keep open. There are not too many use cases, but imagine a "split brain" experiment
that has two MIDAS instances recording data into two separate midas files. (If LIGO were to use MIDAS,
consider LIGO Hanford and LIGO Livingston.)

Assuming data in these two data sets have common precision timestamps,
our task is to assemble data from two input files into single physics events. The analyzer will need
to read two input files, each with its own run number, its own ODB dump, etc., process the midas
events (unpack, calibrate, filter, etc.), look at the timestamps, and assemble the data into physics events.

This trivially generalizes into reading 2, 3, or more input files.

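As a toy illustration of that assembly step (assumed event type; not manalyzer code), merging two
time-ordered streams just means always consuming the stream whose next event is older:

// Toy sketch of time-ordered event assembly from two input files (assumed
// event type; not manalyzer code). Generalizing to N files means keeping
// the next event of each file in a min-heap keyed on the timestamp.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Event { uint64_t ts; int file; };

int main()
{
   std::vector<Event> a{{100, 1}, {250, 1}};   // events from input file 1
   std::vector<Event> b{{120, 2}, {200, 2}};   // events from input file 2
   size_t i = 0, j = 0;
   while (i < a.size() || j < b.size()) {
      // take from 'a' if 'b' is exhausted, or if a's next event is older
      bool takeA = (j == b.size()) || (i < a.size() && a[i].ts <= b[j].ts);
      const Event& e = takeA ? a[i++] : b[j++];
      std::printf("ts=%llu from file %d\n", (unsigned long long)e.ts, e.file);
   }
   return 0;
}
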
> For that to work I guess the aforementioned static members could be made thread local storage, and 
> processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a 
> class member or something.

manalyzer is already multithreaded; if you need to keep track of which thread should see which odbxx global object,
that seems like an abuse of the thread-local storage idea and intent.

> Note that I missed doing the same for the end of run event, which should probably also be added.

Ideally, the memory sanitizer will flag this for us and complain about anything that odbxx.clear() fails to free.

K.O.
  3085   22 Sep 2025   Stefan Ritt   Suggestion   Get manalyzer to configure midas::odb when running offline
> > I will work today on the odbxx API to make sure there are no memory leaks when you switch from one file to another. I talked to KO and he agreed that you then commit your proposed change to manalyzer
> 
> That, and add a "clear()" method that resets the odbxx state to "empty". I will call odbxx.clear() everywhere I call "delete fOdb;" (TARunInfo::dtor and other places).

No need for clear(), since no memory gets allocated by midas::odb::set_odb_source(). All it does is remember the file name. When you instantiate a midas::odb object, the file gets loaded, and the midas::odb object gets initialized from the file contents. Then the buffer 
gets deleted (actually it's a simple local variable). Of course this causes some overhead (each midas::odb constructor reads the whole file), but since the OS will cache the file, it's probably not so bad.

Stefan
  3086   22 Sep 2025   Stefan Ritt   Suggestion   Get manalyzer to configure midas::odb when running offline
> > > Of course, with this patch, extending manalyzer to process two or more runs at the same time becomes impossible.
> > Yes, I hadn't realised that was an option.
> 
> It is an option I would like to keep open. There are not too many use cases, but imagine a "split brain" experiment
> that has two MIDAS instances recording data into two separate midas files. (If LIGO were to use MIDAS,
> consider LIGO Hanford and LIGO Livingston.)
> 
> Assuming data in these two data sets have common precision timestamps,
> our task is to assemble data from two input files into single physics events. The analyzer will need
> to read two input files, each with its own run number, its own ODB dump, etc., process the midas
> events (unpack, calibrate, filter, etc.), look at the timestamps, and assemble the data into physics events.
> 
> This trivially generalizes into reading 2, 3, or more input files.
> 
> > For that to work I guess the aforementioned static members could be made thread local storage, and 
> > processing of each run kept to a specific thread. Although I could imagine user code making assumptions and breaking, like storing a midas::odb as a 
> > class member or something.
> 
> manalyzer is already multithreaded; if you need to keep track of which thread should see which odbxx global object,
> that seems like an abuse of the thread-local storage idea and intent.

I made the global variables storing the file name of type "thread_local", so each thread gets its own copy. This means, however, that each thread must then call midas::odb::set_odb_source() individually before 
creating any midas::odb objects. Interestingly enough, I just learned that thread_local (at least under Linux) has almost zero overhead, since these variables are placed by the linker into a memory space which is 
separate for each thread, so accessing them only means adding a memory offset.

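To make the mechanism concrete, here is a standalone sketch (variable and file names are illustrative; 
this is not the odbxx code):

// Standalone sketch of the thread_local mechanism (names illustrative, not
// the odbxx implementation): each thread gets its own copy of the variable,
// so two threads can point at different ODB dump files without locking.
#include <cstdio>
#include <string>
#include <thread>

static thread_local std::string odb_source;   // one instance per thread

static void analyze(const char* file)
{
   // each thread must set its own copy before creating midas::odb objects
   odb_source = file;
   std::printf("this thread reads ODB data from %s\n", odb_source.c_str());
}

int main()
{
   std::thread t1(analyze, "run00001.mid.lz4");
   std::thread t2(analyze, "run00002.mid.lz4");
   t1.join();
   t2.join();
   return 0;
}
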
Let's see how far we get with this...

Stefan
  3099   26 Sep 2025   Mark Grimes   Suggestion   Get manalyzer to configure midas::odb when running offline
> ...I talked to KO and he agreed that you then commit your proposed change to manalyzer

Merged and pushed.

Thanks,

Mark.
  Draft   26 Sep 2025   Konstantin Olchanski   Suggestion   Get manalyzer to configure midas::odb when running offline
> > ...I talked to KO and he agreed that you then commit your proposed change to manalyzer
> Merged and pushed.

negative. as I already 
  3057   11 Jun 2025   Stefan Ritt   Info   Frontend write cache size
We had issues with the frontend write cache size and the way it was implemented in the frontend 
framework mfe. Specifically, we had two equipments like in examples/experiment/frontend.cxx, one 
trigger event and one periodic event. While the trigger event sets the cache size correctly in the 
equipment table, the periodic event (defined afterwards) overwrites the cache size to zero, causing slow 
data taking.

The underlying problem is that one event buffer (usually "SYSTEM") can only have one cache size for all 
events in one frontend. To avoid mishaps like that, I removed the write cache size from the equipment table 
and instead defined a function which explicitly sets the cache size for a buffer. In your frontend_init() you 
can now call

  set_cache_size("SYSTEM", 10000000);

to set the cache size of the SYSTEM buffer. Note that 10 MiB is the minimum cache size. Especially for 
small events, this can dramatically speed up your maximum DAQ rate.

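For example, in an mfe-based frontend (a minimal sketch; only the relevant function is shown, and the 
header set is assumed to match the usual mfe frontend boilerplate):

#include "midas.h"
#include "mfe.h"

INT frontend_init()
{
   // one write cache per buffer, shared by all equipment using that buffer;
   // 10 MiB is the minimum cache size
   set_cache_size("SYSTEM", 10000000);
   return SUCCESS;
}
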
Full commit is here:

  https://bitbucket.org/tmidas/midas/commits/24fbf2c02037ae5f7db74d0cadab23cd4bfe3b13

Note that this only affects frontends using the mfe framework, NOT those using the tmfe framework.

Stefan
  2874   11 Oct 2024   Denis Calvet   Bug Report   Frontend name must differ from others by more than the last three characters
Hi,
I have developed two Midas front-end programs for different hardware. The frontend_name of the first one is "FSCD_SC" (slow control) and that of the second one is "FSCD_PS" (power supply).

Each front-end program runs fine separately, but when attempting to start FSCD_SC while FSCD_PS is running, FSCD_PS is terminated and Midas indicates "Previous frontend stopped" in the window where it starts FSCD_SC.

The problem is that these two frontend names only differ in their last two characters, and Midas currently does not distinguish them properly.

Looking in mfe.cxx we have:

int main(int argc, char *argv[])
{
...
/* shutdown previous frontend */
   status = cm_shutdown(full_frontend_name, FALSE);
...

And looking in midas.cxx we have:

INT cm_shutdown(const char *name, BOOL bUnique) {
...
if (!bUnique)
            client_name[strlen(name)] = 0;      /* strip number */
...

The above line removes the last 3 characters of the front-end name before the subsequent comparison with other frontend names. Stripping the last 3 characters is correct for frontend programs that use the "-i" command line option to specify an index for that frontend, but otherwise all the characters of the front-end name should be kept for the comparison.

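A simplified illustration of the reported behaviour (not the actual midas code): once the last three characters are stripped, the two names collide.

#include <cstdio>
#include <cstring>

int main()
{
   char running[]  = "FSCD_PS";   // frontend already connected
   char starting[] = "FSCD_SC";   // frontend being started
   running[std::strlen(running) - 3]   = 0;   // "FSCD"
   starting[std::strlen(starting) - 3] = 0;   // "FSCD"
   if (std::strcmp(running, starting) == 0)
      std::puts("names match -> previous frontend gets shut down");
   return 0;
}
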
I have changed the names of my frontend programs to avoid the interference, but it would be nice if the code that determines whether an instance of a frontend program is already running were corrected.

I hope this can help.

Best regards,
Denis.
  2875   11 Oct 2024   Stefan Ritt   Bug Report   Frontend name must differ from others by more than the last three characters
Hi Denis,

indeed a bug. Will fix it next week.

Best,
Stefan


> Hi,
> I have developed two Midas front-end programs for different hardware. The frontend_name of the first one is "FSCD_SC" (slow control) and that of the second one is "FSCD_PS" (power supply).
> 
> Each front-end program runs fine separately, but when attempting to start FSCD_SC while FSCD_PS is running, FSCD_PS is terminated and Midas indicates "Previous frontend stopped" in the window where it starts FSCD_SC.
> 
> The problem is that these two frontend names only differ in their last two characters, and Midas currently does not distinguish them properly.
  2878   18 Oct 2024   Stefan Ritt   Bug Report   Frontend name must differ from others by more than the last three characters
Fixed and committed.

Best,
Stefan

> Hi Denis,
> 
> indeed a bug. Will fix it next week.
> 
> Best,
> Stefan
> 
> 
> > Hi,
> > I have developed two Midas front-end programs for different hardware. The frontend_name of the first one is "FSCD_SC" (slow control) and that of the second one is "FSCD_PS" (power supply).
> > 
> > Each front-end program runs fine separately, but when attempting to start FSCD_SC while FSCD_PS is running, FSCD_PS is terminated and Midas indicates "Previous frontend stopped" in the window where it starts FSCD_SC.
> > 
> > The problem is that these two frontend names only differ in their last two characters, and Midas currently does not distinguish them properly.
  1597   08 Jul 2019   Vinzenz Bildstein   Bug Report   Frontend killed at stop of run
I wrote a C++ frontend to read data from CAEN VX1730 digitizers, which is used in
parallel with the GRIFFIN frontend to read out DESCANT.

After a long overnight run to check that the frontend runs smoothly for a longer
time, I stopped the run and the frontend was killed by midas. I am not sure why
this happened, as the end_of_run function returned successfully (at least the
print statement right before "return SUCCESS;" appeared right away). So
something else must have timed out and caused it to be killed, I guess?

Any suggestions on where to look to find out what causes this?

Thanks in advance for your help!