ID | Date | Author | Topic | Subject |
2819 | 02 Sep 2024 | Marius Koeppel | Suggestion | Improve Event Documentation | > > I am writing a Rust based midas file reader
>
> You might find this library I wrote useful: https://crates.io/crates/midasio
>
> It should "just work", and if it doesn't, I would be interested to know.
Nice! I did not know about this. I now also have a simple reader, but yours looks much more advanced. My
overall idea here is to connect directly to midas, so having some frontend features to analyze the data etc. would be useful. Do
you already have a library for this? I can also extend your stuff.
Best,
Marius |
2847 | 16 Sep 2024 | Marius Koeppel | Bug Report | Crash using ODB watch | This is not the case here. Note that the error message "Callback received for a midas::odb object which went out of scope" is not triggered! The segmentation fault happens later, at line 96 (the fault address 0x34 suggests that search_hkey() returned a null pointer which is then dereferenced).
> The answer is in the error message: „Object went out of scope“. When your frontend_init() exits, the odb objects are destroyed. When you get a callback, it‘s linked to the
> destroyed object. This is like if you have a local string and pass a reference to that string in the return of the function.
>
> Use a global object (bad) or use „new“ (potential memory leak). I would use a global structure which holds all odb objects.
>
> Stefan
>
> >
> > last week I was running MIDAS with commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached is a minimal example frontend demonstrating the problem.
> >
> > In our software we have two functions: one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect twice to the ODB during frontend_init(): one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> >
> > INT frontend_init() {
> >
> >    cm_msg(MINFO, "frontend_init() setup", "Test FE");
> >
> >    odb settings = {
> >       {"Test", 123},
> >       {"sub", {}}
> >    };
> >    settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
> >    // settings.watch(watch); <-- this works without segmentation fault
> >
> >    odb new_settings("/Equipment/Test FE/Settings");
> >    new_settings.watch(watch); // <-- here I am getting a segmentation fault
> >
> >    return CM_SUCCESS;
> > }
> >
> > When I directly set the watch everything runs fine; however, when I create a new ODB object and use this one to set a watch, I am getting the following segmentation fault:
> >
> > Process 18474 stopped
> > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
> > frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
> > 93 if (po->m_data == nullptr)
> > 94 mthrow("Callback received for a midas::odb object which went out of scope");
> > 95 midas::odb *poh = search_hkey(po, hKey);
> > -> 96 poh->m_last_index = index;
> > 97 po->m_watch_callback(*poh);
> > 98 poh->m_last_index = -1;
> > 99 }
> >
> > Best,
> > Marius |
2849 | 16 Sep 2024 | Marius Koeppel | Bug Report | Crash using ODB watch | Okay, but this is then a big issue IMO. For Mu3e we do this in every frontend, and I checked again: all of these watches are broken at the moment (with commit 3ad98c5 they worked).
In the old style we did it, for example, like this (see https://bitbucket.org/tmidas/midas/src/develop/examples/crfe/crfe.cxx):
INT frontend_init()
{
   HNDLE hKey;

   // create Settings structure in ODB
   db_create_record(hDB, 0, "Equipment/Clock Reset/Settings", strcomb1(cr_settings_str).c_str());
   db_find_key(hDB, 0, "/Equipment/Clock Reset", &hKey);
   assert(hKey);
   db_watch(hDB, hKey, cr_settings_changed, NULL);

   /*
    * Set our transition sequence. The default is 500. Setting it
    * to 600 means we are called AFTER most other clients.
    */
   cm_set_transition_sequence(TR_START, 600);

   return CM_SUCCESS;
}
I thought this would be the same (under the hood) in the current odbxx way via:
odb settings("Equipment/Clock Reset/Settings");
settings.watch(cr_settings_changed);
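For reference, a minimal sketch of the workaround Stefan suggests (a global structure that keeps the odbxx objects alive). The holder struct and its names are illustrative, not MIDAS API:

#include "midas.h"
#include "odbxx.h"

void cr_settings_changed(midas::odb &o); // callback with the odbxx signature

// illustrative holder: one place that owns all odb objects of the frontend
struct FrontendOdb {
   midas::odb settings;
   FrontendOdb() : settings("/Equipment/Clock Reset/Settings") {}
};
static FrontendOdb *gOdb = nullptr; // created once, never deleted

INT frontend_init()
{
   gOdb = new FrontendOdb();
   gOdb->settings.watch(cr_settings_changed); // object stays alive, so the callback stays valid
   return CM_SUCCESS;
}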
Best,
Marius
> Well, the object *went* out of scope. For my code it‘s hard to realize this, so the error reporting is poor. Also the first object should have the same
> problem. It is just by accident that it does not crash.
>
> Stefan
>
> > This is not the case here. Note that the error message "Callback received for a midas::odb object which went out of scope" is not triggered! The segmentation fault happens later, at line 96.
> >
> > > The answer is in the error message: „Object went out of scope“. When your frontend_init() exits, the odb objects are destroyed. When you get a callback, it‘s linked to the
> > > destroyed object. This is like if you have a local string and pass a reference to that string in the return of the function.
> > >
> > > Use a global object (bad) or use „new“ (potential memory leak). I would use a global structure which holds all odb objects.
> > >
> > > Stefan
> > >
> > > >
> > > > last week I was running MIDAS with commit 3ad98c5. Today I updated MIDAS and now all my watch functions are crashing. Attached is a minimal example frontend demonstrating the problem.
> > > >
> > > > In our software we have two functions: one which sets up the ODB values of the frontend and another one which sets up all watch functions. So overall we connect twice to the ODB during frontend_init(): one time to create the values and one time to create the watch. In the example code a simple version of this setup is shown:
> > > >
> > > > INT frontend_init() {
> > > >
> > > >    cm_msg(MINFO, "frontend_init() setup", "Test FE");
> > > >
> > > >    odb settings = {
> > > >       {"Test", 123},
> > > >       {"sub", {}}
> > > >    };
> > > >    settings.connect_and_fix_structure("/Equipment/Test FE/Settings");
> > > >    // settings.watch(watch); <-- this works without segmentation fault
> > > >
> > > >    odb new_settings("/Equipment/Test FE/Settings");
> > > >    new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > > >
> > > >    return CM_SUCCESS;
> > > > }
> > > >
> > > > When I directly set the watch everything runs fine; however, when I create a new ODB object and use this one to set a watch, I am getting the following segmentation fault:
> > > >
> > > > Process 18474 stopped
> > > > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x34)
> > > > frame #0: 0x000000010004fa38 test_fe`midas::odb::watch_callback(hDB=<unavailable>, hKey=<unavailable>, index=0, info=0x00006000002001c0) at odbxx.cxx:96:25 [opt]
> > > > 93 if (po->m_data == nullptr)
> > > > 94 mthrow("Callback received for a midas::odb object which went out of scope");
> > > > 95 midas::odb *poh = search_hkey(po, hKey);
> > > > -> 96 poh->m_last_index = index;
> > > > 97 po->m_watch_callback(*poh);
> > > > 98 poh->m_last_index = -1;
> > > > 99 }
> > > >
> > > > Best,
> > > > Marius |
2852 | 18 Sep 2024 | Marius Koeppel | Bug Report | Crash using ODB watch | I created a PR to fix this issue: https://bitbucket.org/tmidas/midas/pull-requests/42.
The crash happened because, since commit 3ad98c5, the ODB was always fetched via XML.
However, creation from XML should only be used when a user wants to read fast (and when we are on a remote machine), so I added the flag use_from_xml to specify this explicitly.
> > {
> > odb new_settings("/Equipment/Test FE/Settings");
> > new_settings.watch(watch); // <-- here I am getting a segmentation fault
> > }
>
> this code has a bug. "watch" is attached to object "new_settings" that is deleted
> after the closing curly bracket.
> I would say Stefan's odb API should not allow you to write code like this. an API defect.
As pointed out in the thread, this feature is explicitly supported by odbxx.cxx:
void odb::watch(std::function<void(midas::odb &)> f) {
   if (m_hKey == 0 || m_hKey == -1)
      mthrow("watch() called for ODB key \"" + m_name +
             "\" which is not connected to ODB");

   // create a deep copy of current object in case it
   // goes out of scope
   midas::odb* ow = new midas::odb(*this);
   ow->m_watch_callback = f;
   db_watch(s_hDB, m_hKey, midas::odb::watch_callback, ow);

   // put object into watchlist
   g_watchlist.push_back(ow);
}
Also in the old way (see for example https://bitbucket.org/tmidas/midas/src/191d13f98626fae533cbca17b00df7ee361edf16/examples/crfe/crfe.cxx#lines-126) it was possible to create a watch in a scope without the user having to make sure that the "object" does not go out of scope.
I think this feature should be supported by the framework.
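Given that deep copy, a watch set on a scoped object survives the end of the scope; a minimal usage sketch (assuming the quoted odbxx behaviour):

#include <iostream>
#include "odbxx.h"

void setup_watch()
{
   midas::odb settings("/Equipment/Test FE/Settings");
   settings.watch([](midas::odb &o) {
      std::cout << "Settings changed: " << o << std::endl;
   });
} // "settings" is destroyed here, but the deep copy in g_watchlist keeps the watch alive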
Best,
Marius |
2022 | 25 Nov 2020 | Marco Francesconi | Suggestion | ODBSET wildcards with array keys in Sequencer files | Hi,
I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not
support "?".
Are you trying to set only the first 9 channels?
Could you try with "[*]" or "[0-9]" instead?
Marco
> I'm interested in using the matching feature for ODBSET explained on
> https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
> array, like:
>
> COMMENT "Ground the detectors"
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
>
> Currently I get an error when I try to run this script. Is this expected? Would it
> be possible to implement matching for array values?
>
> Thanks! |
2024 | 25 Nov 2020 | Marco Francesconi | Suggestion | ODBSET wildcards with array keys in Sequencer files | I created some keys in my ODB to try to match yours.
The ODBSET commands you wrote are all working fine (of course with different results), except for "/Detectors/Det*/Settings/Charge/Bias (V)*", which I will have to look into.
In any case the error message I'm getting is "could not match any key" and not the one you are reporting.
Now I'm a bit puzzled:
Are you sure your ODB contains those keys?
Are you testing the ODBSET inside a more complex sequencer or on its own?
Maybe I can try to reproduce it using your ODB setup.
Could you send an ODB dump of the "/Detectors" folder using the "save" command of odbedit ("cd /Detectors" and then "save detector.odb")?
Best,
Marco
> The following all fail with "Cannot find ODB key "<key>""
>
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
>
>
> > Hi,
> > I guess the issue is in the "[?]" part of the command, the indexing is handled differently from the odb path and does not
> > support "?".
> > Are you trying to set only the first 9 channels?
> > Could you try with "[*]" or "[0-9]" instead?
> >
> > Marco
> >
> > > I'm interested in using the matching feature for ODBSET explained on
> > > https://midas.triumf.ca/MidasWiki/index.php/Sequencer for settings that are in an
> > > array, like:
> > >
> > > COMMENT "Ground the detectors"
> > > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[?]" 0
> > >
> > > Currently I get an error when I try to run this script. Is this expected? Would it
> > > be possible to implement matching for array values?
> > >
> > > Thanks! |
2038 | 30 Nov 2020 | Marco Francesconi | Suggestion | ODBSET wildcards with array keys in Sequencer files | I totally agree that we should have a consistent format for array index expansion.
I had a look at the mjsonrpc code and found the function parse_array_index_list(...) which does this job.
I have a similar function (adapted from previous code) in odb.cxx called strarrayindex(...) that is designed for the same "consistency" purposes between odbedit and the sequencer.
Let me list a few points that I noticed:
- mjsonrpc has a very different way to write the full array (no indexes given), while the sequencer currently requires "[*]" to do the same (otherwise it only changes the first value of the array)
- currently the sequencer and the underlying ODB calls use two indexes (which are the same if you want to write only one key), so we will need a serious rewrite to allow something like "ODBSET array[1,3,5]"
- if I understood the code correctly, mjsonrpc instead generates a list of indices and then calls an ODB write on each of them. That's not always a good thing: for example, if you are writing an array of n parameters on a DAQ
board, you will call the hotlink on that key n times
- in addition, the sequencer will also have to cope with variable-based indexes like "ODBSET array[$val]", but then how should it parse something like "[$a,1]" or "[$a*]"?
For the very first point I do not see a clean way to do this without breaking the compatibility of existing sequencers or having a difference between the two implementations.
For the others I guess we can find a way out; however, that's a major modification, so I will put it on my todo list for when I can find some free time.
In any case I would propose to merge the two functions, so we only have to maintain a single implementation of the parsing.
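As a starting point, a rough sketch of what a single shared parser could look like (purely illustrative, not the actual parse_array_index_list() or strarrayindex() code):

#include <sstream>
#include <string>
#include <vector>

// "[*]" or empty -> whole array; "[2]" -> {2}; "[1-3]" -> {1,2,3}; "[1,3,5]" -> {1,3,5}
std::vector<int> parse_index_list(const std::string &spec, int array_size)
{
   std::vector<int> out;
   if (spec.empty() || spec == "[*]") {                  // whole array
      for (int i = 0; i < array_size; i++)
         out.push_back(i);
      return out;
   }
   std::string body = spec.substr(1, spec.size() - 2);   // strip '[' and ']'
   std::istringstream ss(body);
   std::string item;
   while (std::getline(ss, item, ',')) {                 // comma-separated items
      size_t dash = item.find('-');
      if (dash != std::string::npos) {                   // range "a-b"
         int a = std::stoi(item.substr(0, dash));
         int b = std::stoi(item.substr(dash + 1));
         for (int i = a; i <= b; i++)
            out.push_back(i);
      } else {
         out.push_back(std::stoi(item));                 // single index
      }
   }
   return out;
}

Variable-based indexes like "[$val]" would have to be substituted before this step, which is exactly the open question above.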
I guess it's a good moment to brainstorm about that; let me know what you think.
Marco
> > The following all fail with "Cannot find ODB key "<key>""
> >
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[*]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[0-9]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)[1]" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)*" 0
> > ODBSET "/Detectors/Det*/Settings/Charge/Bias (V)" 0
> >
>
> It would be cool if ODB pattern matching in the sequencer
> were consistent with the ODB pattern matching in the json-rpc
> interface for web pages:
>
> https://midas.triumf.ca/MidasWiki/index.php/Mjsonrpc#Supported_array_index_syntax
>
> K.O. |
2218 | 16 Jun 2021 | Marco Francesconi | Info | 1000 Mbytes/sec through midas achieved! | As reported by Stefan, in MEG II we have very similar ethernet throughputs.
In total, we have 34 crates each with 32 DRS4 digitiser chips and a single 1 Gbps readout link through a Xilinx Zynq SoC.
The data arrives in push mode without any external intervention, the only throttling being an optional prescaling on the trigger rate.
We discovered the hard way that 1 Gbps throughput on Zynq is not trivial at all: the embedded ethernet MAC does not support jumbo frames (always read the fine print in the manuals!) and the embedded Linux ethernet stack seems to struggle when we go beyond 250 Mbps of UDP traffic.
Anyhow, even with the reduced speed, the maximum throughput at network input is around 8.5 Gbps which passes through the Mikrotik switches mentioned by Stefan.
We had very bad experiences in the past with similar price-point switches, observing huge packet drops when the instantaneous switching capacity cannot cope with the traffic, but so far we are happy with the Mikrotik ones.
On the receiver side, we have the DAQ server with an Intel E5-2630 v4 CPU and a 10 Gbit connection to the network using an Intel X710 Network card.
In the past, we also used a "cheap" 10 Gbit card from Tehuti, but the driver performance was so bad that it could not digest more than 5 Gbps of data.
The current frontend is based on the mfe.c scheme for historical reasons (the very first version dates back to 2015).
We opted for a monolithic multithread solution so we can reuse the underlying DAQ code for other experiments which may not have the complete Midas backend.
Just to mention them: one is the FOOT experiment (which afaik uses an adapted version of the ATLAS DAQ) and the other is the LOLX experiment (for which we are soon going to ship a small 32-channel system using Midas to Canada).
A major modification to Konstantin's scheme is that we need to calibrate all WFMs online so that a software zero suppression can be applied to reduce the final data size (that part is still to be implemented).
This requirement results in additional resource usage to parse the UDP content into floats and calibrate them.
Currently, we have 7 packet collector threads to digest the full packet flow (using recvmmsg), followed by an event building stage that uses 4 threads and 3 other threads for WFM calibration.
We have progressive packet numbers on each packet generated by the hardware and a set of flags marking the start and end of the event; combining the packet-number difference between the start and end of the event with the total received packets for that event, it is really easy to understand if packet drops are happening.
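For illustration, that check boils down to comparing the sequence-number span with the number of packets actually collected; a small sketch (the struct and field names are assumptions, not our actual code):

#include <cstdint>

struct EventPackets {
   uint32_t first_seq;   // progressive packet number seen with the start-of-event flag
   uint32_t last_seq;    // progressive packet number seen with the end-of-event flag
   uint32_t n_received;  // packets actually collected for this event
};

// true if at least one packet of this event was dropped
bool has_drops(const EventPackets &ev)
{
   uint32_t expected = ev.last_seq - ev.first_seq + 1;   // unsigned arithmetic survives counter wrap-around
   return ev.n_received != expected;
}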
All the thread infrastructure was tested and we could digest the complete throughput; we still have to finalise the full 10 Gbit connection to Midas because the final system has been installed only recently (April).
We are using the EQ_USER flag to push events into mfe.c buffers with up to 4 threads, but I observed that above ~1.5 Gbps rb_get_wp() almost always returns DB_TIMEOUT and I'm forced to drop the event.
This conflicts with the measurements reported by Stefan (we were discussing this yesterday), so we are still investigating the possible cause.
It is difficult to report three years of development in a single Elog entry; I hope I put all the relevant points here.
It looks to me like we opted for very complementary approaches to high-throughput ethernet with Midas, and I think there are still a lot of details that could be worth reporting.
In case someone organises some kind of "virtual workshop" on this, I'm willing to participate.
Best,
Marco
> In MEG II we also kind of achieved this rate. Marco F. will post an entry soon to describe the details. There is only one thing
> I want to mention, which is our network switch. Instead of an expensive high-grade switch, we chose a cheap "Chinese" high-grade
> switch. We have "rack switches", which are collector switches for each rack, receiving up to 10 x 1GBit inputs and outputting 1 x
> 10 GBit to an "aggregation switch", which collects all 10 GBit lines from the rack switches and forwards them with (currently a
> single) 10 GBit line. For the rack switch we use a
>
> MikroTik CRS354-48G-4S+2Q+RM 54 port
>
> and for the aggregation switch
>
> MikroTik CRS326-24S-2Q+RM 26 Port
>
> both cost in the order of 500 US$. We were astonished that they don't lose UDP packets when all inputs send a packet at the
> same time and they have to pipe them to the single output one after the other; apparently the switches have enough buffers
> (which is usually NOT written in the data sheets).
>
> To avoid UDP packet loss for several events, we do traffic shaping by arming the trigger only when the previous event is
> completely received by the frontend. This eliminates all flow control and other complicated methods. Marco can tell you the
> details.
>
> Another interesting aspect: While we get the data into the frontend, we have problems in getting it through midas. Your
> bm_send_event_sg() is maybe a good approach which we should try. To benchmark the out-of-the-box midas, I run the dummy frontend
> attached on my MacBook Pro 2.4 GHz, 4 cores, 16 GB RAM, 1 TB SSD disk. I got
>
> Event size: 7 MB
>
> No logging: 900 events/s = 6.7 GBytes/s
>
> Logging with LZ4 compression: 155 events/s = 1.2 GBytes/s
>
> Logging without compression: 170 events/s = 1.3 GBytes/s
>
> So with this simple approach I got already more than 1 GByte of "dummy data" through midas, indicating that the buffer
> management is not so bad. I did use the plain mfe.c frontend framework, no bm_send_event_sg() (but mfe.c uses rpc_send_event() which is an
> optimized version of bm_send_event()).
>
> Best,
> Stefan |
2235 | 25 Jun 2021 | Marco Francesconi | Bug Fix | changes in history plots | We are using the new history formula as a quick way to convert signals from sensors into actual physical values (for example Voltage->Temperature, Voltage->relative humidity, ...), so it is great that the shown voltage is the calculated one.
I would like to add a point to this discussion.
In our collaboration people attach images of history plots to elogs, meeting presentation and/or physical logbooks.
The proposed scaling formula may work fine online using the cursors but, once an image is created, I do not understand how it is possible to extract the value of a scaled variable.
Suppose you see a graph in a presentation with a current increase on some PSU, where the current was scaled to be in the same plot as the voltage.
Looking at the delta in the image, how can you judge the current increase without any axis/grid to refer to?
So I support Stefan's proposal for a secondary axis, as long as it is clear which value belongs to which axis.
Maybe marking the channels in the description or using different line styles/thickness?
Best,
Marco |
2240 | 28 Jun 2021 | Marco Francesconi | Suggestion | ODB Load in Sequencer | Hi all,
for my experiment we ended up needing to change a lot of parameters (~9000 values) in the ODB at once from the sequencer.
The very first solution was to use a sequencer function with a ton of ODBSET calls; however, a more elegant solution may be to provide an "ODBLoad" command which mimics the "load" command of odbedit.
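In a script this could then look like the following (proposed syntax; the final command name and format may differ):

COMMENT "Load the full detector configuration in one go"
ODBLoad "detector_settings.odb"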
I already have a working modification to the sequencer for this; if you agree I will commit it to a dedicated branch.
Let me know if you think this is a good approach.
Marco F |
2247 | 28 Jun 2021 | Marco Francesconi | Suggestion | ODB Load in Sequencer | My idea was to collect some feedback instead of blindly submitting code for a pull request.
Currently I'm just calling db_load() with a given file, so it only supports the .odb format.
It is pretty easy to extend to JSON by calling db_load_json() depending on the file extension.
I do not see a similar call for the .xml format; maybe I can study tomorrow how it is implemented in odbedit and port it to the sequencer.
I guess that ODBPasteJSON could be a solution as well, but I find it a bit too technical.
Anyway it is easy to implement just by calling db_paste_json(); I will keep this in mind.
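A sketch of the extension-based dispatch (db_load() is the existing MIDAS call; the JSON branch is only an assumption about how it could be wired up):

#include <cstring>
#include "midas.h"

INT load_odb_file(HNDLE hDB, const char *filename)
{
   const char *ext = strrchr(filename, '.');
   if (ext && strcmp(ext, ".json") == 0) {
      // hypothetical JSON branch: read the file into a buffer and
      // hand it to db_paste_json() (exact call still to be worked out)
      return DB_FILE_ERROR; // not implemented in this sketch
   }
   return db_load(hDB, 0, filename, FALSE); // plain .odb format
}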
I'll try to sort this out and make a commit soon.
Best,
Marco
> > ... at MEG, we keep hundreds of XML files for configuration. Mostly historical, but that's how it is.
>
> same here, lots of historical .odb and .xml files.
>
> I think the .odb and .xml support is here to stay. Best I remember, latest things I fixed in both
> was support for unlimited string length (and removal of associated buffer overflows). Right now,
> I am not sure if both are UTF-8 clean and if they properly escape all control characters,
> something to fix as we go or as we bump into problems.
>
> K.O. |
2248 | 29 Jun 2021 | Marco Francesconi | Suggestion | ODB Load in Sequencer | I just submitted a pull request for this feature; I did quite a lot of testing and it looks good to me.
Let me know if something is not clear.
I'll take care of adding the relevant information to the wiki once it is merged.
Best,
Marco
> My idea was to collect some feedback instead of blindly submitting code for a pull request.
>
> Currently I'm just calling db_load() with a given file, so it only supports the .odb format.
> It is pretty easy to extend to JSON by calling db_load_json() depending on the file extension.
> I do not see a similar call for the .xml format; maybe I can study tomorrow how it is implemented in odbedit and port it to the sequencer.
>
> I guess that ODBPasteJSON could be a solution as well, but I find it a bit too technical.
> Anyway it is easy to implement just by calling db_paste_json(); I will keep this in mind.
>
> I'll try to sort this out and make a commit soon.
> Best,
>
> Marco
>
>
>
> > > ... at MEG, we keep hundreds of XML files for configuration. Mostly historical, but that's how it is.
> >
> > same here, lots of historical .odb and .xml files.
> >
> > I think the .odb and .xml support is here to stay. Best I remember, latest things I fixed in both
> > was support for unlimited string length (and removal of associated buffer overflows). Right now,
> > I am not sure if both are UTF-8 clean and if they properly escape all control characters,
> > something to fix as we go or as we bump into problems.
> >
> > K.O. |
2533 | 13 Jun 2023 | Marco Francesconi | Forum | Include subroutine through relative path in sequencer | > > Hi, I would like to restructure our sequencer scripts and the paths. Until now many things are not generic at all. I would like to ask if it is possible to include files through a relative path, for example something like
> > INCLUDE ../chip/global_basic_functions
> > Maybe I just did not find how to do it.
>
> It was not there. I implemented it in the last commit.
>
> Stefan
Hi Stefan,
when I did this job for MEG II we decided not to allow relative paths and the ".." folder, to avoid an exploit called "XML Entity Injection".
In short, the point is to avoid leaking files outside the sequencer folders, like /etc/passwd or private SSH keys.
I do not remember at the moment why we pushed for absolute paths instead, but let's keep this in mind.
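For illustration, the kind of guard this implies (sketch only, not the actual sequencer code):

#include <string>

// reject INCLUDE paths that contain parent-directory components,
// so a script cannot escape the sequencer folders
bool include_path_is_safe(const std::string &path)
{
   return path.find("..") == std::string::npos;
}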
Marco |
2173 | 26 May 2021 | Marco Chiappini | Info | label ordering in history plot | Dear all,
is there any way to order the labels in the history plot legend? In the old
system there was the “order” column in the config panel, but I cannot find it
in the new system. Thanks in advance for the support.
Best regards,
Marco Chiappini |
2597 | 12 Sep 2023 | Maia Henriksson-Ward | Suggestion | Syntax highlighting for sequencer scripts | Recently I was trying to read sequencer scripts written by a previous student, and realized it would be easier to
quickly read/skim sequencer code with some form of syntax highlighting. I've been using Visual Studio Code as my
editor, so I made myself an extension for VS Code that provides basic syntax highlighting (with help from
ChatGPT-3.5). It's good enough for my purposes, but is missing some features you'd expect for full language
support. This got me wondering - does anything like this already exist, perhaps with more complete support?
If it doesn't already exist, and if there is interest, I could publish mine
to vscode's "Extension Marketplace" for easy installation (I'd also welcome contributions for
more features). For now, I've installed it on my computer directly from the .vsix file, which I've put on my own
github at https://github.com/maia-hw/midas-sequencer-support . There is also a readme with a screenshot showing what scripts
will look like with the highlighting. |
2718 | 26 Feb 2024 | Maia Henriksson-Ward | Forum | mserver ERR message saying data area 100% full, though it is free | > Hi,
>
> I have just installed Midas and set-up the ODB for a SuperCDMS test-facility (on
> a SL6.7 machine). All works fine except that I receive the following error message:
>
> [mserver,ERROR] [odb.c:944:db_validate_db,ERROR] Warning: database data area is
> 100% full
>
> Which is puzzling for the following reason:
>
> -> I have created the ODB with: odbedit -s 4194304
> -> Checking the size of the .ODB.SHM it says: 4.2M
> -> When I save the ODB as .xml and check the file's size it says: 1.1M
> -> When I start odbedit and check the memory usage issuing 'mem', it says:
> ...
> Free Key area: 1982136 bytes out of 2097152 bytes
> ...
> Free Data area: 2020072 bytes out of 2097152 bytes
> Free: 1982136 (94.5%) keylist, 2020072 (96.3%) data
>
> So it seems like nearly all memory is still free. As a test I created more
> instances of one of our front-ends and checked 'mem' again. As expected the free
> memory was decreasing. I did this ten times in fact, reaching
>
> ...
> Free Key area: 1440976 bytes out of 2097152 bytes
> ...
> Free Data area: 1861264 bytes out of 2097152 bytes
> Free: 1440976 (68.7%) keylist, 1861264 (88.8%) data
>
> So I could use another >20% of the database data area, which is according to the
> error message 100% (resp. >95%) full. Am I misunderstanding the error message?
> I'd appreciate any comments or ideas on that subject!
>
> Thanks, Belina
This is an old post, but I encountered the same error message recently and was looking for a
solution here. Here's how I solved it, for anyone else who finds this:
The size of .ODB.SHM was bigger than the maximum ODB size (4.2M > 4194304 in Belina's case). For us,
the very large odb size was in error and I suspect it happened because we forgot to shut down midas
cleanly before shutting the computer down. Using odbedit to load a previously saved copy of the ODB
did not help me to get .ODB.SHM back to a normal size. Following the instructions on the wiki for
recovery from a corrupted odb,
https://daq00.triumf.ca/MidasWiki/index.php/FAQ#How_to_recover_from_a_corrupted_ODB, (odbinit with --cleanup option) should
work, but didn't for me. Unfortunately I didn't save the output to figure out why. My solution was to manually delete/move/hide
the .ODB.SHM file, and an equally large file called .ODB.SHM.1701109528, then run odbedit again and reload that same saved copy of my ODB.
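In summary, the sequence that worked for me (file names are from my case, adjust to yours):

mv .ODB.SHM .ODB.SHM.bad                  # move the oversized file out of the way
mv .ODB.SHM.1701109528 .ODB.SHM.1701109528.bad
odbedit                                   # recreates a fresh ODB
load my_saved_odb.odb                     # inside odbedit: reload the saved dump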
Manually changing files used by mserver is risky - for anyone who has the same problem, I suggest trying odbinit --cleanup -s
<yoursize> first. |
437 | 19 Feb 2008 | Maggie Lee | Bug Fix | "make install" error on MacOS 10.4.7, svn 3366 | > While executing "make install" under MacOS 10.4.7, you may encounter errors about "dio". It is the
> problem of "Makefile". I did some change to it and attach the diff file here.
Thank you very much for your instructions for installing Midas on MacOSX.
I followed your instructions to change the Makefile but I still get the following error message:
...
... Installing programs and utilities to /usr/local/bin
...
install: darwin/bin/lazylogger exists but is not a directory
install: darwin/bin/mchart exists but is not a directory
install: darwin/bin/mcnaf exists but is not a directory
install: darwin/bin/mdump exists but is not a directory
install: darwin/bin/melog exists but is not a directory
install: darwin/bin/mhdump exists but is not a directory
install: darwin/bin/mhist exists but is not a directory
install: darwin/bin/mhttpd exists but is not a directory
install: darwin/bin/mlogger exists but is not a directory
install: darwin/bin/mlxspeaker exists but is not a directory
install: darwin/bin/mserver exists but is not a directory
install: darwin/bin/mstat exists but is not a directory
install: darwin/bin/mtape exists but is not a directory
install: darwin/bin/odbedit exists but is not a directory
install: darwin/bin/odbhist exists but is not a directory
install: darwin/bin/stripchart.tcl exists but is not a directory
install: darwin/bin/webpaw exists but is not a directory
make: *** [install] Error 71
Could you help me solve this problem? Thank you in advance =) |
438 | 19 Feb 2008 | Maggie Lee | Bug Fix | "make install" error on MacOS 10.4.7, svn 3366 | I forgot to mention that the following (and similar) lines:
install -v -D -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
are changed into
install -v -d -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
since -D is an illegal option for install. I am not sure whether -D in Linux means the same thing as -d in MacOSX install.
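Note that on macOS "install -d" treats its arguments as directories to create, which would explain the "exists but is not a directory" errors in my previous message. A portable alternative (sketch, untested) is to create the target directory once and drop the flag:

mkdir -p $(SYSBIN_DIR) ; \
install -v -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \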
> > While executing "make install" under MacOS 10.4.7, you may encounter errors about "dio". It is the
> > problem of "Makefile". I did some change to it and attach the diff file here.
>
> Thank you very much for your instructions for installing Midas on MacOSX.
> I followed your instructions to change the Makefile but I still get the following error message:
>
> ...
> ... Installing programs and utilities to /usr/local/bin
> ...
> install: darwin/bin/lazylogger exists but is not a directory
> install: darwin/bin/mchart exists but is not a directory
> install: darwin/bin/mcnaf exists but is not a directory
> install: darwin/bin/mdump exists but is not a directory
> install: darwin/bin/melog exists but is not a directory
> install: darwin/bin/mhdump exists but is not a directory
> install: darwin/bin/mhist exists but is not a directory
> install: darwin/bin/mhttpd exists but is not a directory
> install: darwin/bin/mlogger exists but is not a directory
> install: darwin/bin/mlxspeaker exists but is not a directory
> install: darwin/bin/mserver exists but is not a directory
> install: darwin/bin/mstat exists but is not a directory
> install: darwin/bin/mtape exists but is not a directory
> install: darwin/bin/odbedit exists but is not a directory
> install: darwin/bin/odbhist exists but is not a directory
> install: darwin/bin/stripchart.tcl exists but is not a directory
> install: darwin/bin/webpaw exists but is not a directory
> make: *** [install] Error 71
>
> Could you help me solve this problem? Thank you in advance =) |
441 | 19 Feb 2008 | Maggie Lee | Bug Fix | "make install" error on MacOS 10.4.7, svn 3366 | Thank you for your help =)
Since SYSBIN_DIR is defined as /usr/local/bin in the Makefile and it already exists on my computer, I deleted the -D in the Makefile and tried "make install" again, and the
error message becomes:
...
... Installing programs and utilities to /usr/local/bin
...
/bin/sh: -c: line 2: syntax error: unexpected end of file
make: *** [install] Error 2
Can anyone help me solve this problem?
> > I forgot to mention that, the following (and similar) lines:
> > install -v -D -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
> > are changed into
> > install -v -d -m 755 $$file $(SYSBIN_DIR)/`basename $$file` ; \
> >
> > since -D is an illegal option for install. I am not sure whether -D in Linux means the same thing for -d in MacOSX install.
>
> -D under linux means:
>
> -D create all leading components of DEST except the last, then
> copy SOURCE to DEST; useful in the 1st format
>
> This means that if you install for the first time and either SYSBIN_DIR or the `basename` path does not exist, it will be created on the fly by
> the install program. If OSX does not support this, you somehow have to create these subdirectories manually. |
1352 | 12 Mar 2018 | Lukas Gerritzen | Forum | EQ_MANUAL_TRIG no button in web interface | Hi,
according to the wiki, setting the equipment flag EQ_MANUAL_TRIG is supposed to
have the mhttpd web interface provide a button for manual triggering. It appears that just setting this flag is not enough, or this feature is broken. The equipment shows up, but there is no button to manually trigger it.
A somewhat related question: can I log this kind of event while the current run is stopped, or is it necessary to start a dedicated run for this?
Cheers
Lukas |