ID | Date | Author | Topic | Subject |
1059 | 15 May 2015 | Konstantin Olchanski | Suggestion | checksums for midas data files |
> > Any thoughts on this?
>
> We use binary midas files now for ~20 years and never felt the necessity to put any checksums or even encryption on these files ...
>
"I have never seen a corrupted file, therefore nobody should ever need checksums". Well,
1) actually if you write mid.gz files, you get gzip checksums "for free" (but the checksums are not recorded anywhere, so 5 years later you cannot confirm that the file did not change).
2) I had a defective computer once where reading the same file several times yielded different data. (the defect was on the motherboard, not in the disks)
3) I am presently testing the btrfs filesystem which (like ZFS) keeps checksums for all data. For these tests I am using 3rd quality disks and I see btrfs regularly detect (and correct) "data corruption" events - where data on disk has changed.
4) there was a report from CERN(?) where they checked the checksums on a large number of data files and found a good number of corrupted files.
So bit rot does exist.
In more practical terms:
a) CRC32C is "free" to compute (hardware accelerated on latest CPUs), but does not detect malicious file modifications
b) SHA256 does detect that (but for how long?), but probably too expensive to compute (speed measurement TBD).
c) gzip compressed files have internal whole-file CRC32
d) bzip2 compressed files have internal per-block CRC32
e) lz4 compressed files have internal per-block xxhash checksums
Personally, when dealing with compressed files, I prefer to have a checksum recorded somewhere that I can check against after I decompress the file.
I think there is no need to add checksums to the MIDAS data files format itself (see c,d,e above).
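For reference, a minimal sketch (assuming OpenSSL 1.1 or later is available; this is not part of MIDAS) of computing such a checksum so it can be recorded and checked after decompression:

#include <stdio.h>
#include <openssl/evp.h>

/* compute the SHA-256 of a file so it can be recorded next to the data file */
int sha256_file(const char *path, unsigned char digest[EVP_MAX_MD_SIZE], unsigned int *len)
{
   FILE *f = fopen(path, "rb");
   if (!f)
      return -1;
   EVP_MD_CTX *ctx = EVP_MD_CTX_new();
   EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
   char buf[65536];
   size_t n;
   while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
      EVP_DigestUpdate(ctx, buf, n);
   EVP_DigestFinal_ex(ctx, digest, len);
   EVP_MD_CTX_free(ctx);
   fclose(f);
   return 0;
}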
K.O. |
1119 | 28 Sep 2015 | Anthony Villano | Suggestion | Feature Request: MIDAS sequencer abort. |
I am working for the SuperCDMS collaboration on some DAQ issues for our upcoming
SNOLAB installation. So far, the MIDAS sequencer seems to be a good paradigm
for us to do procedural tasks for our detectors and data running interspersed
with other protocols.
In our testing we've found that the sequencer works very well for this kind of
activity, although it would be useful to have a kind of scripted "abort" for
when something goes wrong -- especially if the user selects to abort a run
sequence.
Because the sequencer is setting various detector parameters to a certain value
before performing the tasks, the values will never be restored if the user
aborts the sequence. Instead, perhaps there can be a portion of a MIDAS
sequence script which is instructed to run on an abort -- perhaps all
commands after a given tag like:
ON ABORT:
could get run on a user-initiated abort? |
1123 | 22 Oct 2015 | Konstantin Olchanski | Suggestion | Feature Request: MIDAS sequencer abort. |
> it would be useful to have a kind of scripted "abort" for when something goes wrong ...
How about having the sequencer switch from the aborted sequence file to the special "abort" sequence file? That
should be simple to implement if it is not already there.
K.O. |
1124 | 22 Oct 2015 | Stefan Ritt | Suggestion | Feature Request: MIDAS sequencer abort. |
> > it would be useful to have a kind of scripted "abort" for when something goes wrong ...
>
> How about having the sequencer switching from the aborted sequence file to the special "abort" sequence file? That
> should be simple to implement if it is not already there.
>
> K.O.
It's about the same effort if we jump to a specific label in a script or to a separate script. I just have to find some time to implement it.
Stefan |
1125 | 24 Oct 2015 | Stefan Ritt | Suggestion | Feature Request: MIDAS sequencer abort. |
> It would be useful to have it be specified for each script. Reason is that it's simpler, some scripts might only
> change a few sensitive settings, then on abort it only has to set back to "normal" what it touched to begin with.
> Also, the "normal" values are usually stored in local variables, so it's important to have those similarly accessible
> to the "Abort" portion of the script.
Agree. So I will put a special optional label, which will be accessed upon abort.
Stefan |
1150 | 10 Dec 2015 | Amy Roberts | Suggestion | script command limited to 256 characters; remove limit? |
Both the /Script and /CustomScript trees in the ODB allow users to trigger a
script via Midas - which silently truncates command strings longer than
256 characters.
I'd prefer that Midas place no limit on string length. Failing that, it would be
helpful to have character limits called out in the documentation
(https://midas.triumf.ca/MidasWiki/index.php//Script_ODB_tree#.3Cscript-name.3E_key_or_subtree,
https://midas.triumf.ca/MidasWiki/index.php//Customscript_ODB_tree).
As far as I can tell, odb.c allows arbitrarily large strings in the ODB data.
(Although key *names* are restricted to 256 characters.) I've submitted one
possible version of an arbitrary-length exec_script() as a pull request
(https://bitbucket.org/tmidas/midas/pull-requests/).
Am I misunderstanding any critical pieces? Does Midas intentionally treat
strings in the ODB as limited to 256 characters? |
1152 | 05 Jan 2016 | Tom Stuttard | Suggestion | 64 bit bank type |
I've seen that a similar question has been asked in 2011 but I'll ask again in
case there are any updates. Is there any way to write 64-bit data words to MIDAS
banks (other than breaking them up in to two 32-bit words, such as 2 DWORDs)
currently? And if not, is there any plan to introduce this feature in the future?
Many thanks,
Tom |
1153 | 05 Jan 2016 | Konstantin Olchanski | Suggestion | 64 bit bank type |
> I've seen that a similar question has been asked in 2011 but I'll ask again in
> case there are any updates. Is there any way to write 64-bit data words to MIDAS
> banks (other than breaking them up in to two 32-bit words, such as 2 DWORDs)
> currently? And if not, is there any plan to introduce this feature in the future?
There is no "breaking them up" as such, you can treat a midas bank as a char* array
and store arbitrary data inside. In this sense, "there is no need" for a special 64-bit bank type.
For endian-ness conversion (if such things still matter - big-endian PPC CPUs still exist), a single 64-bit
word converts the same as two 32-bit words, so here also "there is no need"; one can use banks of
DWORD with equal effect.
The above applies equally to 64-bit integers and 64-bit double-precision IEEE-754 floating point
numbers.
But specifically for 64-bit values, such as float64, there is a big gotcha.
The MIDAS bank structure goes to great lengths to make sure each data type is correctly aligned,
and gets it exactly wrong for 64-bit quantities - all because the bank header is three 32-bit words.
bankheader1
bh2
bh3
bankdata1 <--- misaligned
...
bankdataN
bh1
bh2
bh3
bankdata1 <--- aligned
... etc
So we could introduce QWORD banks today, but inside the midas file they will be misaligned, defeating
the only purpose of adding them.
I guess the misalignment could be cured by adding dummy words, dummy banks, dummy bank
headers, etc.
I figure this problem dates all the way back to when alignment to 16 bits was just becoming important.
Today, in the VME world, I have to align things on 128-bit boundaries (for 2eSST 2x2 DWORD transfers).
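To make the arithmetic concrete, a small illustration (hypothetical layout spelled out in code, not the actual MIDAS definitions):

#include <stdio.h>
#include <stdint.h>

/* hypothetical illustration of the offsets, not the actual MIDAS structures */
int main(void)
{
   uint32_t header = 3 * sizeof(uint32_t);  /* bank header: name, type, size = 12 bytes */
   uint32_t first_data = header;            /* first data word starts right after the header */
   printf("first data word at offset %u, 64-bit aligned: %s\n",
          first_data, (first_data % 8 == 0) ? "yes" : "no");
   return 0;
}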
So back to your question, what advantage do you see in using a QWORD bank instead of putting the
same data in a DWORD bank?
K.O. |
1155 | 19 Jan 2016 | Tom Stuttard | Suggestion | 64 bit bank type |
> > I've seen that a similar question has been asked in 2011 but I'll ask again in
> > case there are any updates. Is there any way to write 64-bit data words to MIDAS
> > banks (other than breaking them up in to two 32-bit words, such as 2 DWORDs)
> > currently? And if not, is there any plan to introduce this feature in the future?
>
> There is no "breaking them up" as such, you can treat a midas bank as a char* array
> and store arbitrary data inside. In this sense, "there is no need" for a special 64-bit bank type.
>
> For endian-ness conversion (if such things still matter, big-endian PPC CPUs still exist), single 64-bit
> word converts the same as two 32-bit words, so here also "there is no need", once can use banks of
> DWORD with equal effect.
>
> The above applies equally to 64-bit integers and 64-bit double-precision IEEE-754 floating point
> numbers.
>
> But specifically for 64-bit values, such as float64, there is a big gotcha.
>
> The MIDAS banks structure goes to great lengths to make sure each data type is correctly aligned,
> and gets it exactly wrong for 64-bit quantities - all because the bank header is three 32-bit words.
>
> bankhheader1
> bh2
> bh3
> bankdata1 <--- misaligned
> ...
> bankdataN
> bh1
> bh2
> bh3
> banddata1 <--- aligned
> ... etc
>
> So we could introduce QWORD banks today, but inside the midas file, they will be misaligned defeating
> the only purpose of adding them.
>
> I guess the misalignement could be cured by adding dummy words, dummy banks, dummy bank
> headers, etc.
>
> I figure this problem dates all the way bank where alignement to 16-bits was just getting important.
> Today, in the VME word, I have to align things on 128-bit boundaries (for 2eSST 2x2 DWORD transfers).
>
> So back to your question, what advantage do you see in using a QWORD bank instead of putting the
> same data in a DWORD bank?
>
> K.O.
Thanks very much for your reply. I have implemented your suggestion of treating the 64-bit array as a 32-bit
array for the bank write/read and this solution is working for me.
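In sketch form, inside the frontend readout routine this amounts to something like the following (hypothetical bank name and sample count; please check the exact bk_create() signature against your MIDAS version):

/* write 64-bit samples into an ordinary DWORD (32-bit) bank */
uint64_t samples[256];                     /* hypothetical 64-bit data from the digitizer */
DWORD *pdata;

bk_init32(pevent);
bk_create(pevent, "WF64", TID_DWORD, (void **)&pdata);
memcpy(pdata, samples, sizeof(samples));
pdata += 2 * 256;                          /* each 64-bit word occupies two DWORDs */
bk_close(pevent, pdata);
/* on the read side, cast the bank pointer back to uint64_t* (endian conversion is the same as for two DWORDs) */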
Thanks again for your help. |
1157 | 28 Jan 2016 | Konstantin Olchanski | Suggestion | script command limited to 256 characters; remove limit? |
Thank you for reporting this problem:
a) ODB key *names* are restricted to 31 characters (32 bytes, last byte is a NUL), not 256 characters.
b) ODB string length is unlimited (32-bit length field)
c) ODB C API "db_get_value" & co require fixed length buffer and most users of this API provide a 256-byte fixed buffer for strings, some of them also do not
check the status code, resulting in silent truncation. (I think the ODB functions themselves report truncation to midas.log, so not completely silent).
We try to fix this where we must - but it is cumbersome with the current ODB API - as in your fix, one has to:
- get the ODB key, extract size
- allocate buffer
- call db_get_value() & co
- use the data
- remember to free the buffer on each and every return path
The first three steps could become one if we had an ODB "get_data" function that automatically allocated the data buffer.
But the main source of bugs will be the last step - remember to free the buffer, always.
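For illustration, the current pattern looks roughly like this fragment (hypothetical ODB path; assumes midas.h is included and hDB is an open ODB handle):

HNDLE hkey;
KEY key;
int status, size;
char *buf;

status = db_find_key(hDB, 0, "/Script/mycommand", &hkey);   /* hypothetical key */
if (status != DB_SUCCESS)
   return status;
db_get_key(hDB, hkey, &key);              /* key.total_size is the current data size */
size = key.total_size;
buf = (char *)malloc(size);
status = db_get_data(hDB, hkey, buf, &size, TID_STRING);
if (status == DB_SUCCESS) {
   /* ... use buf ... */
}
free(buf);                                /* must happen on every return path */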
P.S.
We are not alone in pondering how to do this best. If you want to see it "done right",
read the fresh-off-the-presses book "Go Programming Language" by Alan Donovan and Brian Kernighan,
http://www.gopl.io/
Brian Kernighan is the "K" in K&R "C programming language", still around and kicking, now at Google.
Sadly the "R" passed away in 2011 - http://www.nytimes.com/2011/10/14/technology/dennis-ritchie-programming-trailblazer-dies-at-70.html
K.O.
> Both the /Script and /CustomScript trees in the ODB allow users to trigger a
> script via Midas - which silently truncates command strings longer than
> 256 characters.
>
> I'd prefer that Midas place no limit on string length. Failing that, it would be
> helpful to have character limits called out in the documentation
> (https://midas.triumf.ca/MidasWiki/index.php//Script_ODB_tree#.3Cscript-name.3E_key_or_subtree,
> https://midas.triumf.ca/MidasWiki/index.php//Customscript_ODB_tree).
>
> As far as I can tell, odb.c allows arbitrarily large strings in the ODB data.
> (Although key *names* are restricted to 256 characters.) I've submitted one
> possible version of an arbitrary-length exec_script() as a pull request
> (https://bitbucket.org/tmidas/midas/pull-requests/).
>
> Am I misunderstanding any critical pieces? Does Midas intentionally treat
> strings in the ODB as limited to 256 characters? |
1158 | 28 Jan 2016 | Amy Roberts | Suggestion | script command limited to 256 characters; remove limit? |
Using low-level memory allocation routines in higher-level programs like mhttpd makes me nervous.
We could use std::vector to allow variable-sized allocation, and use the data() member function to access the char* needed for functions like strlcat,
db_get_data, and db_sprintf.
This conforms to the C++ standard, but doesn't require explicit freeing by the user - at least, not when you're allocating std::vector<char>.
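As a sketch (same hypothetical ODB key as in the previous message; only the buffer handling changes):

// read an ODB string into a std::vector<char>; no manual free needed
HNDLE hkey;
KEY key;
db_find_key(hDB, 0, "/Script/mycommand", &hkey);   // hypothetical key
db_get_key(hDB, hkey, &key);

std::vector<char> buf(key.total_size);
int size = (int)buf.size();
db_get_data(hDB, hkey, buf.data(), &size, TID_STRING);
// buf.data() gives the char* for strlcat/db_sprintf-style calls;
// the memory is released automatically when buf goes out of scope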
Amy
> Thank you for reporting this problem:
>
> a) ODB key *names* are restricted to 31 characters (32 bytes, last byte is a NUL), not 256 characters.
> b) ODB string length is unlimited (32-bit length field)
> c) ODB C API "db_get_value" & co require fixed length buffer and most users of this API provide a 256-byte fixed buffer for strings, some of them also do not
> check the status code, resulting in silent truncation. (I think the ODB functions themselves report truncation to midas.log, so not completely silent).
>
> We try to fix this where we must - but it is cumbersome with the current ODB API - as in your fix on has to:
> - get the ODB key, extract size
> - allocate buffer
> - call db_get_value() & co
> - use the data
> - remember to free the buffer on each and every return path
>
> The first three steps could become one if we had an ODB "get_data" function that automatically allocated the data buffer.
>
> But the main source of bugs will be the last step - remember to free the buffer, always.
>
> P.S.
>
> We are not alone in pondering how to do this best. If you want to see it "done right",
> read the fresh-off-the-presses book "Go Programming Language" by Alan Donovan and Brian Kernighan,
> http://www.gopl.io/
>
> Brian Kernighan is the "K" in K&R "C programming language", still around and kicking, now at Google.
> Sadly the "R" passed away in 2011 - http://www.nytimes.com/2011/10/14/technology/dennis-ritchie-programming-trailblazer-dies-at-70.html
>
> K.O.
>
> > Both the /Script and /CustomScript trees in the ODB allow users to trigger a
> > script via Midas - which silently truncates command strings longer than
> > 256 characters.
> >
> > I'd prefer that Midas place no limit on string length. Failing that, it would be
> > helpful to have character limits called out in the documentation
> > (https://midas.triumf.ca/MidasWiki/index.php//Script_ODB_tree#.3Cscript-name.3E_key_or_subtree,
> > https://midas.triumf.ca/MidasWiki/index.php//Customscript_ODB_tree).
> >
> > As far as I can tell, odb.c allows arbitrarily large strings in the ODB data.
> > (Although key *names* are restricted to 256 characters.) I've submitted one
> > possible version of an arbitrary-length exec_script() as a pull request
> > (https://bitbucket.org/tmidas/midas/pull-requests/).
> >
> > Am I misunderstanding any critical pieces? Does Midas intentionally treat
> > strings in the ODB as limited to 256 characters? |
1159 | 05 Feb 2016 | Thomas Lindner | Suggestion | reducing sleep time in mhttpd main loop (for sequencer) |
There were some complaints that the MIDAS sequencer was slow. Specifically, the
complaint was that even lines in the sequence that didn't do anything (like COMMENT
commands) took > 100ms to execute. These slow sequencer steps could be a
little annoying if a script had to change a large number of ODB variables before
starting.
I tested this a little using a trivial sequence; note that I did all tests using
mhttpd with mongoose enabled on a newer macbook pro. I found that with the
mongoose server each line in a sequencer script was taking ~100ms. This is
consistent with the loop in the main thread, which is only doing a cm_yield and
a sleep:
while (!_abort) {
   status = ss_mutex_wait_for(request_mutex, 0);
   status = cm_yield(0);
   if (status == RPC_SHUTDOWN)
      break;
   sequencer();
   status = ss_mutex_release(request_mutex);
   ss_sleep(100);
}
I tested reducing the sleep to 20ms. As expected, this made the sequencer more
zippy, able to execute ~50 commands per second.
I tried to think what would be downsides to making this change. I think that
the main web communication should not be affected, because that communication is
all handled by the separate mongoose thread.
I checked how much extra CPU was used if the sleep was reduced from 100ms to
20ms. I found that when a sequence was not running the CPU increased from 0% to
0.2% with my change. When a sequence was running the CPU increased from 0.8% to
4% with my change. 4% is a little high, though I'd say still reasonable. I
found that most of the CPU usage was occurring because every call to
'sequencer()' resulted in a call to db_set_record("/Sequencer/State"...). I
guess that making that call ~50 times per second causes the somewhat heavy CPU usage.
I would argue that it would still be worth making that change, so that the
sequencer can be more zippy. |
1160 | 05 Feb 2016 | Thomas Lindner | Suggestion | reducing sleep time in mhttpd main loop (for sequencer) |
> There were some complaints that the MIDAS sequencer was slow. Specifically, the
> complaint was that even lines in the sequence that didn't do any (like COMMENT
> commands) tooks > 100ms to execute. These slow sequencer steps could be a
> little annoying if a script had to change a large number of ODB variables before
> starting.
> ...
> I checked how much extra CPU was used if the sleep was reduced from 100ms to
> 20ms. I found that when a sequence was not running the CPU increased from 0% to
> 0.2% with my change. When a sequence was running the CPU increased from 0.8% to
> 4% with my change. 4% is a little high, though I'd say still reasonable. I
> found that most of the CPU usage was occuring because every call to
> 'sequencer()' resulted in a call to db_set_record("/Sequencer/State"...). I
> guess that making that call 50 times causes the somewhat heavy CPU usage.
One additional point: I think that it would be reasonably simple to reduce this CPU
usage even while a sequence was going on. I would guess that for many sequences a
lot of time is spent in a 'WAIT SECONDS' command, since you would presumably want
to wait while data is being taken or conditions are stabilizing. I think that if you
are in a 'WAIT SECONDS' command that hasn't been satisfied then there probably isn't
any reason to do the db_set_record at the end of the sequencer() method. |
1161 | 06 Feb 2016 | Stefan Ritt | Suggestion | reducing sleep time in mhttpd main loop (for sequencer) |
> There were some complaints that the MIDAS sequencer was slow. Specifically, the
> complaint was that even lines in the sequence that didn't do any (like COMMENT
> commands) tooks > 100ms to execute. These slow sequencer steps could be a
> little annoying if a script had to change a large number of ODB variables before
> starting.
>
> I tested this a little using a trivial sequence; note that I did all tests using
> mhttpd with mongoose enabled on a newer macbook pro. I found that with the
> mongoose server each line in a sequencer script was taking ~100ms. This is
> consistent with the loop in the main thread, which is only doing a cm_yield and
> a sleep:
>
> while (!_abort) {
> status = ss_mutex_wait_for(request_mutex, 0);
> status = cm_yield(0);
> if (status == RPC_SHUTDOWN)
> break;
> sequencer();
> status = ss_mutex_release(request_mutex);
> ss_sleep(100);
> }
>
> I tested reducing the sleep to 20ms. As expected, this made the sequencer more
> zippy, able to execute ~50 commands per second.
>
> I tried to think what would be downsides to making this change. I think that
> the main web communication should not be affected, because that communication is
> all handled by the separate mongoose thread.
>
> I checked how much extra CPU was used if the sleep was reduced from 100ms to
> 20ms. I found that when a sequence was not running the CPU increased from 0% to
> 0.2% with my change. When a sequence was running the CPU increased from 0.8% to
> 4% with my change. 4% is a little high, though I'd say still reasonable. I
> found that most of the CPU usage was occuring because every call to
> 'sequencer()' resulted in a call to db_set_record("/Sequencer/State"...). I
> guess that making that call 50 times causes the somewhat heavy CPU usage.
>
> I would argue that it would still be worth making that change, so that the
> sequencer can be more zippy.
The minimal time slice on most systems is 10 ms, and nothing prevents us from switching to
that. The original 100 ms was more for the fact that you can see the sequencer statements
executed one after the other (with the color bar). But this is more a "debugging" feature which
we don't really need.
To do it "right" the sequencer would have to _return_ a sleep time. Like if it is in a wait loop (as
most of the time), the sleep time could be close to 1 second, to correctly update the wait
progress bar. If the sequencer executes ODB set statements, the wait time could be zero, so
thousands of statements can be executed in one second. The problem we will then have of course
that the sequencer will block the "request_mutex" almost always, which would prevent the
mongoose server from serving anything. So this should be carefully tested. It could be (on most OS)
that releasing the mutex by the main loop immediately switches to the mongoose thread, which would
make the web server still quite responsive, but I'm not sure about that. So as a first change making
the sleep time 10ms should be fine.
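A rough sketch of that idea (hypothetical - sequencer() does not currently return a sleep time):

// main loop where sequencer() would return how long it may sleep
while (!_abort) {
   status = ss_mutex_wait_for(request_mutex, 0);
   status = cm_yield(0);
   if (status == RPC_SHUTDOWN)
      break;
   int sleep_ms = sequencer();   // ~1000 while waiting, 0 while executing ODB set statements
   status = ss_mutex_release(request_mutex);
   ss_sleep(sleep_ms);
}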
Stefan |
1162 | 15 Feb 2016 | Thomas Lindner | Suggestion | reducing sleep time in mhttpd main loop (for sequencer) |
> > I checked how much extra CPU was used if the sleep was reduced from 100ms to
> > 20ms. I found that when a sequence was not running the CPU increased from 0% to
> > 0.2% with my change. When a sequence was running the CPU increased from 0.8% to
> > 4% with my change. 4% is a little high, though I'd say still reasonable. I
> > found that most of the CPU usage was occuring because every call to
> > 'sequencer()' resulted in a call to db_set_record("/Sequencer/State"...). I
> > guess that making that call 50 times causes the somewhat heavy CPU usage.
> >
> > I would argue that it would still be worth making that change, so that the
> > sequencer can be more zippy.
>
> The minimal time slice on most systems is 10 ms, and nothing prevents us from switching to
> that. The original 100 ms was more for the fact that you can see the sequencer statements
> executed one after the other (with the color bar). But this is more a "debugging" feature which
> we not really need.
OK, I made this change; sleep is now 10ms on main thread. Seems to work fine on SL6 and MacOS.
> To do it "right" the sequencer would have to _return_ a sleep time. Like if it is in a wait loop (as
> most of the time), the sleep time could be close to 1 second, to correctly update the wait
> progress bar. If the sequencer executes ODB set statements, the wait time could be zero, so
> thousands of statements can be executed in one second. The problem we will then have of course
> that the sequencer will block the "request_mutex" almost always, which would prevent the
> mongoose server from serving anything. So this should be carefully tested. It could be (on most OS)
> that releasing the mutex by the main loop immediately switches to the mongoose thread, which would
> make the web server still quite responsive, but I'm not sure about that. So as a first change making
> the sleep time 10ms should be fine.
Hmm, yeah, I'm not sure about how to handle reducing the wait time to zero after ODB set commands.
But it does seem like it would be straight-forward to increase the sleep time for waits; I'll look into
a clean way of doing that. |
1163 | 15 Feb 2016 | Stefan Ritt | Suggestion | reducing sleep time in mhttpd main loop (for sequencer) |
> Hmm, yeah, I'm not sure about how to handle reducing the wait time to zero after ODB set commands.
>
> But it does seem like it would be straight-forward to increase the sleep time for waits; I'll look into
> a clean way of doing that.
Let's see how your 10 ms works in real life. If we need variable wait times, I can implement this for you without much effort.
Stefan |
1166 | 26 Feb 2016 | Konstantin Olchanski | Suggestion | script command limited to 256 characters; remove limit? |
> Using low-level memory allocation routines in higher-level programs like mhttpd makes me nervous.
It should not, people have used malloc() for decades now without much injury to themselves. (Thomas corrects me: some people had big injury to their pride, me included).
> We could use vector arrays to allow variable-sized allocation, and use the data() member function to access the char* needed for functions like strlcat,
> db_get_data, and db_sprintf.
I thought auto_ptr was the correct tool to allocate "I just need a few bytes for a few minutes" arrays, but there is some discrepancy
between delete and delete[] (with brackets) and auto_ptr p(new char[i]) is verboten (even though it compiles just fine).
I ended up writing a custom replacement for auto_ptr called auto_string - now in mhttpd.cxx available for use in other places like this.
Still I think a db_get_data() that returns allocated memory is the correct solution. But this memory still needs to be released and lacking auto_ptr it opens the door for memory leaks.
> This conforms to the c++ standard, but doesn't require explicit freeing by the user - at least, not when you're allocating std::vector<char>
I do not think std::vector<char> can be cast into "char*" and used as replacement of "char str[100]" or "char* str = malloc(i);"
In other news, the limit on the command length is now removed.
K.O.
>
> Amy
>
> > Thank you for reporting this problem:
> >
> > a) ODB key *names* are restricted to 31 characters (32 bytes, last byte is a NUL), not 256 characters.
> > b) ODB string length is unlimited (32-bit length field)
> > c) ODB C API "db_get_value" & co require fixed length buffer and most users of this API provide a 256-byte fixed buffer for strings, some of them also do not
> > check the status code, resulting in silent truncation. (I think the ODB functions themselves report truncation to midas.log, so not completely silent).
> >
> > We try to fix this where we must - but it is cumbersome with the current ODB API - as in your fix on has to:
> > - get the ODB key, extract size
> > - allocate buffer
> > - call db_get_value() & co
> > - use the data
> > - remember to free the buffer on each and every return path
> >
> > The first three steps could become one if we had an ODB "get_data" function that automatically allocated the data buffer.
> >
> > But the main source of bugs will be the last step - remember to free the buffer, always.
> >
> > P.S.
> >
> > We are not alone in pondering how to do this best. If you want to see it "done right",
> > read the fresh-off-the-presses book "Go Programming Language" by Alan Donovan and Brian Kernighan,
> > http://www.gopl.io/
> >
> > Brian Kernighan is the "K" in K&R "C programming language", still around and kicking, now at Google.
> > Sadly the "R" passed away in 2011 - http://www.nytimes.com/2011/10/14/technology/dennis-ritchie-programming-trailblazer-dies-at-70.html
> >
> > K.O.
> >
> > > Both the /Script and /CustomScript trees in the ODB allow users to trigger a
> > > script via Midas - which silently truncates command strings longer than
> > > 256 characters.
> > >
> > > I'd prefer that Midas place no limit on string length. Failing that, it would be
> > > helpful to have character limits called out in the documentation
> > > (https://midas.triumf.ca/MidasWiki/index.php//Script_ODB_tree#.3Cscript-name.3E_key_or_subtree,
> > > https://midas.triumf.ca/MidasWiki/index.php//Customscript_ODB_tree).
> > >
> > > As far as I can tell, odb.c allows arbitrarily large strings in the ODB data.
> > > (Although key *names* are restricted to 256 characters.) I've submitted one
> > > possible version of an arbitrary-length exec_script() as a pull request
> > > (https://bitbucket.org/tmidas/midas/pull-requests/).
> > >
> > > Am I misunderstanding any critical pieces? Does Midas intentionally treat
> > > strings in the ODB as limited to 256 characters? |
1183 | 06 Jul 2016 | Zhe Wang | Suggestion | Frontend crash on high event rate |
Dear friends,
We have some questions on using midas.
We use a CAEN digitizer V1751 to take waveforms.
When testing with the CAEN-provided programs, we know it can work fine at roughly a 1000 Hz event rate, with about 30 MB/s of data written to disk.
The test with Midas, however, is a little confusing. We use the CAENDigitizer library with Midas. At first it works: data were taken, and there seems to be no error.
The only problem is we cannot go to a higher event rate; for example, we can only run at a rate of 40 Hz, recording only about 3 MB/s of data. Otherwise it will crash.
We may be missing something really simple. Would you please give some suggestions, for example other people's discussions or documents?
Thank you very much. |
1184 | 09 Jul 2016 | Zhe Wang | Suggestion | Frontend crash on high event rate |
Dear friends,
I may add a little more information.
For the polling routine, we check the data-ready register for the status of the digitizer.
In the readout routine, we create a bank, read out the data and write it out.
We commented out or replaced each part of these subroutines to figure out where exactly it goes wrong,
for example replacing the readout from the digitizer with random generation of some fake events.
With the readout replaced by random generation, the program runs fine and reaches a very high event rate.
Any suggestions or ideas from experts?
Thank you very much.
--
Best regards,
Zhe Wang
> Dear friends,
>
> We have some questions on using midas.
> We use a Caen digitizer V1751 to take waveforms.
> When testing with caen provided programs, we roughly know it can work fine at 1000 Hz event rate, and 30 M/s data can be written to disk.
> The test with Midas, however, is a little confusing. We use CAENDigitizer library with Midas. First, it works, data were taken, and there seems no error.
> The only problem is we cannot go to a higher event rate, for example we can only work on a rate of 40 Hz, and only 3 M/s data recording. Otherwise it will crush.
>
> We may miss something really simple. Would you please give some suggestions? for example, other people's discussions or documents?
>
> Thank you very much. |