ID | Date | Author | Topic | Subject |
2380 | 31 Mar 2022 | Stefan Ritt | Suggestion | Maximum ODB size |
Does anybody have an idea what the maximum ODB size can be? In the old days, the Linux
kernel had a severe limit on shared memory of usually 8MB, but in the age of
64GB RAM being a standard, we should be able to grow bigger. Tried
odbinit -s 1024MB --cleanup
which went through without complaint and even put that value into .ODB_SIZE.TXT, but
when I started odbedit and did "mem", I only saw a size of 1MB. Probably somewhere
deep inside we have a limit which prevents the user from creating very large ODBs,
but this should be mentioned more prominently in odbinit, like "size too large,
maximum allowed is xxx MB".
Stefan |
2381 | 04 Apr 2022 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> Does anybody have an idea what the maximum ODB size can be?
It turns out the ODB size limit is hardwired in db_open_database() at 100 Mbytes.
I now committed an improved error message for this.
I confirm that "odbinit -s 100MB" works and creates ODB with 50 Mbyte data area and 50
Mbyte key area.
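Roughly, the check looks like this (a sketch, not the actual MIDAS source; the odb_size_limit name is taken from later in this thread, and the 50/50 key/data split follows the odbinit behaviour described above):

#include <cstdio>

static int odb_size_limit = 100 * 1000 * 1000;   // the current hardwired 100 Mbyte limit

int check_requested_odb_size(int requested_size)
{
   if (requested_size > odb_size_limit) {
      printf("ODB size %d bytes is too large, maximum allowed is %d bytes\n",
             requested_size, odb_size_limit);
      return -1;                                  // in MIDAS this would be a DB_* status code
   }
   printf("ODB will have a %d byte key area and a %d byte data area\n",
          requested_size / 2, requested_size / 2);
   return 0;
}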
> in the age of 64GB RAM being a standard, we should be able to grow bigger ...
I agree, I think we can safely bump the limit from 100 Mbytes to 1 Gbyte, maybe 1.5 or
1.99 Gbytes. Above that we run into 32-bit/31-bit cleanliness problems.
And creating an extra-large 1 GB ODB but using only a few megabytes will not waste any
RAM, because the .ODB.SHM file is demand-paged and unused parts of ODB will not be
mapped into RAM. (It will waste disk space: file .ODB.SHM will be 1 GByte in size.)
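To illustrate the demand-paging point, here is a small Linux-only sketch (not MIDAS code): it maps a 1 GB file like .ODB.SHM, touches only the first few MB, and the resident memory stays small:

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
   const size_t size = 1024UL * 1024 * 1024;            // 1 GB, like a large .ODB.SHM
   int fd = open("big.shm", O_RDWR | O_CREAT, 0644);
   ftruncate(fd, size);                                 // file stays sparse until written
   char* p = (char*) mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
   if (p == MAP_FAILED)
      return 1;
   for (size_t i = 0; i < 4 * 1024 * 1024; i += 4096)   // touch only the first 4 MB of pages
      p[i] = 1;
   printf("mapped 1 GB, touched 4 MB; check VmRSS in /proc/%d/status\n", (int) getpid());
   pause();                                             // keep the process alive for inspection
   return 0;
}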
However, 1 GByte (FPGA based) and 4-8 GByte (Raspberry Pi & co) machines are again
becoming popular and relevant for running MIDAS, and they have very slow "disk"
subsystems, with NAND, SD and USB flash, so we should not go crazy here.
> odbinit -s 1024MB --cleanup
there is a bug in odbinit: if the initial odbinit fails, an ODB with the default size is created,
and the original rejected ODB size is written to .ODB_SIZE.TXT (an inconsistency).
bitbucket bug 328
> [ how do I resize ODB ??? ]
we need odbresize. bitbucket bug 329.
K.O. |
2476 | 27 Apr 2023 | Marius Koeppel | Suggestion | Maximum ODB size |
Hi all,
> I agree, I think we can safely bump the limit from 100 Mbytes to 1 Gbyte, maybe 1.5 or
> 1.99 Gbytes. Above that we run into 32-bit/31-bit cleanliness problems.
We just went in and changed: int odb_size_limit = INT_MAX;//100*1000*1000; in odb.cxx. And we could create ODBs with 1GB and 1.5 GB.
Since the DecodeSize function in odbinit already foresees yottabytes ;) (const char units[] = {'k', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y'};), we think going to GB for the maximum ODB size would be great.
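For reference, a DecodeSize-style parser can be sketched from just that units array (this is a sketch, not the actual odbinit code; the trailing 'B' of e.g. "1024MB" is ignored for brevity):

#include <cstdint>
#include <cstdlib>

int64_t decode_size(const char* str)
{
   const char units[] = {'k', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y'};
   char* end = nullptr;
   int64_t value = strtoll(str, &end, 10);
   for (size_t i = 0; i < sizeof(units); i++) {
      if (*end == units[i]) {
         for (size_t j = 0; j <= i; j++) {
            if (value > INT64_MAX / 1024)
               return -1;              // overflow: only relevant for 'Z' and 'Y' - nobody needs a yottabyte ODB
            value *= 1024;
         }
         return value;
      }
   }
   return value;                       // no recognized suffix: plain bytes
}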
> there is a bug in odbinit: if the initial odbinit fails, an ODB with the default size is created,
> and the original rejected ODB size is written to .ODB_SIZE.TXT (an inconsistency).
Can't we go with the maximum size here if the user inputs a larger size? So just below printf("Checking ODB size...\n"); one could check against the odb_size_limit. In general one could move odb_size_limit to midas.h so it's not only available in odb.cxx.
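A sketch of that check (the variable names are assumptions, not the actual odbinit code; odb_size_limit would have to be declared in midas.h as suggested):

#include <cstdio>

extern int odb_size_limit;   // today defined in odb.cxx; the suggestion is to declare it in midas.h

int check_odb_size(int requested_size)
{
   printf("Checking ODB size...\n");
   if (requested_size > odb_size_limit) {
      printf("Requested ODB size %d is larger than the maximum %d, using the maximum instead\n",
             requested_size, odb_size_limit);
      return odb_size_limit;   // alternatively: refuse here, so the rejected size never ends up in .ODB_SIZE.TXT
   }
   return requested_size;
}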
Best,
Marius |
2477 | 27 Apr 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > I agree, I think we can safely bump the limit from 100 Mbytes to 1 Gbyte, maybe 1.5 or
> > 1.99 Gbytes. Above that we run into 32-bit/31-bit cleanliness problems.
>
> We just went in and changed: int odb_size_limit = INT_MAX;//100*1000*1000; in odb.cxx.
>
This change is wrong. As I wrote, ODB is not 64-bit clean and it is not 32-bit clean. We think it is 31-bit clean, so the maximum size would be slightly less than 2 Gbytes.
> And we could create ODBs with 1GB and 1.5 GB.
Congratulations. created != "it works". For a proper test, you should fill it with 1.5 GB of stuff, save it to a json file, reload it from the json file, save it to a different json file and compare that they have the same contents (minus timestamps).
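A rough sketch of such a fill test using the standard midas C API (untested; the key layout and sizes are made up for illustration, and the experiment must already exist):

#include "midas.h"
#include <vector>
#include <cstdio>

int main()
{
   if (cm_connect_experiment("", "", "odb_fill_test", NULL) != CM_SUCCESS)
      return 1;
   HNDLE hDB;
   cm_get_experiment_database(&hDB, NULL);

   std::vector<int> block(250000, 42);              // 1 MB of data per key
   for (int i = 0; i < 1500; i++) {                 // ~1.5 GB of data in total
      char path[256];
      sprintf(path, "/Test/Fill/block%04d", i);
      db_set_value(hDB, 0, path, block.data(), (int)(block.size() * sizeof(int)),
                   (int)block.size(), TID_INT);
   }
   cm_disconnect_experiment();

   // Then from the shell, roughly (file names are placeholders):
   //   odbedit -c "save dump1.json"
   //   odbedit -c "load dump1.json"
   //   odbedit -c "save dump2.json"
   //   diff dump1.json dump2.json      # ignoring timestamp fields
   return 0;
}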
We could spend a lot of time making odb 32-bit clean and give you 4GB-max ODB, but would it be useful? For large ODB, "save to .json" already takes a long time ("save to .xml" is slower, "save to .odb" ditto, also buggy). We already have complaints that runs take forever to start because mlogger
takes a long time to write the ODB save file.
P.S. 64-bit clean ODB will be binary incompatible, all internal pointers are 32-bit right now.
K.O. |
2480 | 27 Apr 2023 | Marius Koeppel | Suggestion | Maximum ODB size |
> This change is wrong. As I wrote, ODB is not 64-bit clean and it is not 32-bit clean. We think it is 31-bit clean, so the maximum size would be slightly less than 2 Gbytes.
I just wanted to show that changing it and creating bigger ODBs is in general possible.
My main intention was to trigger the discussion again. I also think in general 1GB is enough. But for our applications sometimes 100MB is just on the edge.
> Congratulations. created != "it works". For a proper test, you should fill it with 1.5 GB of stuff, save it to a json file, reload it from the json file, save it to a different json file and compare that they have the same contents (minus timestamps).
You’re right we did not properly test it. I will run this test with a 1GB ODB.
> We could spend a lot of time making odb 32-bit clean and give you 4GB-max ODB, but would it be useful? For large ODB, "save to .json" already takes a long time ("save to .xml" is slower, "save to .odb" ditto, also buggy). We already have complaints that runs take forever to start because mlogger
> takes a long time to write the ODB save file.
I also agree that going in and making it 32-bit or even 64-bit clean is not worth the effort.
Also concerning the writing speed of the logger etc I am fully with you.
However, having the freedom to choose a bit bigger ODB would be great.
You said the writing into .odb is buggy. Do you mean it’s buggy in general or only in this specific case?
We save the ODB most of the time in the .odb format.
Cheers,
Marius |
2481 | 27 Apr 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> You said the writing into .odb is buggy. Do you mean it’s buggy in general or only in this specific case?
> We save the ODB most of the time in the .odb format.
I recommend JSON. The main advantage is that you can read it using a JSON decoder available for any language, no need to write custom code.
Other than that, the main issue is the encoding of strings. For ODB this means key names and string values.
JSON was the first to standardize escape characters that can encode all valid UTF-8 UNICODE strings,
the system of escape characters is clean, easy to understand and easy to implement. https://www.json.org/json-en.html
XML is not as well defined as JSON, e.g. go and try to find the XML BNF grammar. I am not sure if the MIDAS XML encoder
and decoder are fully UTF-8 clean, and whether some unlucky combinations of characters break string encoding or decoding. This
is usually tested using a fuzzer (which generates all possible, unlucky and unlikely string values). Most suspicious
would be quotes, and square and angle brackets. If some character combinations break encoding or decoding, likely
this cannot be fixed in MIDAS without breaking backwards self-compatibility (will not read old ODB files correctly).
Same applies for the ODB format, except that it is even more ad-hoc. Again, any problems are hard to fix without
breaking backward self-compatibility.
In addition, in the past the ODB and XML decoders had trouble with very long strings; this was
fixed some time ago.
K.O. |
2482 | 27 Apr 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
My vote is to bump the ODB size limit to 1999*1000*1000 (not quite 2GB). But this needs to be tested, especially save and restore from ODB, XML and JSON files, including how long it takes to save and load a 1.9GB ODB. K.O. |
2483 | 27 Apr 2023 | Stefan Ritt | Suggestion | Maximum ODB size |
> Congratulations. created != "it works".
Two other things to consider:
1) The ODB shared memory is dumped into a binary file (".ODB.SHM") after the last client finishes and read when the first client starts, to make it persistent.
So this could slow down starting and stopping, but only for the first client, so I guess it's not an issue.
2) Traditionally, the ODB gets dumped to the .mid file at the beginning and end of every run, so that one knows the exact configuration of the experiment
for offline analysis. This can be turned off of course, but most experiments use it. If the ODB is dumped in any ASCII format, this can take quite long.
Assume it takes 10 seconds at the beginning of each run, and we take a run every five minutes. Then we lose 48 mins of precious beam time every day (288 runs/day x 10 s = 48 min).
Best,
Stefan |
2485 | 28 Apr 2023 | Marius Koeppel | Suggestion | Maximum ODB size |
> My vote is to bump the ODB size limit to 1999*1000*1000 (not quite 2GB). But this needs to be tested, especially save and restore from ODB, XML and JSON files, including how long it takes to save and load a 1.9GB ODB. K.O.
I had some fun with Python and created a test script which can be executed in the MIDASSYS/online folder (test_odb.py). I did not really normalize the time so it will differ between systems, but I guess the trend is what matters (see create_time.pdf).
What is surprising to me is that even though I only write one STRING key, the time increases. Is this maybe related to what Stefan said about the run start - so that odbedit needs some time to load the bigger ODB?
The second thing is that the creation / storing and load times also increase. Should this be the case, is there a bug in the code I use, or is this again related to the previous point?
The test of comparing the ODB after store / load / store already fails for the json format. I know I only test whether the dicts are the same, so it already fails on timestamps.
But what is strange here is that sometimes the test works and sometimes not, and it's different from run to run.
I will try to improve the test a bit more, but as a short update this is how it looks so far.
Best,
Marius |
2486 | 28 Apr 2023 | Stefan Ritt | Suggestion | Maximum ODB size |
> Is this maybe related to what Stefan said about the run start - so that odbedit needs some time to load the bigger ODB?
At the run start mlogger writes the ODB to the .mid file. This needs conversion (binary ODB -> XML ASCII) which can take time.
This does NOT depend on the ODB size, but on the ODB *content*. Every key in the ODB takes time to convert. So if your ODB is 1.5 GB
but has only a few keys, this is still fast. Only if you have 200 million keys in the ODB does mlogger take lots of time to convert
200 million values to XML or JSON strings.
Stefan |
2487 | 28 Apr 2023 | Marius Koeppel | Suggestion | Maximum ODB size |
> At the run start mlogger writes the ODB to the .mid file. This needs conversion (binary ODB -> XML ASCII) which can take time.
> This does NOT depend on the ODB size, but on the ODB *content*. Every key in the ODB takes time to convert. So if your ODB is 1.5 GB
> but has only a few keys, this is still fast. Only if you have 200 million keys in the ODB does mlogger take lots of time to convert
> 200 million values to XML or JSON strings.
This was also my assumption. Is this the same for odbedit -c save FILE?
Because this is what I tested with the script, and one can see in the plot that the time to write the file increases as the ODB size increases.
The content of the ODB is always the same - one STRING key in the directory Test.
Best,
Marius |
2488 | 28 Apr 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > Congratulations. created != "it works".
>
> Two other things to consider:
>
> 1) The ODB shared memory is dumped into a binary file (".ODB.SHM") after the last client finishes and read when the first client starts, to make it persistent.
> So this could slow down starting and stopping, but only for the first client, so I guess it's not an issue.
>
Typical disk writing speed is 100-1000 Mbytes/sec, so writing a 1 GB .ODB.SHM will take 1-10 seconds. NFS over a 1gige network is 100 Mbytes/sec, so 10 seconds to
write .ODB.SHM. Embedded ARM write speed to SD flash can be as low as 10 Mbytes/sec, so up to 100 seconds.
>
> 2) Traditionally, the ODB gets dumped to the .mid file at the beginning and end of every run, so that one knows the exact configuration of the experiment
> for offline analysis. This can be turned off of course, but most experiments use it. If the ODB is dumped in any ASCII format, this can take quite long.
> Assume it takes 10 seconds at the beginning of each run, and we take a run every five minutes. Then we lose 48 mins of precious beam time every day.
>
The new default is to save as JSON; (as of my last measurement) the JSON encoder is faster than the XML (and ODB?) encoder; by default the result is compressed by GZIP-1 (66
Mbytes/sec is my old benchmark, should remeasure on new DDR5 machines), and the compressed JSON is written to the .mid.gz file at disk speed (as above). Alternatively, use LZ4
compression, which runs roughly at memcpy() speed, gives less compression, and is written to .mid.lz4 at disk speed.
If data storage is ZFS, ZFS's built-in LZ4 compression is now enabled by default, so the result of writing an uncompressed .mid file (no compression of the ODB dump) should be
roughly the same as when using MIDAS LZ4 compression and writing .mid.lz4.
Bottom line, I need to remeasure gzip and lz4 compression speeds on new computers (DDR4 AMD 5000 series and DDR5 AMD 7000 series).
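Something like this would do for a quick measurement (a standalone sketch, not MIDAS code, using zlib level 1 as a stand-in for GZIP-1 plus LZ4; link with -lz -llz4; the dummy input is far more compressible than a real ODB dump):

#include <zlib.h>
#include <lz4.h>
#include <chrono>
#include <vector>
#include <cstdio>

static double mbps(size_t bytes, std::chrono::steady_clock::duration dt)
{
   return bytes / std::chrono::duration<double>(dt).count() / 1e6;
}

int main()
{
   std::vector<char> in(125 * 1000 * 1000, 'x');             // ~125 MB of highly compressible dummy data
   std::vector<char> out(compressBound(in.size()));
   std::vector<char> out2(LZ4_compressBound((int)in.size()));

   auto t0 = std::chrono::steady_clock::now();
   uLongf zlen = out.size();
   compress2((Bytef*)out.data(), &zlen, (const Bytef*)in.data(), in.size(), 1);  // zlib level 1
   auto t1 = std::chrono::steady_clock::now();
   int llen = LZ4_compress_default(in.data(), out2.data(), (int)in.size(), (int)out2.size());
   auto t2 = std::chrono::steady_clock::now();

   printf("zlib level 1: %.0f MB/s (ratio %.2f)\n", mbps(in.size(), t1 - t0), (double)in.size() / zlen);
   printf("lz4 default : %.0f MB/s (ratio %.2f)\n", mbps(in.size(), t2 - t1), (double)in.size() / llen);
   return 0;
}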
K.O. |
2489 | 28 Apr 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > Is this maybe related to what Stefan said about the run start - so that odbedit needs some time to load the bigger ODB?
>
> At the run start mlogger writes the ODB to the .mid file. This needs conversion (binary ODB -> XML ASCII) which can take time.
> This does NOT depend on the ODB size, but on the ODB *content*.
>
Yes and no. They must be storing more than 100 Mbytes of stuff in ODB if they are asking to bump the ODB size from 100 Mbytes to 2 GBytes.
On the MIDAS side, though, we have to plan for the worst case: if the max ODB size is 1.9 GB and it is full of data,
and mlogger (and odbedit save and load) take 10-30 seconds, then at least all timeouts (watchdog timeout, RPC timeout, etc.)
must be increased accordingly.
K.O. |
2525 | 09 Jun 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > 1) The ODB shared memory is dumped into a binary file (".ODB.SHM") after the last client finishes ...
Correction: ODB shared memory is saved to .ODB.SHM each time a client stops; this happens in db_close_database().
I have just run into a problem with this in the DRAGON experiment. At begin and end of run they run
a script that does a large number of "odbedit" calls to read stuff from ODB, and it was taking a very long time.
Each odbedit invocation was taking about 1 second: starting odbedit is quick, stopping odbedit takes about 1 second.
It turns out each invocation of odbedit saves .ODB.SHM; the ODB was 100 Mbytes in size, the home disk is an HDD (~100-200 Mbytes/sec writing speed), so yes, about 1 second to
stop odbedit.
The solution was to reduce the ODB size from 100 Mbytes to 10 Mbytes; odbedit now runs quickly, and the begin and end of run scripts run quickly. Problem solved.
K.O.
P.S. no, I am not the dragon experiment, no, I did not write those scripts, no, I will not rewrite them, persons who wrote them are long gone, no, the persons running
dragon today will not be rewriting them. |
2527 | 12 Jun 2023 | Stefan Ritt | Suggestion | Maximum ODB size |
> Correction: ODB shared memory is saved to .ODB.SHM each time a client stops; this happens in db_close_database().
The original design of the midas shared memory (back in the 1990's) was that the ODB shared memory file gets
saved into .ODB.SHM only when the *last* client exits. This ensures the ODB stays persistent when the
shared memory gets deleted. I vaguely remember I put in something like:
db_close_database()
   ...
   destroy_flag = (pheader->num_clients == 0);
   if (destroy_flag)
      ss_shm_flush(pheader->name, pdb->shm_adr, pdb->shm_size, pdb->shm_handle);
   ...
Now I see that the "if (destroy_flag)" is missing. Not sure if it was removed at some point, or if it actually never
was there. But I see no point in flushing the ODB when a client ends. We need the flushing only before the
shared memory gets deleted. If we have to ensure that the shared memory and the binary dump file stay in sync
(like if all midas clients die at the same time), we could add some code to flush the ODB, say, once per minute,
but not attach it to db_close_database(). I know several experiments using "odbedit -c xxx" in vast quantities,
so all these experiments would then benefit.
Note: Mu3e at PSI also uses 100 MB ODB, and they really need it.
Thoughts and opinions?
Best,
Stefan |
2528 | 12 Jun 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > Correction: ODB shared memory is saved to .ODB.SHM each time a client stops; this happens in db_close_database().
>
> The original design of the midas shared memory (back in the 1990's) was that the ODB shared memory file gets
> saved into .ODB.SHM only when the *last* client exits. This ensures the ODB stays persistent when the
> shared memory gets deleted. I vaguely remember I put in something like:
>
> db_close_database()
> ...
> destroy_flag = (pheader->num_clients == 0);
>
> if (destroy_flag)
> ss_shm_flush(pheader->name, pdb->shm_adr, pdb->shm_size, pdb->shm_handle);
I remember the same, but I tracked it down in git to the very first commit, and there is no if() there:
odb is saved to .ODB.SHM on every client shutdown, not just for the last client. I guess we both misremembered.
What's more, ss_shm_flush() is done while holding the ODB semaphore, so all other midas programs that try to access
odb at the same time (including the mserver) will stall until write() and close() return. At least we do not fsync(),
so there is no waiting until the data is committed to physical media.
$ git annotate 3bb04af4d^ src/odb.c
...
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 875) destroy_flag = (pheader->num_clients == 0);
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 876)
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 877) /* flush shared memory to disk */
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 878) ss_flush_shm(pheader->name, pheader, sizeof(DATABASE_HEADER)+2*pheader->data_size);
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 879)
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 880) /* unmap shared memory, delete it if we are the last */
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 881) ss_close_shm(pheader->name, pheader,
ef8320177 (Stefan Ritt 1998-10-08 13:46:02 +0000 882) _database[hDB-1].shm_handle, destroy_flag);
...
K.O. |
2529 | 13 Jun 2023 | Stefan Ritt | Suggestion | Maximum ODB size |
> I remember the same, but I tracked it down in git to the very first commit, and there is no if() there:
> odb is saved to .ODB.SHM on every client shutdown, not just for the last client. I guess we both misremembered.
I confirm. Really strange how your mind can trick you. I'm absolutely sure I had this planned originally (1995?), but it never got implemented.
Well, never too late. So I added the "if" and committed to develop. I did a quick test and things seem to work fine here. Actually, programs stop
a bit faster now. So please everybody give it a try and report back here.
BTW, how do I resize the ODB? I remember we discussed this some time ago, and concluded that odbedit needs a resize flag. Has this even been
done? If not, what is the "official" way to resize the ODB? We had some documentation about that some time ago, but I can't find it anymore.
Stefan |
2530 | 13 Jun 2023 | Marius Koeppel | Suggestion | Maximum ODB size |
> BTW, how do I resize the ODB? I remember we discussed this some time ago, and concluded that odbedit needs a resize flag. Has this even been
> done? If not, what is the "official" way to resize the ODB? We had some documentation about that some time ago, but I can't find it anymore.
I guess this is still not done and the issue is still open: https://bitbucket.org/tmidas/midas/issues/329/need-odbresize
I guess if we touch this, maybe the problem with the wrong size should also be fixed: https://bitbucket.org/tmidas/midas/issues/328/odbinit-s-1024mb-creates-odb-with-wrong
Best,
Marius |
2535 | 13 Jun 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > I remember the same, but I tracked it down in git to the very first commit, and there is no if() there:
> > odb is saved to .ODB.SHM on every client shutdown, not just for the last client. I guess we both misremembered.
Small problem: build an experiment, start taking data, and observe how the ODB is never saved to disk because the "last client" never stops. As a bonus, crash
the computer and observe how all changes to the ODB are now lost. If mlogger is configured to save odb.json at the end of run, and to write ODB dumps at the
beginning and end of every data file, you can recover some of the lost data.
For better effect, the ODB should be dumped to disk at periodic intervals. But the current implementation writes the odb to disk while holding the ODB
semaphore, which means all ODB access stops for the duration; specifically, there will be gaps in the history because mlogger cannot read history
data from ODB.
A better implementation could take the ODB lock, make a copy of the ODB shared memory, release the ODB lock, and complete writing to disk without holding the
lock. Protection is needed against 100 midas programs trying to do this all at the same time. Computers with 0.5 GB RAM (many ARM FPGA SoCs) would be
limited to a ~100 Mbyte ODB. Plus one has to deal with memory allocation failures when taking a copy of a 2GB ODB.
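A sketch of that idea (not MIDAS code; std::mutex and the names here stand in for the real ODB semaphore and shared memory):

#include <mutex>
#include <vector>
#include <cstdio>
#include <cstring>

std::mutex odb_lock;             // stand-in for the ODB semaphore
char*  odb_shm  = nullptr;       // stand-in for the mapped ODB shared memory
size_t odb_size = 0;

void flush_odb_to_disk(const char* filename)
{
   std::vector<char> snapshot;
   {
      std::lock_guard<std::mutex> guard(odb_lock);        // hold the lock only while copying
      snapshot.resize(odb_size);                          // may throw for a 2 GB ODB on a small machine
      memcpy(snapshot.data(), odb_shm, odb_size);
   }                                                      // lock released: ODB access can continue
   FILE* f = fopen(filename, "wb");                       // the slow disk write happens without the lock
   if (f) {
      fwrite(snapshot.data(), 1, snapshot.size(), f);
      fclose(f);
   }
}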
In theory, the mmap() shared memory (already implemented in midas) does this automatically, but we lose control
over disk writes; we see some OSes write the odb to disk "too often" and at wrong times, i.e. while we are in the middle
of creating or deleting something. The current sequence of open(), atomic write() and close() ensures .ODB.SHM always
contains a valid odb (minus loss of OS and disk caches to a crash or power loss).
K.O. |
2536 | 13 Jun 2023 | Konstantin Olchanski | Suggestion | Maximum ODB size |
> > BTW, how do I resize the ODB?
The ODB cannot be resized "online". Everything has to stop, save the content to odb.json, get rid of the old .ODB.SHM, ensure the ODB shared memory is destroyed (SysV or POSIX shared memory),
create a new ODB with the new size, and load odb.json. Feel free to punch this into chatgpt > odbresize.cxx, commit, test, push.
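A very rough sketch of such an odbresize, just driving the existing tools (untested; it assumes odbedit's save/load commands and odbinit's "-s ... --cleanup" behave as discussed above, and that all other clients are already stopped):

#include <cstdlib>
#include <cstdio>

int main(int argc, char** argv)
{
   if (argc != 2) {
      printf("usage: odbresize <new size, e.g. 1024MB>\n");
      return 1;
   }
   char cmd[256];
   if (std::system("odbedit -c \"save odb_resize_backup.json\"")) return 1;  // dump the current ODB
   snprintf(cmd, sizeof(cmd), "odbinit -s %s --cleanup", argv[1]);           // destroy and recreate with the new size
   if (std::system(cmd)) return 1;
   if (std::system("odbedit -c \"load odb_resize_backup.json\"")) return 1;  // reload the content
   return 0;
}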
> I remember we discussed this some time ago, and concluded that odbedit needs a resize flag.
ODB cannot be resized online. ODB API has ODB clients holding ODB handles which are pointers (offsets) into ODB shared memory.
> Has this even been done?
> I guess this is still not done and the issue is still open: https://bitbucket.org/tmidas/midas/issues/329/need-odbresize
> I guess if we touch this maybe the problem with the wrong size should be also fixed: https://bitbucket.org/tmidas/midas/issues/328/odbinit-s-1024mb-creates-odb-with-wrong
Please contribute 14 distraction-free days to my patreon. Thanks in advance!
K.O. |