multithreaded frontends have an unusual event buffer corruption if the write
cache is enabled. For a long time now I have had to disable the write cache on
all multithreaded frontends in alpha-g because I was hitting this bug quite often.
(somehow I do not see this problem reported on bitbucket!)
last week I reworked the multithread locking of event buffers, in the hope
that this bug would turn up, but nope, all the mutexes and locking looked okay,
apart from a number of unrelated problems (races against bm_close_buffer()
were the most troublesome to fix).
but I finally found the trouble.
first, some background.
because multiprocess locking is expensive, frontends that generate
a large number of small events can use the write cache to reduce
this overhead. instead of locking the shared memory event buffer for
each event, events are accumulated in the write cache, and periodic
calls to bm_flush_buffer() flush them to shared memory. For best effect,
one should increase the size of the write cache until the lock rate is around
10 per second.
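
to make the mechanism concrete, here is a minimal sketch of the idea in C. this is not the actual midas code: the struct, field and function names (except pbuf->wp, which the text below refers to) are made up for illustration.

#include <string.h>

/* illustrative per-buffer client state (hypothetical names) */
typedef struct {
   char  *cache;       /* write cache memory */
   size_t cache_size;  /* total size of the write cache */
   size_t wp;          /* write pointer: bytes currently sitting in the cache */
} WCACHE;

void flush_cache(WCACHE *pbuf); /* flush the cache to shared memory; sketched further down */

/* fast path of the cached send: append the event to the write cache
   without touching the shared memory lock; only flush when the cache
   is full (or when the periodic flush comes around). */
void cached_send_event(WCACHE *pbuf, const void *event, size_t size)
{
   if (pbuf->wp + size > pbuf->cache_size)
      flush_cache(pbuf);

   memcpy(pbuf->cache + pbuf->wp, event, size);
   pbuf->wp += size;
}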
it turns out the introduction of multithreading broke bm_flush_cache().
it does this (sketched in code below):
- int ask_free = pbuf->wp; // how much data we have in the write cache right now
- call bm_wait_for_free_space(ask_free); // ensure we have this much free space in shared memory
- copy pbuf->wp worth of events to shared memory
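
in sketch form (continuing the illustrative names from above, with simplified signatures; this is my paraphrase of the logic, not the real code), the flush did roughly this:

void bm_wait_for_free_space(size_t nbytes);          /* simplified signature; may sleep */
void copy_to_shared_memory(const void *p, size_t n); /* stand-in for the real copy */

/* buggy flush: the amount of free space we wait for is captured before
   we (possibly) go to sleep, but the amount we later copy is whatever
   pbuf->wp says after we wake up. the hidden assumption is that
   pbuf->wp cannot move while we wait. */
void flush_cache(WCACHE *pbuf)
{
   size_t ask_free = pbuf->wp;        /* data in the write cache right now */

   bm_wait_for_free_space(ask_free);  /* may sleep if shared memory is full */

   /* another thread may have called bm_send_event() while we slept,
      appending to the cache and advancing pbuf->wp ... */

   copy_to_shared_memory(pbuf->cache, pbuf->wp);  /* may now be more than ask_free */
   pbuf->wp = 0;
}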
this looks okay at first sight. here is what happens to trigger the bug:
- int ask_free = pbuf->wp; // ok
- call bm_wait_for_free_space(ask_free); // ok, but if shared memory is full, it goes to sleep waiting for free space
- in the meantime, another thread calls bm_send_event(), which adds more data to the write cache and moves pbuf->wp
- bm_wait_for_free_space() eventually returns
- copy pbuf->wp worth of data to shared memory. KABOOM! shared memory corruption!
we just overwrote some unlucky event in shared memory: we only asked for "ask_free"
free bytes, but pbuf->wp has moved and the cache now holds more data than that;
it does not fit, and there is no check against it.
of course in the single-threaded world this bug did not exist: there was no
other thread to call bm_send_event() while bm_flush_cache() was sleeping.
the obvious fix is to ask for more free space if the cached data does not fit.
this is now implemented on the branch feature/buffer_mutex. after a bit more
testing I will merge it into develop.
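
a minimal sketch of that fix, using the same illustrative names as above (this is my paraphrase of the idea, not the actual commit on feature/buffer_mutex; mutex handling is omitted here and shown in the last sketch below):

/* fixed flush: after waiting, re-check how much data is actually in the
   cache; if more arrived while we slept and it no longer fits in the
   space we asked for, go back and ask for more. */
void flush_cache_fixed(WCACHE *pbuf)
{
   size_t ask_free;

   do {
      ask_free = pbuf->wp;              /* data in the write cache right now */
      bm_wait_for_free_space(ask_free); /* may sleep; pbuf->wp may move meanwhile */
   } while (pbuf->wp > ask_free);       /* the cache grew while we slept: ask again */

   copy_to_shared_memory(pbuf->cache, pbuf->wp);
   pbuf->wp = 0;
}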
so that's it?
not so fast, there was more going on. as described, the bug only happens
when the shared memory event buffer is full (i.e. rarely or never). it turns
out the old version of the thread locking code was defective and permitted
a race between bm_send_event() in one thread and bm_send_event() in another thread:
thread 1: while (1) { bm_send_event(very small event); }
thread 2:
-> bm_send_event(very big event)
-> no space in the cache for the very big event, so call bm_flush_cache()
-> bm_flush_cache() asks bm_wait_for_free_space() to make space for the cached data
-> this was done with the write cache mutex released (mistake!)
-> at the same time, bm_send_event(very small event) in thread 1 adds one more small event to the cache
-> back in bm_flush_cache(), the write cache mutex is locked correctly and we copy
the cached data to shared memory, and again KABOOM, because we now have more data than
we asked free space for.
So in the original implementation, corruption was possible even when the shared
memory event buffer was pretty much empty.
The reworked locking code closes that loophole: bm_flush_cache() is now
called with the write cache locked, so bm_send_event() from another thread
cannot confuse things, unless the shared memory buffer is full and we go to
sleep inside bm_wait_for_free_space(). And that case is now fixed, too.
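
in terms of lock ordering, the difference between the old and the new code is roughly this (sketch only, reusing the illustrative names from above and a hypothetical wcache_mutex; the real code is on feature/buffer_mutex):

#include <pthread.h>

static pthread_mutex_t wcache_mutex = PTHREAD_MUTEX_INITIALIZER; /* hypothetical */

/* old, broken ordering: the write cache mutex was dropped around the wait
   for free space, so bm_send_event() in another thread could append to the
   cache between the wait and the copy. */
void flush_cache_old(WCACHE *pbuf)
{
   pthread_mutex_lock(&wcache_mutex);
   size_t ask_free = pbuf->wp;
   pthread_mutex_unlock(&wcache_mutex);          /* mistake: the cache can now grow */

   bm_wait_for_free_space(ask_free);

   pthread_mutex_lock(&wcache_mutex);
   copy_to_shared_memory(pbuf->cache, pbuf->wp); /* can be more than ask_free: KABOOM */
   pbuf->wp = 0;
   pthread_mutex_unlock(&wcache_mutex);
}

/* new ordering: the whole flush runs with the write cache mutex held, so
   bm_send_event() in another thread (which would take the same mutex to
   append) cannot change pbuf->wp in the middle of the flush. the remaining
   window (sleeping inside bm_wait_for_free_space() while shared memory is
   full) still needs the re-check from the earlier sketch; it is omitted
   here to keep the lock ordering visible. */
void flush_cache_new(WCACHE *pbuf)
{
   pthread_mutex_lock(&wcache_mutex);

   size_t ask_free = pbuf->wp;
   bm_wait_for_free_space(ask_free);
   copy_to_shared_memory(pbuf->cache, pbuf->wp);
   pbuf->wp = 0;

   pthread_mutex_unlock(&wcache_mutex);
}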
K.O.