Commit 370474b6 authored by Sergey Vojtovich's avatar Sergey Vojtovich

Fixup: reshuffle code to avoid going negative

In reply to:
Note that as the above is uint, it can never be < 0, so it's easier to just test == 0
    /* Wait for preceding concurrent writes completion */
    while ((uint64_t) my_atomic_load64_explicit((int64*) &cache->cached_eof,
                                                MY_MEMORY_ORDER_RELAXED) <
           start)
      LF_BACKOFF();
Why wait? Can't we start writing at the beginning of the cache buffer, up to the last flushed byte?
Isn't the cache a ring buffer? From the code it looks like we always write to the end, then flush, and then start from the beginning.
Hm, it's probably right that we test for <= 0 above, but then we need to cast the full expression to int, or just make avail an int64_t.
parent f5ae90f8
@@ -263,15 +263,16 @@ static size_t cache_write(PMEM_APPEND_CACHE *cache, const void *data,
   do
   {
     uint64_t chunk_offset= write_pos % cache->buffer_size;
-    uint64_t avail;
+    uint64_t used, avail;
     /* Wait for flusher thread to release some space */
-    while ((avail=
-           (uint64_t) my_atomic_load64_explicit((int64*) &cache->flushed_eof,
-                                                MY_MEMORY_ORDER_RELAXED) +
-           cache->buffer_size - write_pos) <= 0)
+    while ((used= write_pos -
+           (uint64_t) my_atomic_load64_explicit((int64*) &cache->flushed_eof,
+                                                MY_MEMORY_ORDER_RELAXED)) >=
+           cache->buffer_size)
       LF_BACKOFF();
+    avail= cache->buffer_size - used;
     if (avail > left)
       avail= left;