Commit 8c32c9f6 authored by Kirill Smelkov

bigfile/zodb: FIXME invalidations are not working correctly on blocks topology change

I noticed this while working on WCFS: if a file's block topology changes,
the invalidation process does not work correctly. It is also incorrect
with respect to live cache pressure.

Add FIXME notes in the code and a test for live cache pressure.

parent d27ade8e
@@ -196,6 +196,26 @@ class ZBlkBase(Persistent):
     # DB notifies this object has to be invalidated
     # (DB -> invalidate .blkdata -> invalidate memory-page)
     #
+    # FIXME this assumes that a ZBlk always stays associated with #blk and is
+    # never moved or e.g. deleted from the bigfile. However that is NOT
+    # correct: e.g. ZBigFile.storeblk() can change the type of a stored zblk,
+    # i.e. it rewrites ZBigFile.blktab[blk] with another ZBlk created anew.
+    #
+    # Another example: a block may initially have no ZBlk attached (and thus
+    # it reads as 0). However when data in that block is changed, a new ZBlk
+    # is created, but the information that the block data has to be
+    # invalidated is NOT correctly received by peers.
+    #
+    # FIXME this assumes that a ghostified ZBlk will never be removed from
+    # the live objects cache (cPickleCache). However, since ZBlk is not doing
+    # anything special, and it actually becomes a ghost all the time (e.g.
+    # ZBlk0.loadblkdata() ghostifies itself not to waste memory), this
+    # assumption is NOT correct. Thus, if a ghost ZBlk is removed from the
+    # live cache, the corresponding block will MISS its invalidation.
+    # This can practically happen if the LOBucket that is part of
+    # ZBigFile.blktab and was holding a reference to this ZBlk gets
+    # ghostified under live cache pressure.
     def _p_invalidate(self):
         # do real invalidation only once - else we already lost ._v_zfile last time
         if self._p_state is GHOST:
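The second FIXME can be illustrated without ZODB at all. The toy sketch below (all names are hypothetical, not wendelin.core or ZODB API) models a live object cache that delivers invalidations only to objects it still holds: once cache pressure evicts the stand-in for a ghost ZBlk, a later invalidation for its block is silently lost.

```python
class ToyZBlk:
    """Toy stand-in for ZBlk: remembers which file block it backs (hypothetical)."""
    def __init__(self, blk):
        self.blk = blk
        self.invalidated = False

    def _p_invalidate(self):
        # in wendelin.core this would also invalidate the memory page for .blk
        self.invalidated = True

class ToyLiveCache:
    """Toy stand-in for ZODB's live object cache (cPickleCache), keyed by oid."""
    def __init__(self):
        self._objs = {}                 # oid -> object (live part of the cache)

    def add(self, oid, obj):
        self._objs[oid] = obj

    def minimize(self):
        # cache pressure: drop all references, like conn._cache.minimize()
        self._objs.clear()

    def invalidate(self, oid):
        obj = self._objs.get(oid)
        if obj is not None:
            obj._p_invalidate()
        # else: the object was evicted -> the invalidation is silently lost,
        # which is exactly the failure mode the FIXME describes

cache = ToyLiveCache()

zblk = ToyZBlk(blk=0)
cache.add(b'oid0', zblk)
cache.invalidate(b'oid0')
assert zblk.invalidated             # invalidation reaches blk #0

zblk2 = ToyZBlk(blk=1)
cache.add(b'oid1', zblk2)
cache.minimize()                    # simulate live-cache pressure
cache.invalidate(b'oid1')
assert not zblk2.invalidated        # invalidation for blk #1 is MISSED
```

The real ZODB machinery is more involved (ghost states, weak references, per-connection caches), but the shape of the bug is the same: invalidation delivery depends on the object still being reachable from the cache.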
@@ -36,6 +36,7 @@ import weakref
 import gc
 from pytest import raises
+import pytest; xfail = pytest.mark.xfail
 from six.moves import range as xrange
@@ -555,7 +556,7 @@ def test_bigfile_filezodb_vs_conn_migration():
 # ZBlk should properly handle 'invalidate' messages from DB
 # ( NOTE this test is almost dupped at test_zbigarray_vs_cache_invalidation() )
 @func
-def test_bigfile_filezodb_vs_cache_invalidation():
+def _test_bigfile_filezodb_vs_cache_invalidation(_drop_cache):
root = dbopen()
conn = root._p_jar
db = conn.db()
@@ -609,6 +610,15 @@ def test_bigfile_filezodb_vs_cache_invalidation():
     ram_reclaim_all()
     assert Blk(vma2, 0)[0] == 1

+    # FIXME: this simulates ZODB Connection cache pressure and currently
+    # removes the ZBlk corresponding to blk #0 from the conn2 cache.
+    # In turn this leads to conn2 missing that block's invalidation on the
+    # follow-up transaction boundary.
+    #
+    # See the FIXME notes on ZBlkBase._p_invalidate() for a detailed description.
+    #conn2._cache.minimize()
+    _drop_cache(conn2)  # TODO change to just conn2._cache.minimize() after the issue is fixed
+
     tm2.commit()    # transaction boundary for t2

     # data from tm1 should propagate -> ZODB -> ram pages for _ZBigFileH in conn2
@@ -616,6 +626,12 @@ def test_bigfile_filezodb_vs_cache_invalidation():
     del conn2, root2

+def test_bigfile_filezodb_vs_cache_invalidation():
+    _test_bigfile_filezodb_vs_cache_invalidation(_drop_cache=lambda conn: None)
+
+@xfail
+def test_bigfile_filezodb_vs_cache_invalidation_with_cache_pressure():
+    _test_bigfile_filezodb_vs_cache_invalidation(_drop_cache=lambda conn: conn._cache.minimize())

 # verify that conflicts on ZBlk are handled properly
 # ( NOTE this test is almost dupped at test_zbigarray_vs_conflicts() )