- 15 Dec, 2015 1 commit
-
-
Kirill Smelkov authored
No one was freeing the RAMH structure itself, and thus ASAN reports e.g.:

    ==15935==ERROR: LeakSanitizer: detected memory leaks

    Direct leak of 32 byte(s) in 1 object(s) allocated from:
        #0 0x7f29c89f1001 in __interceptor_calloc (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x94001)
        #1 0x401da1 in zalloc include/wendelin/utils.h:65
        #2 0x408128 in shmfs_ramh_open bigfile/tests/../ram_shmfs.c:202
        #3 0x407611 in ramh_open bigfile/tests/../ram.c:81
        #4 0x402560 in fileh_open bigfile/tests/../virtmem.c:131
        #5 0x427ca1 in test_pagefault_savestate bigfile/tests/test_virtmem.c:1022
        #6 0x4281ba in main bigfile/tests/test_virtmem.c:1061
        #7 0x7f29c83b8b44 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b44)

NOTE a similar leak remains open in ram_close(), but it is a bit involved to fix, and the effort would be thrown away anyway after we switch to the kernel virtual memory manager. Besides, ramh are opened and closed all the time, while ram only once.
-
- 05 Nov, 2015 1 commit
-
-
Kirill Smelkov authored
Currently wendelin.core does not work on e.g. Debian 7, because that distro has a kernel too old to support fallocate on tmpfs. The diagnostics of the failure is not clear, however, and looks like just being out of memory:

    bigfile/tests/../virtmem.c:845 OOM BUG!

What happens in fact is that:

    - virtmem tries to allocate a page -> calls shmfs_alloc_page(),
    - fallocate() fails with the "operation not supported" error code,
    - virtmem sees this as a page allocation failure,
    - tries to reclaim pages,
    - but there are no allocated pages at all -> OOM

Detect whether the fallocate() error is an operational error or simply "fallocate not supported", and if the latter, report it to the user. Now it looks like:

    bigfile/tests/../ram_shmfs.c:129 shmfs_alloc_page WARN: fallocate() not supported
    bigfile/tests/../virtmem.c:845 OOM BUG!

/cc @Tyagov
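The real check lives in C (ram_shmfs.c). Purely as an illustration of the same errno-based detection, here is a minimal Python sketch that probes whether a filesystem supports fallocate, assuming 64-bit Linux/glibc; the function name and the probe itself are ours, not wendelin.core code:

    import ctypes, ctypes.util, errno, os

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int, ctypes.c_long, ctypes.c_long]
    FALLOC_FL_KEEP_SIZE = 0x01

    def fallocate_supported(fd):
        # probe: try to allocate 1 byte without changing the file size
        if libc.fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1) == 0:
            return True
        err = ctypes.get_errno()
        if err in (errno.EOPNOTSUPP, errno.ENOSYS):
            return False                         # fs/kernel cannot fallocate -> warn the user
        raise OSError(err, os.strerror(err))     # a real operational error -> propagate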
-
- 02 Nov, 2015 1 commit
-
-
Kirill Smelkov authored
Until now BigArrays could use only C-style ordering, where the major index is the first one. Fortran ordering is the opposite - the major index is the last one - and is used in the Fortran world and sometimes by scientists in other areas. As people keep asking for Fortran-ordered BigArrays, let's add support for it.

The essential code change is to use the major index, instead of the hard-coded 0'th index, in __getitem__ and the rest of the code.

For reference, the ndarray memory layout for different orders is described here:

    http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#internal-memory-layout-of-an-ndarray
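What "major index first vs last" means for the memory layout can be seen with plain numpy (nothing wendelin-specific; strides shown for the default 8-byte integer dtype on 64-bit Linux):

    import numpy as np

    a_c = np.arange(12).reshape(3, 4, order='C')   # C order: first index is major
    a_f = np.arange(12).reshape(3, 4, order='F')   # Fortran order: last index is major

    print(a_c.strides)   # (32, 8) - stepping the first index jumps over a whole row
    print(a_f.strides)   # (8, 24) - stepping the last index jumps over a whole column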
-
- 02 Oct, 2015 9 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Since 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) each ZBlk is split into chunks, and the procedure to load/save blk data collects/disassembles the chunks. Previously the bigfile/zodb tests verified only element [0] of each block, so most of the chunk collection/disassembly code was not exercised. Fix this by making sure in tests that the whole block content is as expected.
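A hypothetical helper in the spirit of this change - check a full block against a deterministic pattern rather than only its first element (the names here are illustrative, not the actual test code):

    import numpy as np

    def assert_blk(blk, start):
        # blk: 1-d array viewing one file block; verify every element, not just blk[0]
        expected = np.arange(start, start + len(blk), dtype=blk.dtype)
        assert np.array_equal(blk, expected), "block content mismatch"

    # example: a block that is supposed to contain 7, 8, 9, ...
    assert_blk(np.arange(7, 7 + 512, dtype=np.uint32), 7)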
-
Kirill Smelkov authored
It was reStructuredText already, but the way links were shown was ugly, especially when there were several links on the same line. Use the kernelnewbies style to refer to commits when a particular commit URL is not shown to the user, only hyperlinked. This way it looks nicer.
-
Kirill Smelkov authored
The goal is to show it on the PyPI project page.
-
Kirill Smelkov authored
Now that we have at least some readme, this is more convenient.
-
Kirill Smelkov authored
This is a general readme about what wendelin.core is and how to use ZBigArray from a user's point of view.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
It was already there but very brief. Here we add text which tries to explain to the reader what is generally going on in the program.
-
Kirill Smelkov authored
-
- 30 Sep, 2015 1 commit
-
-
Kirill Smelkov authored
ZBlk* objects are intermediate ZODB objects between data stored in ZODB and memory pages managed by virtmem. As such, after they do their job - either loading data from the DB to memory, or storing it from memory to the DB - there is no need to keep them alive with duplicate content, only wasting memory.

ZBlk0 takes care of this detail by "deactivating" ._v_blkdata in the loadblkdata() and __getstate__() prologues. ZBlk1 did the same for the load path in the loadblkdata() prologue, but for .__getstate__() it was not directly possible, because for ZBlk1 the state is an IOBTree, not one non-persistent object, and thus it first has to be processed by ZODB together with its subobjects on its way to storage, and only then can they all be deactivated. So 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) only put a TODO for the memory-page -> DB path about not wasting memory this way.

But the problem is relatively easy to solve:

    - we can deactivate ZData objects (leaf objects in the ZBlk1.chunktab btree) by hooking into the ZData.__getstate__() prologue;

    - we also need to take care to deactivate right away the chunks which setblkdata() loaded only to compare .data and found to be unchanged.

This way we do not waste memory keeping intermediate ZData objects alive with the same content as the memory page after commit.

/cc @Tyagov
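A rough sketch of the first point only, under the assumption that dropping the duplicate copy boils down to letting the object go back to ghost once its state has been extracted; the real ZData code in file_zodb.py is more involved and its exact mechanics may differ:

    from persistent import Persistent

    class ChunkSketch(Persistent):          # hypothetical stand-in for ZBlk1's leaf chunks
        def __init__(self, data):
            self.data = data

        def __getstate__(self):
            state = Persistent.__getstate__(self)
            # the same bytes already live in the memory page managed by virtmem;
            # once this state is on its way to the DB there is no need to keep
            # a duplicate copy cached here, so let the object become a ghost.
            self._p_deactivate()
            return state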
-
- 28 Sep, 2015 5 commits
-
-
Kirill Smelkov authored
ZBlk1.setblkdata() has logic to detect a CHUNKSIZE change, and if one is detected it recreates the whole chunktab from scratch for simplicity. There was a thinko however: len(chunk.data) == CHUNKSIZE is ok and actually happens very often when the data has no zeroes. Because of this off-by-1 comparison mistake, ZData objects were constantly created and thrown away instead of being reused, which led to fast ZODB growth. Fix it.

/reported-by @Tyagov
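The corrected condition, reconstructed from the description above (a sketch with illustrative names, not the literal diff):

    CHUNKSIZE = 4096

    def chunktab_needs_recreate(chunk_data):
        # a chunk of exactly CHUNKSIZE bytes is valid and common (data without
        # zeroes fills its chunk completely) and must be reused; only a chunk
        # *larger* than the current CHUNKSIZE means CHUNKSIZE changed and the
        # chunktab has to be rebuilt.
        return len(chunk_data) > CHUNKSIZE      # the thinko was:  >= CHUNKSIZE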
-
Kirill Smelkov authored
Our current workloads are mostly many small data changes, and this is what ZBlk1 was created for. Yes, it has larger overhead for accessing data, but we have already outlined the way to handle that in 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) -> move data deduplication/management to the server side. So let ZBlk1 be the default for now.

/cc @Tyagov, @klaus
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
We check all pairs of possible formats: for every pair we write some data to file block 0 and check that it is stored in the source format; then we change the default format, write other data, and check that file.blktab[0] changed its type. I.e. with this test we verify that we can read data in the old format and change it incrementally to the new one. And the other way around: if data is in the new format and we decide to migrate to the old one, that also works.

/cc @Tyagov, @klaus
-
- 24 Sep, 2015 9 commits
-
-
Kirill Smelkov authored
Our current approach is that each file block is represented by 1 ZODB object, with the block size being 2M. Even with trailing-\0 trimming, which halves the overhead on average, DB size grows very fast if we do a lot of small appends or changes. So another format needs to be introduced which has lower overhead for storing small changes.

In general, to represent a BigFile as ZODB objects, each file block can be represented separately either as:

    1) one ZODB object          (ZBlk0 - this is what we already have), or
    2) a group of ZODB objects  (ZBlk1 - this is what we introduce),
       with a top-level BTree directory  #blk -> objects representing the block.

For "1" we have:

    - low overhead in access time (only 1 object loaded from the DB), but
    - high overhead in terms of ZODB size (with FileStorage / ZEO, every change to a block causes it to be written into the DB in full again).

For "2" we have:

    - low overhead in terms of ZODB size (only part of a block is overwritten in the DB on a single change), but
    - high overhead in access time (several objects need to be loaded for 1 block).

In general it is not possible to have low overhead for both i) access time and ii) DB size with an approach where block object representation / management is done on the *client* side. On the other hand, if object management is moved to the DB *server* side, it is possible to deduplicate the objects there and this way have low overhead for both access time and DB size, with the client storing just 1 object per file block. This will be our future approach after we teach NEO about object deduplication.

~~~~

As shown in the previous paragraph, it is not possible to perform optimally on the client side. Thus ZBlk1 should only be an intermediate solution until we move data management to the DB server side, with the main criterion for ZBlk1 being to keep it simple.

In this patch a simple scheme is used, where every block is divided into chunks organized via a BTree. When a block part changes, only the corresponding chunk is updated. The chunk size is chosen to be 4K, which gives ~ 512 fanout for a 2M block (see the sketch after this message).

DB size after tests changes as follows:

                bigfile     bigarray

    ZBlk0         24K         6200K
    ZBlk1         36K           36K

( the slight size increase for the bigfile tests is because of BTree structures overhead )

Time to run the tests stays approximately the same.

/cc @Tyagov, @klaus
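A minimal sketch of the chunking idea, assuming raw bytes as chunk values and a plain int-keyed btree (the real ZBlk1 stores persistent ZData objects in .chunktab and handles more corner cases):

    from BTrees.IOBTree import IOBTree      # offset -> chunk; the actual btree flavor may differ

    CHUNKSIZE = 4096                        # ~512 chunks per 2M block

    def setblkdata_sketch(chunktab, blkdata):
        # write whole-block bytes into chunktab, touching only chunks that changed
        for offset in range(0, len(blkdata), CHUNKSIZE):
            chunk = bytes(blkdata[offset:offset+CHUNKSIZE]).rstrip(b'\0')
            if not chunk:
                if offset in chunktab:
                    del chunktab[offset]    # all-zero chunk -> no entry at all
            elif chunktab.get(offset) != chunk:
                chunktab[offset] = chunk    # only this chunk gets rewritten in the DB

    def loadblkdata_sketch(chunktab, blksize):
        # reassemble the whole block; missing chunks and trimmed tails are zeros
        blk = bytearray(blksize)
        for offset, chunk in chunktab.items():
            blk[offset:offset+len(chunk)] = chunk
        return bytes(blk)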
-
Kirill Smelkov authored
- the current ZBlk becomes format 0;

- the write format can be selected via the WENDELIN_CORE_ZBLK_FMT env variable;

- upon writing a block we always make sure we write it in the current write format - so if a block was previously written in one format, it can be changed on the next write;

- tox is prepared to test all write formats (so far only ZBlk0 is there).

The reason is that in the next patch we'll introduce another format for blocks which is optimized for small changes.
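Conceptually the selection could look like the following sketch; the class stubs, registry name and default value here are placeholders (the real dispatch lives in file_zodb.py):

    import os

    class ZBlk0: pass           # stand-ins for the real block-format classes
    class ZBlk1: pass

    ZBlk_fmt_registry = {'ZBlk0': ZBlk0, 'ZBlk1': ZBlk1}

    def current_write_format():
        # which class newly-(re)written blocks get, controlled by the env variable
        fmt = os.environ.get('WENDELIN_CORE_ZBLK_FMT', 'ZBlk0')
        return ZBlk_fmt_registry[fmt]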
-
Kirill Smelkov authored
If we aim to have several kinds of ZBlk, the functionality to invalidate a block and bind it to a zfile is common and thus should be shared. If we introduce a base class, it also makes sense to document there - in one place - what .loadblkdata() and .setblkdata() should do.
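A sketch of what such a base class could look like - the method names come from this commit message, everything else (attribute names, docstrings) is illustrative:

    from persistent import Persistent

    class ZBlkBaseSketch(Persistent):
        # volatile binding to the ZBigFile and block number we belong to
        _v_zfile = None
        _v_blk   = None

        def bindzfile(self, zfile, blk):
            self._v_zfile, self._v_blk = zfile, blk

        def loadblkdata(self):
            """-> bytes of the whole block, reconstructed from this object's DB state"""
            raise NotImplementedError

        def setblkdata(self, buf):
            """store whole-block bytes from buf into this object's DB state"""
            raise NotImplementedError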
-
Kirill Smelkov authored
- we have logic to init ._v_zfile and ._v_blk there;

- the same for ._v_blkdata - the logic to init it is there too;

- it is better to set variables right from instance creation, instead of hoping "it will be set from outside by the master".

NOTE ._v_blkdata = None means the block was not yet loaded; this generally fits the logic of how ZBlk operates, and thus the change is ok.
-
Kirill Smelkov authored
A lot of the time, data in blocks comes in shorter than the block size and the rest of the memory page is zeros (because it was pre-filled with zeros by the OS when the page was allocated). Apply a simple heuristic: trim those trailing zeros and do not store them into the DB.

With this change, the size of the DB created by running the bigfile and bigarray tests changes as follows:

            bigfile     bigarray

    old       145M         35M
    new        24K          6M

Trimming trailing zeros is currently done with str.rstrip('\0'), which creates a copy. When/if needed, this could be optimized to work in-place.
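The save/load symmetry in a nutshell - a simplified sketch with bytes objects; the real code operates on memory pages via ._v_blkdata:

    def trim_for_store(blkdata):
        # trailing zeros are reproducible, so don't pay for them in the DB
        return blkdata.rstrip(b'\0')

    def pad_on_load(stored, blksize):
        # pad back to the full block size when loading into a memory page
        return stored.ljust(blksize, b'\0')

    data = b'hello' + b'\0'*100
    assert pad_on_load(trim_for_store(data), len(data)) == data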
-
Kirill Smelkov authored
- to keep things uniform with its counterpart .loadblkdata();

- so that the master does not mess with ZBlk internals and works only through the interface;

- this way it will be possible to use several kinds of ZBlk.
-
Kirill Smelkov authored
All those functions move data between the DB and ._v_blkdata, and only the master then connects the data to the memory page. Make that fact explicit.
-
Kirill Smelkov authored
Again, a leftover from 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
Kirill Smelkov authored
The comment was a leftover there from 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
- 23 Sep, 2015 2 commits
-
-
Kirill Smelkov authored
i.e. it is ok to copy smaller data into a larger buffer.
-
Kirill Smelkov authored
- not only a multiple of 8. We can do it by using uint8-typed arrays, and it does not hurt performance:

    In [1]: from wendelin.lib.mem import bzero, memset, memcpy
    In [2]: A = bytearray(2*1024*1024)
    In [3]: B = bytearray(2*1024*1024)

            memcpy(B, A)    bzero(A)        memset(A, 0xff)

    old:    718 µs          227 µs / 1116   228 µs / 1055   (*)
    new:    718 µs          176 µs / 1080   175 µs / 1048   (*)

(*) the second number comes from e.g.

    In [8]: timeit bzero(A)
    The slowest run took 4.63 times longer than the fastest. This could mean
    that an intermediate result is being cached
    10000 loops, best of 3: 228 µs per loop

so the second number is more realistic; it says the performance stays approximately the same and only slightly improves.
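The uint8 trick in a nutshell - a sketch close in spirit to wendelin.lib.mem, though the actual implementation may differ in details; note that memcpy here also tolerates dst being larger than src (cf. the commit above about copying smaller data into a larger buffer):

    import numpy as np

    def bzero(dst):
        np.frombuffer(dst, dtype=np.uint8)[:] = 0       # dst must be a writable buffer

    def memset(dst, c):
        np.frombuffer(dst, dtype=np.uint8)[:] = c

    def memcpy(dst, src):
        n = len(src)                                    # dst may be larger than src
        np.frombuffer(dst, dtype=np.uint8)[:n] = np.frombuffer(src, dtype=np.uint8)

    A = bytearray(2*1024*1024); B = bytearray(2*1024*1024)
    memset(A, 0xff); memcpy(B, A); bzero(A)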
-
- 21 Sep, 2015 2 commits
-
-
Kirill Smelkov authored
When we serve an indexing request, we first compute the page range in the backing file which contains the result, based on the major index range, then mmap that file range and pick up the result from there.

The page range math was, however, not correct: e.g. for positive strides, the last element's byte is (byte0_stop-1), NOT (byte0_stop - byte0_stride), which, for cases where byte0_stop is just a bit after a page boundary, can make a difference - page_max will be 1 page less than it should be, and then the whole ndarray view creation breaks:

    ...
    Module wendelin.bigarray, line 381, in __getitem__
        view0 = ndarray(view0_shape, self._dtype, vma0, view0_offset, view0_stridev)
    ValueError: strides is incompatible with shape of requested array and size of buffer

( because vma0 was created smaller in size than what is needed to create a view0_shape-shaped array starting from view0_offset in vma0. )

A similar story for the negative strides math - it was not correct either.

Fix it.

/reported-by @Camata
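The corrected positive-stride math, extracted into a tiny sketch (the names mirror the commit message; the real __getitem__ computes more than this):

    def page_range(byte0_start, byte0_stop, pagesize):
        # the selection covers bytes [byte0_start, byte0_stop); its last byte
        # is (byte0_stop - 1), NOT (byte0_stop - byte0_stride), and that byte
        # decides the last page which has to be mmapped.
        page_min = byte0_start // pagesize
        page_max = (byte0_stop - 1) // pagesize
        return page_min, page_max

    # byte0_stop just a bit past a page boundary -> the second page is still needed
    assert page_range(0, 4097, 4096) == (0, 1)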
-
Kirill Smelkov authored
We'll need this class in tests in the next patch.
-
- 11 Sep, 2015 1 commit
-
-
Kirill Smelkov authored
It was already done from the beginning in 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
- 02 Sep, 2015 2 commits
-
-
Kirill Smelkov authored
bigfile/zodb/tests: Make sure _p_invalidate() in Zblk.loadblk() does not lead to reloading updated data

Thanks to ZODB being MVCC this does not happen, but we had better test it explicitly.
-
Kirill Smelkov authored
We'll need it in other places in the next patch.
-
- 19 Aug, 2015 1 commit
-
-
Kirill Smelkov authored
-
- 18 Aug, 2015 4 commits
-
-
Kirill Smelkov authored
e.g. on .shape
-
Kirill Smelkov authored
When there is a conflict (on any object, but on ZBlk in particular) ZODB machinery calls its ._p_invalidate() twice:

    File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
        tm2.commit()    # this should raise ConflictError and stay at 11 state
    File ".../transaction/_manager.py", line 111, in commit
        return self.get().commit()
    File ".../transaction/_transaction.py", line 271, in commit
        self._commitResources()
    File ".../transaction/_transaction.py", line 414, in _commitResources
        self._cleanup(L)
    File ".../transaction/_transaction.py", line 426, in _cleanup
        rm.abort(self)
    File ".../ZODB/Connection.py", line 436, in abort
        self._abort()
    File ".../ZODB/Connection.py", line 479, in _abort
        self._cache.invalidate(oid)
    File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
        traceback.print_stack()

and

    File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
        tm2.commit()    # this should raise ConflictError and stay at 11 state
    File ".../transaction/_manager.py", line 111, in commit
        return self.get().commit()
    File ".../transaction/_transaction.py", line 271, in commit
        self._commitResources()
    File ".../transaction/_transaction.py", line 416, in _commitResources
        self._synchronizers.map(lambda s: s.afterCompletion(self))
    File ".../transaction/weakset.py", line 59, in map
        f(elt)
    File ".../transaction/_transaction.py", line 416, in <lambda>
        self._synchronizers.map(lambda s: s.afterCompletion(self))
    File ".../ZODB/Connection.py", line 831, in _storage_sync
        self._flush_invalidations()
    File ".../ZODB/Connection.py", line 539, in _flush_invalidations
        self._cache.invalidate(invalidated)
    File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
        traceback.print_stack()

i.e. first invalidation is done by commit cleanup:

    https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L414
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L479

and then Connection.afterCompletion() flushes invalidation again:

    https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L416
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L833
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L539

If there was no conflict - there will be no ConflictError raised and thus no Transaction._cleanup() done in its ._commitResources() -> invalidation called only once. But with ConflictError - it is twice.

Adjust ZBlk._p_invalidate() not to delve into real invalidation more than once - else we will fail, as ZBlk._v_zfile becomes unbound after invalidation done the first time.
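The guard idea in isolation - a sketch assuming that "already a ghost" is a good enough marker for "invalidation already done"; the real ZBlk._p_invalidate() in file_zodb.py has more to do in the one-time part:

    from persistent import Persistent

    class ZBlkSketch(Persistent):
        def _p_invalidate(self):
            # on ConflictError ZODB ends up calling _p_invalidate() twice
            # (commit cleanup, then the afterCompletion() invalidation flush);
            # after the first call we are already a ghost and e.g. ._v_zfile
            # is unbound, so the second call must not redo the real work.
            if self._p_changed is None:         # already a ghost -> nothing to do
                return
            # ... real one-time invalidation work goes here (notify virtmem, ...)
            Persistent._p_invalidate(self)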
-
Kirill Smelkov authored
All is currently handled correctly, but an observation is made: upon such invalidation we throw away ._v_fileh, i.e. we throw away the whole data cache, just because an array was resized.
-
Kirill Smelkov authored
LivePersistent can go to ghost state, because invalidations cannot be ignored, i.e. they indicate the object has been changed externally.

This does not break our logic for ZBigFile and ZBigArray, as invalidations can happen only at transaction boundaries, so during the course of a transaction those classes are guaranteed to stay uptodate and thus not lose ._v_file and ._v_fileh (which is the reason they inherit from LivePersistent).

It is ok to lose ._v_file and ._v_fileh at a transaction boundary and become a ghost - those objects will be recreated upon going back uptodate and will stay alive again during the whole transaction window. We only care not to lose e.g. ._v_fileh inside a transaction, because losing that data manager, and thus the data it manages, inside a transaction can break synchronization logic and forget changed-through-mmap data.
-
- 17 Aug, 2015 1 commit
-
-
Kirill Smelkov authored
-