- 02 Feb, 2018 4 commits
-
-
Kirill Smelkov authored
Extracted from nexedi/slapos!282
-
Kirill Smelkov authored
killall is optional (it comes from the psmisc package), and besides, `killall runzeo` is not safe: it can kill not only the runzeo spawned from under neotest, but also runzeo instances started by other processes. -> Switch to an explicit kill by runzeo pid.
-
Kirill Smelkov authored
This is useful for development and for checking just the ZODB part of the benchmarks, without going through the cpu and disk parts first. run-client is renamed to zbench-client for consistency.
-
Kirill Smelkov authored
-
- 01 Feb, 2018 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 25 Jan, 2018 2 commits
-
-
Kirill Smelkov authored
Amends c884bfd5 (X neo/protogen: Catch length checks overflows on decode)
-
Kirill Smelkov authored
-
- 24 Jan, 2018 4 commits
-
-
Kirill Smelkov authored
See https://github.com/golang/go/issues/19126#issuecomment-358743715.

Example - before:

TEXT ·(*GetObject).neoMsgDecode(SB), NOSPLIT, $8-56        // zproto-marshal.go:1914
        // SUBQ $8, SP
        // MOVQ BP, (SP) (BP save)
        // LEAQ (SP), BP (BP init)
        // FUNCDATA $0, gclocals·21e863e2261befa92f8534560680bbb6(SB) (args)
        FUNCDATA $1, gclocals·69c1753bd5f81501d95132d08af04464(SB) (locals)
        MOVQ data+32(SP), AX
        CMPQ AX, $24        // zproto-marshal.go:1915
        JLT pc163
        MOVQ data+24(SP), CX
        MOVQ (CX), DX        // zproto-marshal.go:1918
        BSWAPQ DX
        MOVQ p+16(SP), BX
        MOVQ DX, (BX)
        LEAQ -8(AX), DX        // zproto-marshal.go:1919
        MOVQ data+40(SP), SI
        LEAQ -8(SI), DI
        NEGQ DI
        SARQ $63, DI
        ANDQ $8, DI
        CMPQ DX, $7
        JLS pc212
        MOVQ (CX)(DI*1), DX
        BSWAPQ DX
        MOVQ DX, 8(BX)
        LEAQ -16(SI), DX        // zproto-marshal.go:1920
        NEGQ DX
        ADDQ $-16, AX
        SARQ $63, DX
        ANDQ $16, DX
        CMPQ AX, $7
        JLS pc205
        MOVQ (CX)(DX*1), AX
        BSWAPQ AX
        MOVQ AX, 16(BX)
        MOVQ $24, _r1+48(SP)        // zproto-marshal.go:1921
        MOVQ $0, _r2+56(SP)
        MOVQ $0, _r2+64(SP)
        // MOVQ (SP), BP (BP restore)
        // ADDQ $8, SP (SP restore)
        RET
pc163:
        MOVQ ·ErrDecodeOverflow(SB), AX        // zproto-marshal.go:1924
        MOVQ ·ErrDecodeOverflow+8(SB), CX
        MOVQ $0, _r1+48(SP)
        MOVQ AX, _r2+56(SP)
        MOVQ CX, _r2+64(SP)
        // MOVQ (SP), BP (BP restore)
        // ADDQ $8, SP (SP restore)
        RET
pc205:
        PCDATA $0, $1        // zproto-marshal.go:1920
        CALL runtime.panicindex(SB)
        UNDEF
pc212:
        PCDATA $0, $1        // zproto-marshal.go:1919
        CALL runtime.panicindex(SB)
        UNDEF

and after:

TEXT ·(*GetObject).neoMsgDecode(SB), NOSPLIT, $0-56        // zproto-marshal.go:1914
        NO_LOCAL_POINTERS
        // FUNCDATA $0, gclocals·846769608458630ae82546dab39e913e(SB) (args)
        // FUNCDATA $1, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB) (no locals)
        MOVQ data+24(SP), AX
        CMPQ AX, $24        // zproto-marshal.go:1915
        JGE pc45
        MOVQ ·ErrDecodeOverflow+8(SB), AX        // zproto-marshal.go:1924
        MOVQ ·ErrDecodeOverflow(SB), CX
        MOVQ $0, _r1+40(SP)
        MOVQ CX, _r2+48(SP)
        MOVQ AX, _r2+56(SP)
        RET
pc45:
        MOVQ data+16(SP), AX
        MOVQ (AX), CX        // zproto-marshal.go:1918
        BSWAPQ CX
        MOVQ p+8(SP), DX
        MOVQ CX, (DX)
        MOVQ 8(AX), CX        // zproto-marshal.go:1919
        BSWAPQ CX
        MOVQ CX, 8(DX)
        MOVQ 16(AX), AX        // zproto-marshal.go:1920
        BSWAPQ AX
        MOVQ AX, 16(DX)
        MOVQ $24, _r1+40(SP)        // zproto-marshal.go:1921
        MOVQ $0, _r2+48(SP)
        MOVQ $0, _r2+56(SP)
        RET
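For reference, a standalone Go sketch of the decoder shape that the "after" column corresponds to: one length check up front, then straight-line fixed-offset loads that the compiler no longer needs to bounds-check. The struct fields and sizes here are illustrative, not the actual generated code.

```go
package sketch

import (
	"encoding/binary"
	"errors"
)

var ErrDecodeOverflow = errors.New("decode overflow")

// GetObject stands in for a generated message type (fields are illustrative).
type GetObject struct {
	Oid, Serial, Tid uint64
}

// neoMsgDecode-style decoder: a single up-front length check lets the
// compiler drop per-field bounds checks on the fixed-offset loads below.
func (p *GetObject) decode(data []byte) (int, error) {
	if len(data) < 24 {
		return 0, ErrDecodeOverflow
	}
	p.Oid = binary.BigEndian.Uint64(data[0:8])
	p.Serial = binary.BigEndian.Uint64(data[8:16])
	p.Tid = binary.BigEndian.Uint64(data[16:24])
	return 24, nil
}
```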
-
Kirill Smelkov authored
Improves the signal/noise ratio in the generated decoder.
-
Kirill Smelkov authored
For example, a list is encoded as l (u32) followed by [l]item of itemType. On decode, l is read from the data stream and the rest of the data is rejected if len(data) < l*sizeof(item). However, since l is u32 and sizeof(item) is just a number, the result of `l * sizeof(item)` also has u32 type, and it can overflow: e.g. for l = 0x20000000 and sizeof(item) = 8, l*sizeof(item) = u32(0) (exactly zero) -> oops. Avoid the problem by doing all checking arithmetic with u64 ints.
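A minimal standalone Go illustration of that wrap-around (the names here are just for the example, not protogen output):

```go
package main

import "fmt"

func main() {
	var l uint32 = 0x20000000 // list length read from the wire
	const sizeofItem = 8      // size of one list item in bytes

	// buggy check: the multiplication happens in uint32 and wraps to 0,
	// so even an empty buffer passes a `len(data) < l*sizeofItem` test.
	fmt.Println(l * sizeofItem) // 0

	// fixed check: do the arithmetic in uint64 so it cannot wrap.
	fmt.Println(uint64(l) * sizeofItem) // 4294967296
}
```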
-
Kirill Smelkov authored
-
- 23 Jan, 2018 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
* origin/master:
  client: kill .supportsTransactionalUndo()
  client: for read accesses, pick a random good node, connected or not
  storage: optimize storage layout of raw data for replication
  sqlite: remove useless AUTOINCREMENT for data.id (reuse of deleted ids is fine)
  storage: speed up reads by indexing 'obj' primarily by 'oid' (instead of 'tid')
  storage: pass schema of tables to migration methods
  storage: update backend version between each migration step
-
- 17 Jan, 2018 1 commit
-
-
Kirill Smelkov authored
Usage of supportsTransactionalUndo() was removed from ZODB in 2007 - see e.g. the following commits:

https://github.com/zopefoundation/ZODB/commit/a06bfc03
https://github.com/zopefoundation/ZODB/commit/e667b022
https://github.com/zopefoundation/ZODB/commit/f595f7e7
...

/reviewed-by @vpelletier
/reviewed-on !8
-
- 16 Jan, 2018 7 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
* y/go: (25 commits)
  go/zodb/zodbtools: TODO (cmp, analyze)
  go/zodb/zodbtools: Catobj
  go/zodb/zodbtools: Info
  go/zodb/zodbtools: Dump
  go/zodb: Start of zodbtools - tools for managing ZODB databases
  go/zodb/fs1tools: Notes about other possible useful commands currently being there on ZODB/py side
  go/zodb/fs1tools: Reindex, Verify-index
  go/zodb/fs1tools: Dump
  go/zodb/fs1: Start fs1tools - tools for managing and maintaining ZODB FileStorage v1 databases
  go/zodb/fs1: My notes on I/O
  go/zodb/fs1: Register FileStorage to zodb & wks
  go/zodb/fs1: Actual FileStorage ZODB driver
  go/zodb/fs1: Add routines to (re)build and verify index from/wrt original FileStorage data
  go/zodb/fs1: Index save/load
  go/zodb/fs1: BTree specialized with KEY=zodb.Oid, VALUE=int64
  go/zodb: Start of FileStorage support
  go/zodb: Way for storage-drivers to be registered and for clients to open them by URL
  zodb/go: In-RAM client cache
  go/zodb: Minimal serialization compatibility with ZODB/py
  go/zodb: Tid connection with time
  ...
-
- 15 Jan, 2018 18 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Add a `zodb catobj` command to dump the content of an object, similarly to `git cat-file`. It has two modes: raw, and verbose with `zodb dump`-like headers for the object. There is no such command in zodbtools/py currently.
-
Kirill Smelkov authored
Command to print general information about a ZODB database. Same as `zodb info` in zodbtools/py.
-
Kirill Smelkov authored
Add a `zodb dump` command to dump an arbitrary ZODB database in a generic format. The dump protocol used here is the same as in zodbtools/py with https://lab.nexedi.com/zodbtools/merge_requests/3 applied. (The MR there is OK and is just waiting for upstream ZODB to negotiate a way to retrieve transaction extension data in raw form.)
-
Kirill Smelkov authored
Add zodbtools, which is a generic (in contrast to fs1tools) set of ZODB management utilities. Only the package and command infrastructure is here; the actual commands will follow in the next patches.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Add commands for FileStorage index maintenance: to manually rebuild the index and to perform index verification.
-
Kirill Smelkov authored
Add various FileStorage-specific dump commands with output that is bit-to-bit exact with the following ZODB/py FileStorage tools:

- fsdump.py
- fsdump.py (verbose dumper)
- fstail.py

Please see the patch for links about these dump formats.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Build the FileStorage ZODB driver out of the format record loading/decoding and index routines we just added in the previous patches. The driver supports only read-only mode so far. The promised tests for data format interoperability with ZODB/py are added.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Build the index type on top of the fsb.Tree introduced in the previous patch, and add routines to save it to and load it from disk. We ensure ZODB/py compatibility by generating a test FileStorage database together with its index and checking that we can load the index from it, and also that if we save an index, ZODB/py can load it back. A FileStorage index is hard to get bit-to-bit identical, since it uses Python pickles, which can encode the same objects in several different ways.
-
Kirill Smelkov authored
The FileStorage index maps an oid to the file position of the latest data record for that oid. Such an index is natural to implement via a BTree, as e.g. ZODB/py does. In the Go world there is the github.com/cznic/b BTree library, but without specialization, working via interface{}, it is slower than it could be and allocates a lot. So generate a specialized version of that code with key and value types exactly suitable for FileStorage indexing. We use a slightly patched b version with speedups for bulk-loading data via the regular point-ingestion BTree entry point: https://lab.nexedi.com/kirr/b x/refill. The patches have not been upstreamed because they slow down the general case a bit (only a bit, but that is still a "no" to me), and because with a dedicated bulk-loading API it should still be possible to load data several times faster. The current version is nevertheless enough for indices that are not very huge. Btw, ZODB/py does the same (see fsBucket + friends).
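To make the specialization concrete, here is a rough sketch (made-up names; the real code is the generated fsb package) of the specialized index shape versus a generic interface{}-keyed tree:

```go
package sketch

// Oid mirrors zodb.Oid for this standalone example: a 64-bit object id.
type Oid uint64

// A generic BTree à la github.com/cznic/b keys and values through interface{},
// which forces boxing allocations on almost every Set/Get.
type GenericTree interface {
	Set(k, v interface{})
	Get(k interface{}) (v interface{}, ok bool)
}

// The specialized index works directly with the concrete types needed for
// FileStorage indexing: oid -> file position of the latest data record.
type Index interface {
	Set(oid Oid, pos int64)
	Get(oid Oid) (pos int64, ok bool)
}
```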
-
Kirill Smelkov authored
Start implementing FileStorage support by adding code to load/decode FileStorage records and a way to iterate over a FileStorage. Tests will come in a later patch, together with ZODB-level loading support.
-
Kirill Smelkov authored
Storage drivers can register themselves via zodb.RegisterDriver. Later, clients can request to open a storage by URL via zodb.OpenStorage. The opener looks the driver up in the registry and wraps the created driver instance with a common layer (cache etc.) to turn an IStorageDriver into a fully working IStorage.
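A rough sketch of that register/open-by-URL pattern (simplified, assumed signatures; the real zodb package API may differ):

```go
package zodbsketch

import (
	"fmt"
	"net/url"
)

// IStorageDriver / IStorage stand in for the real interfaces.
type IStorageDriver interface {
	Load(oid uint64) ([]byte, error)
}
type IStorage interface {
	IStorageDriver // plus cache, prefetch, ... added by the common layer
}

// DriverOpener creates a storage driver for an already parsed URL.
type DriverOpener func(u *url.URL) (IStorageDriver, error)

var registry = map[string]DriverOpener{}

// RegisterDriver registers an opener under a URL scheme, e.g. "fs1" or "neo".
func RegisterDriver(scheme string, open DriverOpener) {
	registry[scheme] = open
}

// storage is the common layer wrapped around a driver (a cache would live here).
type storage struct {
	IStorageDriver
}

// OpenStorage parses the URL, looks the scheme up in the registry, creates the
// driver and wraps it with the common layer to obtain a fully working IStorage.
func OpenStorage(zurl string) (IStorage, error) {
	u, err := url.Parse(zurl)
	if err != nil {
		return nil, err
	}
	open, ok := registry[u.Scheme]
	if !ok {
		return nil, fmt.Errorf("zodb: URL scheme %q is not registered", u.Scheme)
	}
	drv, err := open(u)
	if err != nil {
		return nil, err
	}
	return &storage{drv}, nil
}
```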
-
Kirill Smelkov authored
The cache is needed so that we can provide IStorage.Prefetch functionality generically, wrapped on top of a storage driver: when an object is loaded, the loading consists of two steps: 1. start loading the object into the cache, 2. wait for the loading to complete. This way Prefetch is naturally only step 1 - start loading the object into the cache but do not wait for the loading to complete. Go's goroutines naturally help here: we can spawn every such loading into its own goroutine instead of explicitly programming the loading in terms of a state machine. Since this cache is mainly needed for Prefetch to work, not to actually cache data (though it works as a cache for repeated accesses too), the goal when writing it was to add minimal overhead for the "data not yet in cache" case. We are not completely there yet, but the latency is acceptable: depending on the workload the cache layer adds ~0.5 to 3 µs to loading times.
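A minimal sketch of that two-step load (illustrative types and names, not the actual neo/go cache, which also handles eviction, per-revision data, errors, etc.):

```go
package cachesketch

import "sync"

// Oid stands in for zodb.Oid in this standalone example.
type Oid uint64

// loadEntry is one in-flight or finished load; ready is closed once data/err are set.
type loadEntry struct {
	ready chan struct{}
	data  []byte
	err   error
}

// Cache wraps a driver-level load function and runs each load in its own goroutine.
type Cache struct {
	mu      sync.Mutex
	entries map[Oid]*loadEntry
	loadRaw func(Oid) ([]byte, error) // underlying storage-driver load
}

func NewCache(loadRaw func(Oid) ([]byte, error)) *Cache {
	return &Cache{entries: map[Oid]*loadEntry{}, loadRaw: loadRaw}
}

// startLoad is step 1: begin loading oid (or join a load already in flight).
func (c *Cache) startLoad(oid Oid) *loadEntry {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[oid]
	if !ok {
		e = &loadEntry{ready: make(chan struct{})}
		c.entries[oid] = e
		go func() {
			e.data, e.err = c.loadRaw(oid)
			close(e.ready)
		}()
	}
	return e
}

// Prefetch is only step 1: start the load and return immediately.
func (c *Cache) Prefetch(oid Oid) { c.startLoad(oid) }

// Load is step 1 + step 2: start the load (or join it) and wait for the result.
func (c *Cache) Load(oid Oid) ([]byte, error) {
	e := c.startLoad(oid)
	<-e.ready
	return e.data, e.err
}
```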
-