- 08 Oct, 2018 4 commits
-
-
Kirill Smelkov authored
Very brief and incomplete.
-
Kirill Smelkov authored
They are either all BTrees or all Buckets. See https://github.com/zopefoundation/ZODB/blob/3.10.7-4-gb8d7a8567/src/BTrees/Development.txt#L231 for details.
-
Kirill Smelkov authored
This is one of the BTree invariants - check it on load.
-
Kirill Smelkov authored
2dba8607 (go/zodb/btree: New package to work with ZODB BTrees (draft)) added the btree module with btree.BTree essentially being LOBTree (int64 key -> object). Since for wendelin.core we also need IOBTree (int32 key -> object), which is used in ZBlk1

    https://lab.nexedi.com/nexedi/wendelin.core/blob/v0.12-6-g318efce/bigfile/file_zodb.py#L267
    https://lab.nexedi.com/nexedi/wendelin.core/blob/v0.12-6-g318efce/bigfile/file_zodb.py#L374

let's turn the btree module into a template internally and generate code for both LOBTree and IOBTree. For reference, BTrees/py takes a similar approach with respect to templating.
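A hedged usage sketch of the two generated flavours; the Get signature (value, ok, err) and the import path are assumptions for illustration, not verified API:

    import (
        "context"

        "lab.nexedi.com/kirr/neo/go/zodb/btree"
    )

    // lookup queries the same key in both generated trees; LOBTree uses
    // int64 keys while IOBTree uses int32 keys.
    func lookup(ctx context.Context, lt *btree.LOBTree, it *btree.IOBTree) error {
        _, _, errL := lt.Get(ctx, int64(42)) // int64 key -> object
        _, _, errI := it.Get(ctx, int32(42)) // int32 key -> object
        if errL != nil {
            return errL
        }
        return errI
    }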
-
- 02 Oct, 2018 1 commit
-
-
Kirill Smelkov authored
To know the database state corresponding to the connection.
-
- 01 Oct, 2018 1 commit
-
-
Kirill Smelkov authored
The format of tid assumes ~ ns precision, and by default it is only formatted to µs precision. So don't truncate the TimeStamp value when computing it from Tid, and perform the µs-rounding only on formatting.

The float numbers are not always exactly as in Python. For example the following program

    tidv = [
        0x0000000000000000,
        0x0285cbac258bf266,
        0x0285cbad27ae14e6,
        0x037969f722a53488,
        0x03b84285d71c57dd,
        0x03caa84275fc1166,
    ]

    for tid in tidv:
        t = TimeStamp.TimeStamp(p64(tid))
        print '0x%016x %s %.9f\t%.9f' % (tid, t, t.timeTime(), t.second())

prints:

    0x0000000000000000 1900-01-01 00:00:00.000000 -2208988800.000000000   0.000000000
    0x0285cbac258bf266 1979-01-03 21:00:08.800000   284245208.800000191   8.800000185
    0x0285cbad27ae14e6 1979-01-03 21:01:09.300001   284245269.300001621   9.300001496  <-- ex here
    0x037969f722a53488 2008-10-24 05:11:08.120000  1224825068.119999886   8.119999878
    0x03b84285d71c57dd 2016-07-01 09:41:50.416574  1467366110.416574001  50.416573989
    0x03caa84275fc1166 2018-10-01 16:34:27.652650  1538411667.652649879  27.652650112

The difference is due to floating point operation ordering, because TimeStamp.timeTime() loses precision - e.g. for the marked case:

    In [8]: '%.10f' % (281566860.000000000 + 9.300001496)
    Out[8]: '281566869.3000015020'

We don't try to mimic Python's float64 behaviour exactly - it even differs between the PURE_PYTHON=y and C TimeStamp implementations. However we don't, because of that, limit our timestamp precision to only 1µs. In other words we keep maintaining exact compatibility with Python on printing, but the timestamp values themselves now have ~ ns precision.
-
- 28 Sep, 2018 1 commit
-
-
Kirill Smelkov authored
As https://github.com/kisielk/og-rek/pull/57 perhaps shows, []byte was being pickled as string only unintentionally, and that might change. We are already explicitly checking for string in the corresponding index load place:

    https://lab.nexedi.com/kirr/neo/blob/2dba8607/go/zodb/storage/fs1/index.go#L282

so it is better we also explicitly save the bits as string. If we don't, and https://github.com/kisielk/og-rek/pull/57 gets accepted, tests will fail:

    --- FAIL: TestIndexSaveLoad (0.00s)
        index_test.go:176: index load: /tmp/t-index893650059/458967662/1.fs.index: pickle @6: invalid oidPrefix: type []uint8

    Traceback (most recent call last):
      File "./py/indexcmp", line 41, in <module>
        main()
      File "./py/indexcmp", line 29, in main
        d2 = fsIndex.load(path2)
      File "/home/kirr/src/wendelin/z/ZODB/src/ZODB/fsIndex.py", line 138, in load
        data[ensure_bytes(k)] = fsBucket().fromString(ensure_bytes(v))
      File "/home/kirr/src/wendelin/z/ZODB/src/ZODB/fsIndex.py", line 71, in ensure_bytes
        return s.encode('ascii') if not isinstance(s, bytes) else s
    AttributeError: 'bytearray' object has no attribute 'encode'
    --- FAIL: TestIndexSaveToPy (0.04s)
        index_test.go:218: zodb/py read/compare index: exit status 1
-
- 09 Aug, 2018 14 commits
-
-
Kirill Smelkov authored
Provide minimal support for BTrees.LOBTree - only Get for now.
-
Kirill Smelkov authored
DB represents a handle to a database at the application level and contains a pool of connections. DB.Open opens a database connection. The connection will be automatically put back into the DB pool for future reuse after the corresponding transaction completes. DB thus provides the service of maintaining a live objects cache and reusing live objects from transaction to transaction.

Note that it is possible to have several DB handles to the same database. This might be useful if the application accesses distinctly different sets of objects in different transactions and knows beforehand which set it will need next time. Then, to avoid huge cache misses, it makes sense to keep a DB handle opened for every possible case of application access.

TODO handle invalidations.
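A hedged usage sketch (the NewDB/Open signatures and import paths are assumptions based on the description above):

    import (
        "context"

        "lab.nexedi.com/kirr/neo/go/transaction"
        "lab.nexedi.com/kirr/neo/go/zodb"
    )

    // withConn opens a connection from the DB pool under a transaction;
    // after the transaction completes the connection returns to the pool.
    func withConn(ctx context.Context, stor zodb.IStorage) error {
        db := zodb.NewDB(stor)           // handle to the database + connection pool
        txn, ctx := transaction.New(ctx) // connections are opened under a transaction
        defer txn.Abort()

        conn, err := db.Open(ctx)
        if err != nil {
            return err
        }
        _ = conn // access objects via conn here
        return nil
    }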
-
Kirill Smelkov authored
For example Wendelin.core wcfs will need to keep some types of objects (e.g. the BigFile index) always in RAM for efficiency. Provide a corresponding interface that an application could use to install such tuning of live-cache eviction decisions.
-
Kirill Smelkov authored
Connection represents an application-level view of a ZODB database. It has a group of in-RAM application-level objects associated with it. The objects are isolated from both changes in further database transactions and from changes to in-RAM objects in other connections.

Connection, as the objects' group manager, is responsible for handling object -> object database references. For this to work it keeps an {} oid -> obj dict and uses it to find an already loaded object when another object persistently references a particular oid. Since it is related, pydata handling of persistent references is correspondingly implemented in this patch.

The dict must keep weak references on objects. The following text explains the rationale:

If Connection keeps a strong link to obj, just obj.PDeactivate will not fully release obj if there are no references to it from other objects:

- deactivate will release obj state (ok)
- but there will still be a reference from the connection's `oid -> obj` map to this object, which means the object won't be garbage-collected.

-> we can solve it by using "weak" pointers in the map.

NOTE we cannot use a regular map and arbitrarily manually "gc" entries there periodically: since for an obj we don't know whether other objects are referencing it, we can't just remove obj's oid from the map - if we do so and there are other live objects that reference obj, user code can still reach obj via those references. On the other hand, if another, not yet loaded, object also references obj and gets loaded, traversing the reference from that loaded object will load a second copy of obj, thus breaking the "1 object in db <-> 1 live object" invariant:

    A → B → C
    ↓       |
    D <------
          - - -> D2  (wrong)

- A activate
- D activate
- B activate
- D gc, A still keeps link on D
- C activate -> it needs to get to D, but D was removed from objtab
  -> new D2 is wrongly created

That's why we have to depend on Go's GC to know whether there are still live references left or not. And that in turn means finalizers and thus weak references.

Some link on the subject: https://groups.google.com/forum/#!topic/golang-nuts/PYWxjT2v6ps
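A hedged sketch of such a weak oid -> obj map; the weak-reference type is abstracted behind an interface here (the real one comes from the previous patch), and all names are illustrative:

    import (
        "sync"

        "lab.nexedi.com/kirr/neo/go/zodb"
    )

    // weakRef abstracts a weak reference: Get returns the referenced
    // object, or nil once it has been garbage-collected.
    type weakRef interface {
        Get() interface{}
    }

    // connSketch: objtab entries do not keep objects alive by themselves.
    type connSketch struct {
        mu     sync.Mutex
        objtab map[zodb.Oid]weakRef // oid -> weak ref to live object
    }

    // get returns the already-loaded object for oid, or nil if it was
    // never loaded or has been garbage-collected.
    func (c *connSketch) get(oid zodb.Oid) zodb.IPersistent {
        c.mu.Lock()
        defer c.mu.Unlock()
        w := c.objtab[oid]
        if w == nil {
            return nil
        }
        obj, _ := w.Get().(zodb.IPersistent) // nil if target was collected
        return obj
    }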
-
Kirill Smelkov authored
We will need weak references to handle the {} oid -> obj map inside zodb.Connection. In the Go world it is often said that weak references are not needed at all. Please see, however, the next patch for a detailed rationale of why weak references (in other words, finalizers and cooperation from Go's GC) are _required_ in that case.
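The GC cooperation that weak references build on is runtime.SetFinalizer; a minimal self-contained demo of the mechanism (not the package's actual code):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    type obj struct{ oid uint64 }

    func main() {
        o := &obj{oid: 1}
        // the finalizer runs after GC observes there are no strong
        // references to o left
        runtime.SetFinalizer(o, func(o *obj) {
            fmt.Printf("obj %d is no longer strongly referenced\n", o.oid)
        })
        o = nil                      // drop the only strong reference
        runtime.GC()                 // request a collection
        time.Sleep(time.Millisecond) // finalizers run asynchronously
    }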
-
Kirill Smelkov authored
As promised in 354e0e51 (go/zodb: Persistent - the base type to implement IPersistent objects), add support to the persistency machinery for setting object state from Python pickles serialized by ZODB/py. Persistent references are not yet handled. Also as promised, add some very minimal persistency tests.
-
Kirill Smelkov authored
Currently we handle, in PyData.ClassName, the many ways ZODB could serialize a Python class. Since we'll be using this functionality in other places soon, extract it into a dedicated function. Since we will also be frequently using class.__module__ + "." + class.__name__, don't inline it in ClassName and instead put it into pyclassPath() right away.
-
Kirill Smelkov authored
Add the base type that types wishing to implement persistency can embed, and this way inherit persistent functionality. For example

    type MyObject struct {
        Persistent
        ...
    }

    type myObjectState MyObject

    func (o *myObjectState) DropState()                    { ... }
    func (o *myObjectState) SetState(state *mem.Buf) error { ... }

Here the state management methods (DropState and SetState) will be used automatically by the persistency machinery on activation and deactivation.

For this to work the MyObject class has to be registered to ZODB

    func init() {
        t := reflect.TypeOf
        zodb.RegisterClass("mymodule.MyObject", t(MyObject{}), t(myObjectState{}))
    }

and new instances of MyObject have to be created via zodb.NewPersistent:

    obj := zodb.NewPersistent(reflect.TypeOf(MyObject{}), jar).(*MyObject)

SetState corresponds to __setstate__ in Python. However, in the Go version it is explicitly separated from the class's public API, as it is the contract between a class and the persistency machinery, not between the class and its user.

Notice that SetState takes a raw buffer as its argument. In the following patch we'll add SetState's cousin (PySetState) that will take unpickled objects as the state - exactly how __setstate__ operates in Python. Classes will be able to choose whether to accept state as raw bytes or as a Python object.

Activation/deactivation is implemented via reference counting. Tests are pending (for PySetState).
-
Kirill Smelkov authored
Add to ZODB/go IPersistent - the interface that is used to represent in-RAM application-level objects mirroring objects in the database. The interface is modelled after Python's IPersistent

    https://github.com/zopefoundation/ZODB/blob/3.10.7-4-gb8d7a8567/src/persistent/interfaces.py#L22

but is not exactly equal to it. In particular we support concurrent access to an object from multiple goroutines simultaneously. Due to concurrency support there is no STICKY state, because STICKY is used in the CPython version to briefly pin an object in RAM and is not safe to use from multiple threads there. Correspondingly, the semantics of PActivate is a bit different from _p_activate - in Go, after an object has been activated, it is guaranteed that it will remain present in RAM until it is explicitly deactivated by the user. Please see the details of the activation protocol in the IPersistent documentation.

ZODB/py uses an interface (IDataManager) for a persistent object's jar, but in Go I decided, at least for now, to go without an explicit interface at that level. For this reason a concrete type - Connection - will be used, and so its stub is also introduced in this patch, since IPersistent wants to return the connection via PJar.
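A hedged sketch of the activation protocol from the caller's side (the PActivate signature with ctx and error is an assumption following from the concurrency support described above):

    import (
        "context"

        "lab.nexedi.com/kirr/neo/go/zodb"
    )

    // access uses obj under the activation protocol: between PActivate and
    // PDeactivate the object's state is guaranteed to stay in RAM.
    func access(ctx context.Context, obj zodb.IPersistent) error {
        if err := obj.PActivate(ctx); err != nil {
            return err // e.g. the load from the database failed
        }
        defer obj.PDeactivate()

        // ... work with the object's state ...
        return nil
    }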
-
Kirill Smelkov authored
We already have the functionality - just add an overview of how to implement drivers and how to use the most common ones.
-
Kirill Smelkov authored
Much text will be added to the package documentation, with new sections and per-section footnotes, so it is better to use a dedicated section for references instead of a global footnote whose context might become unclear.
-
Kirill Smelkov authored
As we are going to add another layer - the "Application layer" - to the zodb package, turn the previous text overviewing IStorage & friends into a "Storage layer" section.
-
Kirill Smelkov authored
We are too used to taking this for granted with ZODB, but this property of object databases is not universally available in other databases.
-
Kirill Smelkov authored
- change "types, interfaces and errors" to API in the header.
- it is not only the data model, but also the API, that we try to keep reasonably compatible with ZODB/py.
- an article before "the" transaction is better.
-
- 08 Aug, 2018 2 commits
-
-
Kirill Smelkov authored
Since 0 is a valid Oid that is used for the root database object, the zero Oid value cannot be used as an invalid oid. -> Provide an explicit invalid Oid constant. For symmetry do the same for Tid, even though the zero Tid value is an invalid Tid.
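A sketch of what the constants could look like; the all-ones values are an assumption for illustration - only the need for an explicit InvalidOid follows from the text:

    type Oid uint64
    type Tid uint64

    const (
        InvalidOid Oid = 1<<64 - 1 // explicit invalid Oid; Oid(0) is the valid root object
        InvalidTid Tid = 1<<64 - 1 // for symmetry, even though Tid(0) is already invalid
    )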
-
Kirill Smelkov authored
This should have been in 1f92a4e2 (go/zodb: Allow to open a storage in "direct" mode - without local cache).
-
- 07 Aug, 2018 1 commit
-
-
Kirill Smelkov authored
Add a Go counterpart of the Python transaction package [1,2]. The Go version is not complete - in particular Transaction.Commit is not yet implemented. However, even in this state it is useful to have transaction around for read-only transaction cases. The synchronization logic is better thought out - in particular there is no dance around "new transaction", because in Go, contrary to ZODB/py, ZODB connections will always be opened under an already started transaction. See the "Synchronization" section in the package documentation for details on this topic.

[1] http://transaction.readthedocs.org
[2] https://github.com/zopefoundation/transaction
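A hedged usage sketch for the read-only case (transaction.New returning the transaction together with a derived context is an assumption, as is the import path):

    import (
        "context"

        "lab.nexedi.com/kirr/neo/go/transaction"
    )

    // run executes f under a transaction. Only Abort is exercised here,
    // since Transaction.Commit is not implemented yet.
    func run(ctx context.Context, f func(context.Context) error) error {
        txn, ctx := transaction.New(ctx) // the transaction travels via ctx
        defer txn.Abort()                // read-only usage: always abort
        return f(ctx)
    }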
-
- 25 Jul, 2018 1 commit
-
-
Kirill Smelkov authored
Previously TxnComplete was typed, but TxnPacked and TxnInprogress were untyped constants. Make all the constants have the TxnStatus type.
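The shape of the change, sketched; the particular byte values mirror FileStorage transaction status characters but are shown only as an illustration:

    type TxnStatus byte

    // all status constants now carry the TxnStatus type explicitly
    const (
        TxnComplete   TxnStatus = ' ' // transaction is complete
        TxnPacked     TxnStatus = 'p' // transaction has been packed
        TxnInprogress TxnStatus = 'c' // transaction is not yet finished
    )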
-
- 20 Jul, 2018 4 commits
-
-
Kirill Smelkov authored
The reason for this is that the buffer can be shared with other loaders via the cache.
-
Kirill Smelkov authored
There are situations when we want to work with, or emulate, only a subset of the whole IStorage functionality, with one already established example being Cache. For this reason split the IStorage interface into finer-grained ones. For now:

- Loader
- Prefetcher
- Iterator
- Committer
- Notifier

A sketch of two of them is given below.
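A hedged sketch of the split (the method sets are assumptions; only the interface names come from the patch):

    import (
        "context"

        "lab.nexedi.com/kirr/go123/mem"
        "lab.nexedi.com/kirr/neo/go/zodb"
    )

    // Loader is the subset of IStorage needed to load objects.
    type Loader interface {
        Load(ctx context.Context, xid zodb.Xid) (buf *mem.Buf, serial zodb.Tid, err error)
    }

    // Prefetcher requests object data to be loaded into the cache in advance.
    type Prefetcher interface {
        Prefetch(ctx context.Context, xid zodb.Xid)
    }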
-
Kirill Smelkov authored
Oid(0) represents the root database object. We cannot change this.
-
Kirill Smelkov authored
Missed a few places in 8685b742.
-
- 11 Jul, 2018 11 commits
-
-
Kirill Smelkov authored
While at it, add a draft overview of the data model & friends to the package documentation.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
We send output from the tested process to master. We also print it to stdout/stderr so it appears in testnode logs. However, until now the whole output was first read, and only then emitted to the log as a whole, which did not allow overseeing current test progress by watching the testnode log tail. Fix it by implementing the teeing process manually.

Some draft history related to this patch:

    lab.nexedi.com/kirr/neo/commit/aa370ca3    fixup! X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
    lab.nexedi.com/kirr/neo/commit/096550b1    fixup! X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
    lab.nexedi.com/kirr/neo/commit/63956f43    fixup! X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
    lab.nexedi.com/kirr/neo/commit/b9819d0e    X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
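The general idea of incremental teeing, sketched in Go for illustration (runTestSuite itself is a Python program; names here are hypothetical):

    import "io"

    // tee copies r to both out and log chunk by chunk, so that progress is
    // visible in the log tail while the tested process is still running.
    func tee(r io.Reader, out, log io.Writer) error {
        buf := make([]byte, 4096)
        for {
            n, err := r.Read(buf)
            if n > 0 {
                out.Write(buf[:n]) // show progress immediately
                log.Write(buf[:n]) // and forward to the master log
            }
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
        }
    }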
-
Kirill Smelkov authored
The wrapper runs `neotest bench-local` in a neotest SlapOS instance:

    https://lab.nexedi.com/nexedi/slapos/blob/ab8705f4/software/neotest/instance.cfg.in

Extracted from nexedi/slapos!282
-
Kirill Smelkov authored
Add the program that reads results from either bench-local or bench-cluster neotest output and visualizes them. It uses the benchlib.py module to read data in the Go benchmark format (*), processes them and plots scalability and other graphs via matplotlib.

There are lots of hacks and rough edges, and in particular the callout coordinate calculation is completely wrong. However, even in this state benchplot was used to prepare the graphs in http://navytux.spb.ru/~kirr/neo.html and http://navytux.spb.ru/~kirr/misc/neo·P4.html .

Some draft history related to this patch:

    lab.nexedi.com/kirr/neo/commit/078c9ac3    X move benchlib to -> https://lab.nexedi.com/kirr/pygolang
    lab.nexedi.com/kirr/neo/commit/0edd5129    X benchplot: Teach it to understand benchmark names for partitioned NEO clusters
    lab.nexedi.com/kirr/neo/commit/a1dde3c9    X deco-rio timings
    lab.nexedi.com/kirr/neo/commit/916782b6    X normalize/convert units, so that disk and ping/tcp latencies could be plotted too
    lab.nexedi.com/kirr/neo/commit/f5fec740    X switch node info to labels; start adding that to plot
    lab.nexedi.com/kirr/neo/commit/906462a3    X neotest: Move cluster / node out fro benchmark name to label in environment
    lab.nexedi.com/kirr/neo/commit/cceca65f    X benchplot: Start of automated plotting for neotest benchmark data
    lab.nexedi.com/kirr/neo/commit/a9b10a45    X benchlib/benchstat: Emit label:value info for several labels on one line, similary to go version
    lab.nexedi.com/kirr/neo/commit/502d9477    X benchlib: Python module to read & work with data in Go benchmark format

(*) benchlib.py is now part of pygolang: https://pypi.org/project/pygolang .
-
Kirill Smelkov authored
These commands do full benchmarking for the localhost and networked cases:

- show system info
- do server & client cpu benchmarks
- do server disk benchmarks
- for the networked case: do network benchmarks
- tail to either zbench-local or zbench-cluster

It was the full `neotest bench-local` that was used to prepare the benchmarks for http://navytux.spb.ru/~kirr/neo.html and http://navytux.spb.ru/~kirr/misc/neo·P4.html
-
Kirill Smelkov authored
    name                 old time/op  new time/op  delta
    unzlib/py/wczdata    20.8µs ± 2%  20.7µs ± 1%      ~      (p=0.421 n=5+5)
    unzlib/go/wczdata    64.4µs ± 1%  21.3µs ± 0%   -66.89%   (p=0.008 n=5+5)
    unzlib/py/prod1-avg  4.00µs ± 1%  4.02µs ± 1%      ~      (p=0.421 n=5+5)
    unzlib/go/prod1-avg  10.4µs ± 1%   4.3µs ± 1%   -58.72%   (p=0.008 n=5+5)

There is also an unsafe interface with czlib.UnsafeDecompress & friends, which I had not tried because even using the safe interface brings a ~ 3x speedup.
-
Kirill Smelkov authored
    name                 old time/op  new time/op  delta
    unzlib/py/wczdata    20.7µs ± 2%  20.8µs ± 2%     ~      (p=0.548 n=5+5)
    unzlib/go/wczdata    70.6µs ± 0%  64.4µs ± 1%   -8.85%   (p=0.008 n=5+5)
    unzlib/py/prod1-avg  4.02µs ± 1%  4.00µs ± 1%     ~      (p=0.167 n=5+5)
    unzlib/go/prod1-avg  15.2µs ± 0%  10.4µs ± 1%  -31.59%   (p=0.008 n=5+5)

Still, on wczdata and prod1 it is much slower compared to py/c zlib.
-
Kirill Smelkov authored
NEO uses zlib compression for data, and this way the client has to spend time decompressing it. Benchmark how much time zlib decompression takes. With the stdlib zlib decompressor out of the box it looks like:

    name                 time/op
    unzlib/py/wczdata    20.7µs ± 2%
    unzlib/go/wczdata    70.6µs ± 0%
    unzlib/py/prod1-avg  4.02µs ± 1%
    unzlib/go/prod1-avg  15.2µs ± 0%

i.e. not at all in favour of Go. We'll be fixing that in the following patches.
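A hedged sketch of such a decompression benchmark with the stdlib compress/zlib (the payload is a stand-in; the real benchmarks used the wczdata and prod1 datasets):

    import (
        "bytes"
        "compress/zlib"
        "io"
        "testing"
    )

    // loadTestData returns a zlib-compressed stand-in payload.
    func loadTestData() []byte {
        var buf bytes.Buffer
        w := zlib.NewWriter(&buf)
        w.Write(make([]byte, 4096)) // illustrative data, not wczdata/prod1
        w.Close()
        return buf.Bytes()
    }

    // BenchmarkUnzlib measures one full decompression per iteration.
    func BenchmarkUnzlib(b *testing.B) {
        compressed := loadTestData()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            r, err := zlib.NewReader(bytes.NewReader(compressed))
            if err != nil {
                b.Fatal(err)
            }
            if _, err := io.Copy(io.Discard, r); err != nil {
                b.Fatal(err)
            }
            r.Close()
        }
    }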
-
Kirill Smelkov authored
zwrk is to ZODB what wrk is to HTTP. Rationale: simulating multiple clients is:

1. noisy - the timings from run to run change, sometimes by up to 50%;
2. burdened with significant additional overhead - there are constant OS-level process switches in between client processes, and this prevents actually creating the load;
3. the above load from "2" actually takes resources from the server in the localhost case.

So let's switch to simulating many requests in a lightweight way, similarly to how it is done in wrk: in one process with not so many threads (it can be just 1), with many connections opened to the server, and loading it in an epoll-like way, with Go providing the epoll-goroutine matching.

Example summarized zbench-local output:

    x/src/lab.nexedi.com/kirr/neo/go/neo/t$ benchstat -split node,cluster,dataset x.txt
    name                             time/object
    cluster:rio dataset:wczblk1-8
    fs1-zhash.py                     23.7µs ± 5%
    fs1-zhash.go                     5.68µs ± 8%
    fs1-zhash.go+prefetch128         6.44µs ±16%
    zeo/py/fs1-zhash.py               376µs ± 4%
    zeo/py/fs1-zhash.go               130µs ± 3%
    zeo/py/fs1-zhash.go+prefetch128  72.3µs ± 4%
    neo/py(!log)/sqlite·P1-zhash.py   565µs ± 4%
    neo/py(!log)/sql·P1-zhash.py      491µs ± 8%
    cluster:rio dataset:prod1-1024
    fs1-zhash.py                     19.5µs ± 2%
    fs1-zhash.go                     3.92µs ±12%
    fs1-zhash.go+prefetch128         4.42µs ± 6%
    zeo/py/fs1-zhash.py               365µs ± 9%
    zeo/py/fs1-zhash.go               120µs ± 1%
    zeo/py/fs1-zhash.go+prefetch128  68.4µs ± 3%
    neo/py(!log)/sqlite·P1-zhash.py   560µs ± 5%
    neo/py(!log)/sql·P1-zhash.py      482µs ± 8%

    name                             req/s
    cluster:rio dataset:wczblk1-8
    fs1-zwrk.go·1                     380k ± 2%
    fs1-zwrk.go·2                     666k ± 3%
    fs1-zwrk.go·3                     948k ± 1%
    fs1-zwrk.go·4                    1.24M ± 1%
    fs1-zwrk.go·8                    1.62M ± 0%
    fs1-zwrk.go·12                   1.70M ± 0%
    fs1-zwrk.go·16                   1.71M ± 0%
    zeo/py/fs1-zwrk.go·1             8.29k ± 1%
    zeo/py/fs1-zwrk.go·2             10.4k ± 2%
    zeo/py/fs1-zwrk.go·3             11.2k ± 1%
    zeo/py/fs1-zwrk.go·4             11.7k ± 1%
    zeo/py/fs1-zwrk.go·8             12.1k ± 2%
    zeo/py/fs1-zwrk.go·12            12.3k ± 1%
    zeo/py/fs1-zwrk.go·16            12.3k ± 2%
    cluster:rio dataset:prod1-1024
    fs1-zwrk.go·1                     594k ± 7%
    fs1-zwrk.go·2                    1.14M ± 4%
    fs1-zwrk.go·3                    1.60M ± 2%
    fs1-zwrk.go·4                    2.09M ± 1%
    fs1-zwrk.go·8                    2.74M ± 1%
    fs1-zwrk.go·12                   2.76M ± 0%
    fs1-zwrk.go·16                   2.76M ± 1%
    zeo/py/fs1-zwrk.go·1             9.42k ± 9%
    zeo/py/fs1-zwrk.go·2             10.4k ± 1%
    zeo/py/fs1-zwrk.go·3             11.4k ± 1%
    zeo/py/fs1-zwrk.go·4             11.7k ± 2%
    zeo/py/fs1-zwrk.go·8             12.4k ± 1%
    zeo/py/fs1-zwrk.go·12            12.5k ± 1%
    zeo/py/fs1-zwrk.go·16            13.4k ±11%

    name                             latency-time/object
    cluster:rio dataset:wczblk1-8
    fs1-zwrk.go·1                    2.63µs ± 2%
    fs1-zwrk.go·2                    3.00µs ± 3%
    fs1-zwrk.go·3                    3.16µs ± 1%
    fs1-zwrk.go·4                    3.23µs ± 1%
    fs1-zwrk.go·8                    4.94µs ± 0%
    fs1-zwrk.go·12                   7.06µs ± 0%
    fs1-zwrk.go·16                   9.36µs ± 0%
    zeo/py/fs1-zwrk.go·1              121µs ± 1%
    zeo/py/fs1-zwrk.go·2              192µs ± 2%
    zeo/py/fs1-zwrk.go·3              267µs ± 1%
    zeo/py/fs1-zwrk.go·4              343µs ± 1%
    zeo/py/fs1-zwrk.go·8              660µs ± 2%
    zeo/py/fs1-zwrk.go·12             977µs ± 1%
    zeo/py/fs1-zwrk.go·16            1.30ms ± 2%
    cluster:rio dataset:prod1-1024
    fs1-zwrk.go·1                    1.69µs ± 7%
    fs1-zwrk.go·2                    1.76µs ± 4%
    fs1-zwrk.go·3                    1.88µs ± 2%
    fs1-zwrk.go·4                    1.91µs ± 1%
    fs1-zwrk.go·8                    2.92µs ± 1%
    fs1-zwrk.go·12                   4.34µs ± 0%
    fs1-zwrk.go·16                   5.80µs ± 1%
    zeo/py/fs1-zwrk.go·1              107µs ± 9%
    zeo/py/fs1-zwrk.go·2              192µs ± 1%
    zeo/py/fs1-zwrk.go·3              263µs ± 1%
    zeo/py/fs1-zwrk.go·4              342µs ± 2%
    zeo/py/fs1-zwrk.go·8              648µs ± 1%
    zeo/py/fs1-zwrk.go·12             957µs ± 1%
    zeo/py/fs1-zwrk.go·16            1.20ms ±10%

The scalability graphs in http://navytux.spb.ru/~kirr/neo.html were made with client load simulated by zwrk, not by many client OS processes. http://navytux.spb.ru/~kirr/neo.html#performance-tests has some additional notes on zwrk.

Some draft history related to this patch:

    lab.nexedi.com/kirr/neo/commit/ca0d828b    X neotest: Tzwrk1 - place to control running time of 1 zwrk iteration
    lab.nexedi.com/kirr/neo/commit/bbfb5006    X zwrk: Make sure we warm up connections to all NEO storages when cluster is partitioned
    lab.nexedi.com/kirr/neo/commit/7f22bba6    X zwrk: New tool to simulate paralell load from multiple clients
-
Kirill Smelkov authored
zodb/go provides a generic cache (see 7233b4c0 "zodb/go: In-RAM client cache"), primarily in order for prefetch to work. However, if we need to benchmark a storage while loading some objects several times, this cache can hide the actual time it takes for an object to load. For such use cases add a NoCache open option, so that opening does not create a cache and load operations are always conveyed directly to the storage driver. The option will be used by the zwrk tool (see next patch).
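A hedged sketch of using the option (the OpenOptions field layout is an assumption based on the text above):

    import (
        "context"

        "lab.nexedi.com/kirr/neo/go/zodb"
    )

    // openDirect opens a storage without the in-RAM cache, so that load
    // timings reflect the storage driver directly.
    func openDirect(ctx context.Context, zurl string) (zodb.IStorage, error) {
        return zodb.OpenStorage(ctx, zurl, &zodb.OpenOptions{
            ReadOnly: true,
            NoCache:  true, // convey loads directly to the driver
        })
    }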
-