- 17 Dec, 2020 1 commit
Kirill Smelkov authored
See go123@3354b401 and go123@b03d65ff. The wrapping logic in LinkListener goes away because Accept from xnet now supports cancellation.
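A minimal Go sketch of the resulting pattern (interface and names are illustrative; the real listener interface lives in lab.nexedi.com/kirr/go123/xnet): with a context-aware Accept, an accept loop needs no wrapper goroutine to watch for shutdown.

package example

import (
	"context"
	"fmt"
	"net"
)

// ctxListener mimics an xnet-style listener whose Accept takes a context.
type ctxListener interface {
	Accept(ctx context.Context) (net.Conn, error)
	Close() error
}

// serve accepts connections until ctx is canceled. Cancellation is handled
// by Accept itself, so no extra wrapping logic is needed.
func serve(ctx context.Context, l ctxListener) error {
	for {
		conn, err := l.Accept(ctx)
		if err != nil {
			if ctx.Err() != nil {
				return ctx.Err() // canceled -> clean shutdown
			}
			return fmt.Errorf("accept: %w", err)
		}
		go func() {
			defer conn.Close()
			// ... serve the link ...
		}()
	}
}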
- 30 Nov, 2020 4 commits
Kirill Smelkov authored
It was not the case with the ZEO4 server because we did not have ZEO@bf80d23d yet.
Kirill Smelkov authored
For ZEO it is not strictly required, but for upcoming NEO, for example, NEOCluster needs to shut down gracefully; otherwise there are leftover processes, e.g. storage nodes, and they dump something like the following on the terminal after tests complete:

=== RUN   TestLoad
2020/10/21 14:33:00 zodb: FIXME: open ../zodb/storage/fs1/testdata/1.fs: raw cache is not ready for invalidations -> NoCache forced
=== RUN   TestLoad/py
I: runneo.py: /tmp/neo445013868/1: Started master(s): 127.0.0.1:24661
WARNING: This is not the recommended way to import data to NEO: you should use the Importer backend instead. NEO also does not implement IStorageRestoreable interface, which means that undo information is not preserved when using this tool: conflict resolution could happen when undoing an old transaction.
Migrating from ../zodb/storage/fs1/testdata/1.fs to 127.0.0.1:24661
Migration done in 0.19877
--- PASS: TestLoad (0.75s)
    --- PASS: TestLoad/py (0.74s)
PASS
ok      lab.nexedi.com/kirr/neo/go/neo  0.749s
(neo) (z-dev) (g.env) kirr@deco:~/src/neo/src/lab.nexedi.com/kirr/neo/go/neo$ Traceback (most recent call last):
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/tests/functional/__init__.py", line 182, in start
    getattr(neo.scripts, command).main()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/scripts/neostorage.py", line 66, in main
    app.run()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/app.py", line 147, in run
    self._run()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/app.py", line 178, in _run
    self.doOperation()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/app.py", line 266, in doOperation
    poll()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/app.py", line 87, in _poll
    self.em.poll(1)
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/lib/event.py", line 155, in poll
    self._poll(blocking)
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/lib/event.py", line 253, in _poll
    timeout_object.onTimeout()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/lib/event.py", line 259, in onTimeout
    on_timeout()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/database/manager.py", line 207, in _deferredCommit
    self.commit()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/database/manager.py", line 193, in commit
    self._commit()
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/database/sqlite.py", line 90, in _commit
    retry_if_locked(self.conn.commit)
  File "/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/neo/storage/database/sqlite.py", line 45, in retry_if_locked
    return f(*args)
OperationalError: disk I/O error
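A minimal Go sketch of the kind of teardown such a test harness needs (names are illustrative, not the actual NEOCluster API): every spawned node process must be stopped and reaped before the test binary exits, so no storage node outlives the test and fails later on disk access.

package example

import (
	"os/exec"
	"syscall"
	"testing"
)

// startNode launches one cluster node process and registers cleanup that
// shuts it down gracefully when the test finishes.
func startNode(t *testing.T, name string, args ...string) *exec.Cmd {
	t.Helper()
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() {
		_ = cmd.Process.Signal(syscall.SIGTERM) // ask the node to stop
		_ = cmd.Wait()                          // reap it so nothing is left behind
	})
	return cmd
}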
Kirill Smelkov authored
And use it to verify other storages.
Kirill Smelkov authored
See nexedi/nxdtest@53064e71
- 05 Nov, 2020 4 commits
Kirill Smelkov authored
Go through zrpc and add error context to every function that can return an error. This is similar to what NEO does in neonet. The reason to do this was the following obscure error from wcfs:

E1105 11:32:33.295497 24639 wcfs.go:2505] zwatch zeo://:28359: EOF

which, after this patch, becomes:

E1105 12:53:15.922052 30731 wcfs.go:2510] zwatch zeo://:27024: zlink 127.0.0.1:47768 - 127.0.0.1:27024: recvPkt: EOF
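A minimal Go sketch of the decoration pattern (the zLink fields and the use of plain fmt.Errorf are illustrative; the real code may use its own error-context helpers): each layer names itself, so a bare EOF grows into "zlink A - B: recvPkt: EOF" by the time it surfaces.

package example

import (
	"fmt"
	"io"
)

// zLink is an illustrative stand-in for the real zrpc link type.
type zLink struct {
	local, remote string
	r             io.Reader
}

// recvPkt reads one packet header; on failure the error names the link and
// the operation instead of returning a bare EOF.
func (zl *zLink) recvPkt() (pkt []byte, err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("zlink %s - %s: recvPkt: %w", zl.local, zl.remote, err)
		}
	}()

	hdr := make([]byte, 4)
	if _, err = io.ReadFull(zl.r, hdr); err != nil {
		return nil, err
	}
	// ... read the payload whose length is encoded in hdr ...
	return hdr, nil
}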
Kirill Smelkov authored
NEO was already doing this, but let's use xio.NoEOF everywhere for uniformity.
Kirill Smelkov authored
If the remote peer closes the link halfway through packet data, turn EOF into ErrUnexpectedEOF.
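A minimal Go sketch of the idea behind a NoEOF-style helper (this reimplementation is illustrative, not the project's actual xio code): once a packet header has been read, hitting EOF inside the payload is not a clean end of stream, so it is reported as io.ErrUnexpectedEOF.

package example

import "io"

// noEOF converts io.EOF to io.ErrUnexpectedEOF. Use it for reads that happen
// after a message was already started, where a clean EOF is impossible.
func noEOF(err error) error {
	if err == io.EOF {
		return io.ErrUnexpectedEOF
	}
	return err
}

// readPayload reads n payload bytes following an already-received header.
func readPayload(r io.Reader, n int) ([]byte, error) {
	buf := make([]byte, n)
	_, err := io.ReadFull(r, buf) // returns io.EOF only if 0 bytes were read
	if err != nil {
		return nil, noEOF(err)
	}
	return buf, nil
}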
Kirill Smelkov authored
We will need to use those utilities for ZEO and NEO.
- 04 Nov, 2020 4 commits
Kirill Smelkov authored
We recently hit "decode: buffer overflow" errors due to a mismatch between NEO/go and NEO/py (see previous patch). With dumpio=true that problem was showing itself as:

127.0.0.1:50624 > 127.0.0.1:41863: .5 GetObject &{0000000000000000 ffffffffffffffff 0285cbac258bf266}
127.0.0.1:50624 < 127.0.0.1:41863: .5 (proto.Error) decode: buffer overflow; #24 [24]: 00 00 00 03 00 00 00 10 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 Error ACK ; [24]tail: 00 00 00 03 00 00 00 10 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30

        xtesting.go:306: load 0285cbac258bf265:0000000000000000: returned err unexpected:
            have: neo://1@127.0.0.1:24078: load 0285cbac258bf265:0000000000000000: 127.0.0.1:50624 - 127.0.0.1:41863 .5: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:24078: load 0285cbac258bf265:0000000000000000: 0000000000000000: object was not yet created

Here, after printing the error and dumping all bytes from the packet payload, it was again printing the message type, the message value from the to-be-decoded location (which is zero-initialized), and the data bytes.

-> Fix it to not print anything after the dump of the payload data:

127.0.0.1:60518 > 127.0.0.1:46719: .5 GetObject &{0000000000000000 ffffffffffffffff 0285cbac258bf266}
127.0.0.1:60518 < 127.0.0.1:46719: .5 (Error) decode: buffer overflow; #24 [24]: 00 00 00 03 00 00 00 10 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30

        xtesting.go:306: load 0285cbac258bf265:0000000000000000: returned err unexpected:
            have: neo://1@127.0.0.1:27853: load 0285cbac258bf265:0000000000000000: 127.0.0.1:60518 - 127.0.0.1:46719 .5: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:27853: load 0285cbac258bf265:0000000000000000: 0000000000000000: object was not yet created
Kirill Smelkov authored
In commit 5f13cc85 I changed enum encodings from int32 to int8, but did not notice that NEO/py commit 52db5607 ("protocol: a single byte is more than enough to encode enums"), despite its stated intent and ErrorCodes being marked with @Enum, changed the encoding only for fields that are marked as PEnum in structures. In NEO/py the Error.code field is still marked as PNumber, which encodes as a 32-bit integer on the wire.

-> Fix it back.

With the recent xtesting.DrvTestLoad update this error was manifesting itself as (on the @t branch):

--- FAIL: TestLoad (2.08s)
    --- FAIL: TestLoad/py (2.07s)
        xtesting.go:306: load 0285cbac258bf265:0000000000000000: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac258bf265:0000000000000000: 127.0.0.1:42288 - 127.0.0.1:37109 .5: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac258bf265:0000000000000000: 0000000000000000: object was not yet created
        xtesting.go:306: load 0285cbac3d0369e5:0000000000000001: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac3d0369e5:0000000000000001: 127.0.0.1:42288 - 127.0.0.1:37109 .13: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac3d0369e5:0000000000000001: 0000000000000001: object was not yet created
        xtesting.go:306: load 0285cbac41b4e832:0000000000000002: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac41b4e832:0000000000000002: 127.0.0.1:42288 - 127.0.0.1:37109 .21: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac41b4e832:0000000000000002: 0000000000000002: object was not yet created
        xtesting.go:306: load 0285cbac4666667f:0000000000000003: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac4666667f:0000000000000003: 127.0.0.1:42288 - 127.0.0.1:37109 .29: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac4666667f:0000000000000003: 0000000000000003: object was not yet created
        xtesting.go:306: load 0285cbac4fc96318:0000000000000004: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac4fc96318:0000000000000004: 127.0.0.1:42288 - 127.0.0.1:37109 .41: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac4fc96318:0000000000000004: 0000000000000004: object was not yet created
        xtesting.go:306: load 0285cbac547ae165:0000000000000005: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac547ae165:0000000000000005: 127.0.0.1:42288 - 127.0.0.1:37109 .49: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac547ae165:0000000000000005: 0000000000000005: object was not yet created
        xtesting.go:306: load 0285cbac628f5c4b:0000000000000006: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbac628f5c4b:0000000000000006: 127.0.0.1:42288 - 127.0.0.1:37109 .65: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbac628f5c4b:0000000000000006: 0000000000000006: object was not yet created
        xtesting.go:306: load 0285cbaca444447f:0000000000000007: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbaca444447f:0000000000000007: 127.0.0.1:42288 - 127.0.0.1:37109 .125: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbaca444447f:0000000000000007: 0000000000000007: object was not yet created
        xtesting.go:306: load 0285cbacbbbbbbff:0000000000000008: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbacbbbbbbff:0000000000000008: 127.0.0.1:42288 - 127.0.0.1:37109 .149: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbacbbbbbbff:0000000000000008: 0000000000000008: object was not yet created
        xtesting.go:306: load 0285cbad80da7498:0000000000000009: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 0285cbad80da7498:0000000000000009: 127.0.0.1:42288 - 127.0.0.1:37109 .269: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 0285cbad80da7498:0000000000000009: 0000000000000009: object was not yet created
        xtesting.go:331: load 7fffffffffffffff:000000000000000a: returned err unexpected:
            have: neo://1@127.0.0.1:32731: load 7fffffffffffffff:000000000000000a: 127.0.0.1:42288 - 127.0.0.1:37109 .295: decode: decode: buffer overflow
            want: neo://1@127.0.0.1:32731: load 7fffffffffffffff:000000000000000a: 000000000000000a: no such object
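A minimal Go sketch of the wire-level difference (illustrative, not the generated codec): decoding a field with a different width than the peer encoded leaves all later fields misaligned and can run past the end of the payload, which is exactly what surfaces as "decode: buffer overflow".

package example

import "encoding/binary"

// encodeErrCodeInt32 is the form NEO/py actually uses for Error.code
// (PNumber -> 32-bit big-endian integer).
func encodeErrCodeInt32(code int32) []byte {
	b := make([]byte, 4)
	binary.BigEndian.PutUint32(b, uint32(code))
	return b
}

// encodeErrCodeInt8 is the 1-byte form the broken decoder assumed.
func encodeErrCodeInt8(code int8) []byte {
	return []byte{byte(code)}
}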
Kirill Smelkov authored
Before this patch, and with the updated DrvTestLoad (see previous patch), it was failing as:

--- FAIL: TestLoad (0.52s)
    --- FAIL: TestLoad/py/msgpack=false (0.25s)
        xtesting.go:306: load 0285cbac258bf265:0000000000000000: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac258bf265:0000000000000000: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac258bf265:0000000000000000: 0000000000000000: no such object
        xtesting.go:306: load 0285cbac3d0369e5:0000000000000001: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac3d0369e5:0000000000000001: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac3d0369e5:0000000000000001: 0000000000000001: no such object
        xtesting.go:306: load 0285cbac41b4e832:0000000000000002: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac41b4e832:0000000000000002: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac41b4e832:0000000000000002: 0000000000000002: no such object
        xtesting.go:306: load 0285cbac4666667f:0000000000000003: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac4666667f:0000000000000003: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac4666667f:0000000000000003: 0000000000000003: no such object
        xtesting.go:306: load 0285cbac4fc96318:0000000000000004: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac4fc96318:0000000000000004: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac4fc96318:0000000000000004: 0000000000000004: no such object
        xtesting.go:306: load 0285cbac547ae165:0000000000000005: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac547ae165:0000000000000005: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac547ae165:0000000000000005: 0000000000000005: no such object
        xtesting.go:306: load 0285cbac628f5c4b:0000000000000006: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac628f5c4b:0000000000000006: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbac628f5c4b:0000000000000006: 0000000000000006: no such object
        xtesting.go:306: load 0285cbaca444447f:0000000000000007: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbaca444447f:0000000000000007: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbaca444447f:0000000000000007: 0000000000000007: no such object
        xtesting.go:306: load 0285cbacbbbbbbff:0000000000000008: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbacbbbbbbff:0000000000000008: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbacbbbbbbff:0000000000000008: 0000000000000008: no such object
        xtesting.go:306: load 0285cbad80da7498:0000000000000009: returned err unexpected:
            have: /tmp/zeo535364855/1.fs.zeosock: load 0285cbad80da7498:0000000000000009: /tmp/zeo535364855/1.fs.zeosock: call loadBefore: unexpected reply: got ogórek.None{}; expect 3-tuple
            want: /tmp/zeo535364855/1.fs.zeosock: load 0285cbad80da7498:0000000000000009: 0000000000000009: no such object
    --- FAIL: TestLoad/py/msgpack=true (0.26s)
        xtesting.go:306: load 0285cbac258bf265:0000000000000000: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac258bf265:0000000000000000: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac258bf265:0000000000000000: 0000000000000000: no such object
        xtesting.go:306: load 0285cbac3d0369e5:0000000000000001: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac3d0369e5:0000000000000001: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac3d0369e5:0000000000000001: 0000000000000001: no such object
        xtesting.go:306: load 0285cbac41b4e832:0000000000000002: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac41b4e832:0000000000000002: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac41b4e832:0000000000000002: 0000000000000002: no such object
        xtesting.go:306: load 0285cbac4666667f:0000000000000003: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac4666667f:0000000000000003: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac4666667f:0000000000000003: 0000000000000003: no such object
        xtesting.go:306: load 0285cbac4fc96318:0000000000000004: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac4fc96318:0000000000000004: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac4fc96318:0000000000000004: 0000000000000004: no such object
        xtesting.go:306: load 0285cbac547ae165:0000000000000005: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac547ae165:0000000000000005: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac547ae165:0000000000000005: 0000000000000005: no such object
        xtesting.go:306: load 0285cbac628f5c4b:0000000000000006: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac628f5c4b:0000000000000006: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbac628f5c4b:0000000000000006: 0000000000000006: no such object
        xtesting.go:306: load 0285cbaca444447f:0000000000000007: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbaca444447f:0000000000000007: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbaca444447f:0000000000000007: 0000000000000007: no such object
        xtesting.go:306: load 0285cbacbbbbbbff:0000000000000008: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbacbbbbbbff:0000000000000008: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbacbbbbbbff:0000000000000008: 0000000000000008: no such object
        xtesting.go:306: load 0285cbad80da7498:0000000000000009: returned err unexpected:
            have: /tmp/zeo794664426/1.fs.zeosock: load 0285cbad80da7498:0000000000000009: /tmp/zeo794664426/1.fs.zeosock: call loadBefore: zlink is closed
            want: /tmp/zeo794664426/1.fs.zeosock: load 0285cbad80da7498:0000000000000009: 0000000000000009: no such object
Kirill Smelkov authored
This currently fails on ZEO and NEO. Those storages will be fixed in the following patches (NEO only on @t branch for now).
- 03 Nov, 2020 2 commits
Kirill Smelkov authored
Starting from Go 1.10 `go test` caches successful test results [1]. However, we want to torture the implementation and run the tests for real each time `neotest test-go` is run.

-> Disable test caching.

[1] https://golang.org/doc/go1.10#test
Kirill Smelkov authored
See:

- https://lab.nexedi.com/nexedi/nxdtest
- https://lab.nexedi.com/nexedi/nxdtest/blob/master/nxdtest/__init__.py
- nexedi/nxdtest@d575236a
- nexedi/slapos!839

-> Leave only the .nxdtest file with neotest-specific bits to be run by the nxdtest driver.
- 01 Nov, 2020 22 commits
Kirill Smelkov authored
* origin/old-proto:
  qa: skip broken ZODB test
  client: fix race with invalidations when starting a new transaction on ZODB 5
  Code clean-up, comment fixes
  master: fix crash in STARTING_BACKUP when connecting to an upstream secondary master
  mysql: workaround for MDEV-20693
  client: inline Application._loadFromCache
  client: replace global load lock by a per-oid one
  client: unindent code
  client: remove load lock in tpc_finish
  qa: check cache in testExternalInvalidation
  qa: comment testExternalInvalidation2
Kirill Smelkov authored
Kirill Smelkov authored
This protocol version corresponds to the protocol version used by NEO/py v1.12 and was set in NEO/py commit c6453626 (Bump protocol version). The protocol definition was updated to match that NEO/py release in the previous patches.
Kirill Smelkov authored
This corresponds to NEO/py commit 2a27239d (tweak: add option to simulate).
Kirill Smelkov authored
This corresponds to NEO/py commit c2c9e99d (Better error reporting from the master to neoctl for denied requests).
Kirill Smelkov authored
This corresponds to NEO/py commit 21190ee7 (Make 'neoctl print pt' report the number of replicas).
Kirill Smelkov authored
This corresponds to NEO/py commit ef5fc508 (Make the number of replicas modifiable when the cluster is running). One important change in the protocol is that the Client no longer queries the Master for the partition table - instead, M pushes partTab to C right after identification (after pushing nodeTab). See also: https://neo.nexedi.com/P-NEO-Protocol.Specification.2019?portal_skin=CI_slideshow#/9/5
Kirill Smelkov authored
This corresponds to NEO/py commit 27e3f620 (New --new-nid storage option for fast cloning).
Kirill Smelkov authored
NEO 1.12

* tag 'v1.12': (28 commits)
  Release version 1.12
  master: reject drop/tweak ctl commands that could lead to unwanted status
  qa: extend test reproducing the migration of a big ZODB to NEO
  neoctl: better display of full partition tables
  Bump protocol version
  tweak: add option to simulate
  tweak: do not crash when trying to remove all nodes
  tweak: do not touch cells of nodes that are intended to be dropped
  Better error reporting from the master to neoctl for denied requests
  Make 'neoctl print pt' report the number of replicas
  Make the number of replicas modifiable when the cluster is running
  New --new-nid storage option for fast cloning
  qa: fix 2 tests with ZODB5
  qa: new tools/stress options to evaluate MySQL engines
  qa: provide a way to let tests start 1 mysqld per storage node
  mysql: make 'user' actually optional in the DB connection string
  mysql: specify column families for RocksDB
  qa: add testIncremental (testImporter) test
  importer: fix hidden "maximum recursion depth exceeded" at startup
  importer: fix closure of ZODB, and also do it when the import is finished
  sqlite: fix resumption of migration to NEO with Importer
  qa: fix a random failure in threaded tests
  importer: speed up startup when the import is already finished
  importer: fix replication (as source) once import is finished
  storage: fix DatabaseManager.getLastTID with max_tid
  qa: remove 2 useless unit tests
  storage: allow the master to change our node id
  Rename --uuid command-line options into --nid
  importer: fix possible data loss on writeback
Kirill Smelkov authored
Kirill Smelkov authored
This protocol version corresponds to the protocol version used by NEO/py v1.11 and was set in NEO/py commit 9a5b46dd (Bump protocol version). The protocol definition was updated to match that NEO/py release in the previous patch.
Kirill Smelkov authored
This corresponds to NEO/py commit 64826794 (New neoctl command to flush the logs of all nodes in the cluster).
Kirill Smelkov authored
NEO 1.11

* tag 'v1.11': (52 commits)
  Release version 1.11
  Fix short descriptions of neoctl & neomigrate in their headers
  Update copyright year
  qa: new tool to stress-test NEO
  master: fix typo in comment
  Fix error handling when setting up a listening connector
  Fix incomplete/incorrect mapping of node ids in logs
  Fix log corruption on rotation in multi-threaded applications (e.g. client)
  sqlite: optimize storage of metadata
  neolog: do not die when a table is corrupted
  neolog: add support for zstd-compressed logs
  neolog: do not hardcode default value of -L option in help message
  fixup! New log format to show node id (and optionally cluster name) in node column
  New log format to show node id (and optionally cluster name) in node column
  fixup! client: discard late answers to lockless writes
  client: fix race condition between Storage.load() and invalidations
  client: fix race condition in refcounting dispatched answer packets
  More RTMIN+2 (log) information for clients and connections
  storage: check for conflicts when notifying that a partition is replicated
  storage: clarify several assertions
  qa: new expectedFailure testcase method
  client: merge ConnectionPool inside Application
  client: prepare merge of ConnectionPool inside Application
  client: fix AssertionError when trying to reconnect too quickly after an error
  qa: fix attributeTracker
  storage: fix storage leak when an oid is stored several times within a transaction
  client: discard late answers to lockless writes
  qa: in threaded tests, log queued packets when "tic is looping forever"
  In logs, dump the partition table in a more compact and readable way
  storage: fix write-locking bug when a deadlock happens at the end of a replication
  client: log_flush most exceptions raised from Application to ZODB
  client: fix assertion failure in case of conflict + storage disconnection
  client: simplify connection management in transaction contexts
  client: also vote to nodes that only check serials
  qa: deindent code
  Bump protocol version
  client: fix undetected disconnections to storage nodes during commit
  Fix data corruption due to undetected conflicts after storage failures
  master: notify replicating nodes of aborted watched transactions
  New neoctl command to flush the logs of all nodes in the cluster
  storage: fix premature write-locking during rebase when replication ends
  client: fix race condition when a storage connection is closed just after identification
  storage: relax assertion
  comments, unused import
  storage: fix write-lock leak
  client: fix possible corruption in case of network failure with a storage
  qa: comment about potential freeze when a functional test ends
  storage: fix assertion failure in case of connection reset with a client node
  qa: document a rare random failure in testExport
  debug: add script to trace all accesses to the client cache
  Use argparse instead of optparse
  neolog: use argparse instead of optparse
  Add comment about dormant bug when sending a lot of data to a slow node
  client: make clearer that max_size attribute is used from outside ClientCache
Kirill Smelkov authored
Kirill Smelkov authored
This protocol version corresponds to the protocol version used by NEO/py v1.10. The protocol definition was updated to match that NEO/py release in the previous patches.
Kirill Smelkov authored
This corresponds to NEO/py commit 97af23cc (Maximize resiliency by taking into account the topology of storage nodes).
Kirill Smelkov authored
- Rename GetObject .Tid -> .Before
- Rename GetObject .Serial -> .At
- Sync docstrings

This corresponds to NEO/py commit 9f0f2afe (protocol: update packet docstrings).
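A minimal Go sketch of the renamed request (types and the exact field set are illustrative, not the real generated message): the rename follows ZODB load terminology, where a load is either at an exact serial or before a given tid.

package example

// Oid and Tid stand in for the real zodb types.
type Oid uint64
type Tid uint64

// GetObject asks a storage for object data, either the revision at an exact
// serial (At) or the last revision before a given tid (Before).
type GetObject struct {
	Oid    Oid
	Before Tid // formerly .Tid
	At     Tid // formerly .Serial
}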
Kirill Smelkov authored
This corresponds to NEO/py commit 52db5607 ("protocol: a single byte is more than enough to encode enums").
Kirill Smelkov authored
Don't skip a code when going request1->request2 through `Request1 Answer1 Request2`.

For example, before this patch:

1 RequestIdentification
1 | answerBit AcceptIdentification
3 Ping
3 | answerBit Pong
...

after this patch:

1 RequestIdentification
1 | answerBit AcceptIdentification
2 Ping
2 | answerBit Pong
...

This corresponds to NEO/py commit a00ab78b ("protocol: small cleanup in packet registration").
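A minimal Go sketch of the numbering rule (illustrative registry, including the answerBit value): an answer reuses its request's code with the answer bit set, so the next request takes code+1 rather than code+2.

package example

// answerBit is illustrative; the real flag value may differ.
const answerBit = 0x8000

// assignCodes numbers messages in registration order; isAnswer[i] tells
// whether message i is the answer to the preceding request.
func assignCodes(isAnswer []bool) []uint16 {
	codes := make([]uint16, len(isAnswer))
	code := uint16(0)
	for i, ans := range isAnswer {
		if ans {
			codes[i] = (code - 1) | answerBit // same code as its request
		} else {
			codes[i] = code
			code++ // next request gets the next code - no gaps
		}
	}
	return codes
}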
Kirill Smelkov authored
Corresponds to NEO/py commit b3dd6973 ("Optimize resumption of replication by starting from a greater TID").
Kirill Smelkov authored
Corresponds to NEO/py commit 3efbbfe3 ("master: automatically discard feeding cells that get out-of-date").
Kirill Smelkov authored
NEO 1.10

* tag 'v1.10': (55 commits)
  Release version 1.10
  Maximize resiliency by taking into account the topology of storage nodes
  storage: also commit updated cell TID at each replicated chunk of 'obj' records
  storage: skip useless work when unlocking transactions
  qa: flush logs at the end of each test when -L is not used
  qa: add a log in case that a mysterious bug happens again
  storage: clarify log about data deletion of discarded cells
  debug: new example to run the profiler for 1 minute
  mysql: fix replication of big oids (> 16M)
  tests/cluster: speedup waiting a bit
  protocol: update packet docstrings
  Bump protocol version
  protocol: a single byte is more than enough to encode enums
  protocol: small cleanup in packet registration
  Optimize resumption of replication by starting from a greater TID
  importer: update comment about a workaround for ZODB3
  Micro-optimization of p64/u64
  qa: add a log in testBackupNodeLost for easier debugging
  Document that the bug when checking replicas may also cause the master to crash
  storage: stop logging 'Abort TXN' for txn that have been locked
  storage: split _migrate2() for reusable _alterTable()
  qa: new testStorageUpgrade
  qa: update testStorageUpgrade data for what is not automatically upgraded
  qa: original data for the future testStorageUpgrade
  sqlite: fix indexes of upgraded db
  importer: fix NameError when recovering during tpc_finish
  fixup! importer: fetch and process the data to import in a separate process
  Serialize empty transaction extension with an empty string
  client: fix partial import from a source storage
  qa: give a title to subprocesses of functional tests
  importer: give a title to the 'import' and 'writeback' subprocesses
  importer: fetch and process the data to import in a separate process
  importer: new option to write back new transactions to the source database
  importer: log when the transaction index for FileStorage DB is built
  importer: open imported zodb in read-only whenever possible
  fixup! mysql: fix remaining places where a server disconnection was not catched
  fixup! storage: speed up replication by sending bigger network packets
  mysql: do not full-scan for duplicates of big oids if deduplication is disabled
  mysql: fix remaining places where a server disconnection was not catched
  fixup! Add support for custom compression levels
  importer: reenable compression by default
  qa: review testImporter
  qa: remove a few uses of 'chr'
  Fix a few issues with ZODB5
  importer: small code cleanup in speedupFileStorageTxnLookup patch
  importer: do not trigger speedupFileStorageTxnLookup uselessly
  Add support for custom compression levels
  setup: update MANIFEST.in
  importer: do not checksum data twice
  client: store uncompressed if compressed size is equal
  fixup! master: automatically discard feeding cells that get out-of-date
  master: automatically discard feeding cells that get out-of-date
  qa: remove useless indentation in testSafeTweak
  bench: new option to mesure ZEO perfs in matrix test
  bench: reduce number of partitions in matrix test
  storage: fix replication of creation undone
- 16 Oct, 2020 1 commit
Kirill Smelkov authored
ZEO4 does not have msgpack support and does not take $ZEO_MSGPACK into account. With ZEO4 this test was failing before:

--- FAIL: TestHandshake (0.46s)
    --- FAIL: TestHandshake/py/msgpack=true (0.24s)
        zeo_test.go:241: handshake: encoding=Z ; want M

We don't have infrastructure to check Python package versions, so check it by verifying ZEO.asyncio presence.
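A minimal Go sketch of such a probe (an illustrative helper; the test's actual check may differ in details): ZEO >= 5 is detected by whether the ZEO.asyncio package imports at all.

package example

import "os/exec"

// haveZEO5 reports whether the installed ZEO has the asyncio machinery,
// i.e. whether it is ZEO version 5 or later.
func haveZEO5() bool {
	return exec.Command("python", "-c", "import ZEO.asyncio").Run() == nil
}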
- 12 Oct, 2020 1 commit
Kirill Smelkov authored
In that case at0 was initialized as 0 and was still considered uninitialized by flushEventq0:

(neo) (z-dev) (g.env) kirr@deco:~/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo$ go test -run Empty
------ 2020-10-12T07:39:25 INFO ZEO.runzeo (146240) opening storage '1' using FileStorage
------ 2020-10-12T07:39:25 INFO ZEO.StorageServer StorageServer created RW with storages: 1:RW:/tmp/zeo905263273/1.fs
------ 2020-10-12T07:39:25 INFO ZEO.asyncio.server listening on /tmp/zeo905263273/1.fs.zeosock
------ 2020-10-12T07:39:25 INFO ZEO.asyncio.base Connected server protocol
------ 2020-10-12T07:39:25 INFO ZEO.asyncio.server received handshake 'Z5'
2020/10/12 07:39:25 /tmp/zeo905263273/1.fs.zeosock: EOF
--- FAIL: TestEmptyDB (0.22s)
    --- FAIL: TestEmptyDB/py/msgpack=false (0.22s)
panic: flush, but .at0 not yet initialized [recovered]
	panic: flush, but .at0 not yet initialized

goroutine 7 [running]:
testing.tRunner.func1.1(0x644a60, 0x6e1a50)
	/home/kirr/src/tools/go/go/src/testing/testing.go:1072 +0x30d
testing.tRunner.func1(0xc000001e00)
	/home/kirr/src/tools/go/go/src/testing/testing.go:1075 +0x41a
panic(0x644a60, 0x6e1a50)
	/home/kirr/src/tools/go/go/src/runtime/panic.go:969 +0x175
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.(*zeo).flushEventq0(0xc00018a000)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo.go:180 +0xf3
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.openByURL(0x6e9ca0, 0xc000016108, 0xc000138120, 0xc000153d98, 0x0, 0x0, 0x0, 0x0, 0x0)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo.go:488 +0x5ba
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.zeoOpen(0xc000018740, 0x1e, 0xc000049d98, 0x0, 0x0, 0x0, 0x0)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo_test.go:285 +0x17b
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.withZEO.func1(0xc000001e00, 0x6e9ea0, 0xc00005e6c0)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo_test.go:219 +0xd0
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.withZEOSrv.func2.1(0xc0000185c0, 0x16)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo_test.go:205 +0xfb
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.withZEOSrv.func1(0xc000001e00, 0xc00000e5a0)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo_test.go:185 +0x129
lab.nexedi.com/kirr/neo/go/zodb/storage/zeo.withZEOSrv.func2(0xc000001e00)
	/home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/storage/zeo/zeo_test.go:197 +0x105
testing.tRunner(0xc000001e00, 0xc00000e440)
	/home/kirr/src/tools/go/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/home/kirr/src/tools/go/go/src/testing/testing.go:1168 +0x2b3
exit status 2
FAIL	lab.nexedi.com/kirr/neo/go/zodb/storage/zeo	0.227s

-> Fix it by using a dedicated field marking whether .at0 was initialized or not yet.
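A minimal Go sketch of the fix (field names and the reduced struct are illustrative): 0 is a valid at0 for an empty database, so "is it set?" needs its own flag rather than comparing at0 against 0.

package example

import "sync"

// zeoDriver is an illustrative reduction of the real driver state.
type zeoDriver struct {
	mu      sync.Mutex
	at0     uint64 // last covered tid; 0 is a legal value for an empty DB
	at0Set  bool   // whether at0 was initialized - the dedicated flag
	eventq0 []func()
}

// flushEventq0 replays events queued before at0 became known. The check is
// on at0Set, not on at0 != 0, so an empty database no longer panics here.
func (z *zeoDriver) flushEventq0() {
	z.mu.Lock()
	defer z.mu.Unlock()
	if !z.at0Set {
		panic("flush, but .at0 not yet initialized")
	}
	for _, ev := range z.eventq0 {
		ev()
	}
	z.eventq0 = nil
}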
- 24 Sep, 2020 1 commit
Kirill Smelkov authored
For virtio NICs /sys/class/net/<NIC>/device leads to $pcidev/virtioX, not just $pcidev, e.g.:

$ realpath /sys/class/net/ens3/device
/sys/devices/pci0000:00/0000:00:03.0/virtio0

and we were extracting virtio0 instead of 0000:00:03.0 as the PCI device identifier.

-> Fix it by recognizing and stripping the /virtioX suffix.
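A minimal sketch of the fix logic, expressed in Go (the actual tool may implement this differently, e.g. in shell): resolve the NIC's device symlink and drop a trailing /virtioN component before taking the PCI id.

package example

import (
	"path/filepath"
	"regexp"
)

// virtioRe matches a trailing /virtioN component in a sysfs device path.
var virtioRe = regexp.MustCompile(`/virtio[0-9]+$`)

// nicPCIDev returns the PCI identifier of a network interface, e.g.
// "0000:00:03.0" for ens3, even when the resolved path ends in .../virtio0.
func nicPCIDev(nic string) (string, error) {
	dev, err := filepath.EvalSymlinks("/sys/class/net/" + nic + "/device")
	if err != nil {
		return "", err
	}
	dev = virtioRe.ReplaceAllString(dev, "") // strip /virtioX if present
	return filepath.Base(dev), nil
}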