- 13 Dec, 2017 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
So now we are benchmarking disk for sizes 4K and 2M, which are the usual sizes that pop up with e.g. wendelin.core.
-
- 11 Dec, 2017 4 commits
-
-
Kirill Smelkov authored
* origin/master:
  client: account for cache hit/miss statistics
  client: remove redundant information from cache's __repr__
  cache: fix possible endless loop in __repr__/_iterQueue
  storage: speed up replication by not getting object next_serial for nothing
  storage: speed up replication by sending bigger network packets
  neoctl: remove ignored option
  client: bug found, add log to collect more information
  client: new 'cache-size' Storage option
  doc: mention HTTPS URLs when possible
  doc: update comment in neolog about Python issue 13773
  neolog: add support for xz-compressed logs, using external xzcat commands
  neolog: --from option now also tries to parse with dateutil
  importer: do not crash if a backup cluster tries to replicate
  storage: disable data deduplication by default
  Release version 1.8.1
-
Kirill Smelkov authored
This information is handy to see how well cache performs.

Amended by Julien Muchembled:

- do not abbreviate some existing field names in repr result (asking the user to look at the source code in order to decipher logs is not nice)
- hit: change from %.1f to %.3g
- hit: hide it completely if nload is 0
- use __future__.division instead of adding more casts to float
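A minimal sketch of what the amendments above describe (the function name and field layout are invented for illustration, not NEO's actual `__repr__`): true division via `__future__.division`, `%.3g` formatting of the hit ratio, and hiding the ratio entirely when `nload` is 0.

```python
from __future__ import division  # '/' is true division even for integers

def format_cache_stats(nload, nhit):
    """Return a stats string like "nload=200 nhit=150 hit=75%".

    Hypothetical helper: field names are spelled out in full rather than
    abbreviated, per the amendment notes.
    """
    s = 'nload=%s nhit=%s' % (nload, nhit)
    if nload:  # hide the hit ratio completely when there were no loads yet
        s += ' hit=%.3g%%' % (nhit * 100 / nload)
    return s

print(format_cache_stats(200, 150))  # nload=200 nhit=150 hit=75%
print(format_cache_stats(0, 0))      # nload=0 nhit=0
```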
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 08 Dec, 2017 1 commit
-
-
Kirill Smelkov authored
On my disk it gives:

name                                 time/op
deco/disk/randread/direct/4K-min     98.0µs ± 1%
deco/disk/randread/direct/4K-avg     104µs ± 0%
deco/disk/randread/direct/1M-min     2.90ms ±17%
deco/disk/randread/direct/1M-avg     3.55ms ± 0%
deco/disk/randread/pagecache/4K-min  227ns ± 1%
deco/disk/randread/pagecache/4K-avg  629ns ± 0%
deco/disk/randread/pagecache/1M-min  70.8µs ± 7%
deco/disk/randread/pagecache/1M-avg  99.4µs ± 1%
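A rough sketch, in the spirit of the pagecache rows above, of how min/avg latency of 4K random reads could be measured (this is not the benchmark's actual code; `randread_latency` is an invented name, and direct I/O via O_DIRECT is deliberately left out since it needs aligned buffers):

```python
import os
import random
import time

def randread_latency(path, blocksize=4096, n=1000):
    """Time n random blocksize reads of path; return (min, avg) in seconds.

    Reads go through the page cache, so repeated runs over a small file
    mostly measure cache-hit latency, like the pagecache/4K rows.
    """
    nblocks = os.path.getsize(path) // blocksize
    times = []
    with open(path, 'rb') as f:
        for _ in range(n):
            f.seek(random.randrange(nblocks) * blocksize)
            t0 = time.perf_counter()
            f.read(blocksize)
            times.append(time.perf_counter() - t0)
    return min(times), sum(times) / len(times)
```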
-
- 05 Dec, 2017 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 04 Dec, 2017 1 commit
-
-
Julien Muchembled authored
-
- 22 Nov, 2017 1 commit
-
-
Kirill Smelkov authored
-
- 21 Nov, 2017 1 commit
-
-
Julien Muchembled authored
INFO Z2 Log files reopened successfully
INFO SignalHandler Caught signal SIGTERM
INFO Z2 Shutting down fast
INFO ZServer closing HTTP to new connections
ERROR ZODB.Connection Couldn't load state for BTrees.LOBTree.LOBucket 0xc12e29
Traceback (most recent call last):
  File "ZODB/Connection.py", line 909, in setstate
    self._setstate(obj, oid)
  File "ZODB/Connection.py", line 953, in _setstate
    p, serial = self._storage.load(oid, '')
  File "neo/client/Storage.py", line 81, in load
    return self.app.load(oid)[:2]
  File "neo/client/app.py", line 355, in load
    data, tid, next_tid, _ = self._loadFromStorage(oid, tid, before_tid)
  File "neo/client/app.py", line 387, in _loadFromStorage
    askStorage)
  File "neo/client/app.py", line 297, in _askStorageForRead
    self.sync()
  File "neo/client/app.py", line 898, in sync
    self._askPrimary(Packets.Ping())
  File "neo/client/app.py", line 163, in _askPrimary
    return self._ask(self._getMasterConnection(), packet,
  File "neo/client/app.py", line 177, in _getMasterConnection
    result = self.master_conn = self._connectToPrimaryNode()
  File "neo/client/app.py", line 202, in _connectToPrimaryNode
    index = (index + 1) % len(master_list)
ZeroDivisionError: integer division or modulo by zero
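The failure mode in the traceback above can be reduced to one line: stepping round-robin through a candidate list with modular arithmetic raises ZeroDivisionError as soon as the list is empty. A minimal illustration, assuming nothing about NEO's actual fix (the guard and error message here are invented):

```python
def next_index(index, master_list):
    """Round-robin step through master_list, as in _connectToPrimaryNode.

    Guarding against an empty list avoids the bare ZeroDivisionError;
    the RuntimeError here is illustrative, not NEO's actual behaviour.
    """
    if not master_list:
        raise RuntimeError("no known master nodes to connect to")
    return (index + 1) % len(master_list)  # fails with '% 0' if unguarded
```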
-
- 20 Nov, 2017 1 commit
-
-
Kirill Smelkov authored
-
- 19 Nov, 2017 1 commit
-
-
Julien Muchembled authored
-
- 17 Nov, 2017 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 15 Nov, 2017 1 commit
-
-
Julien Muchembled authored
It's not possible yet to replicate a node that is importing data. One must wait until the migration has finished.
-
- 09 Nov, 2017 6 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 08 Nov, 2017 6 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 07 Nov, 2017 5 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
This would prevent e.g. eth1 from going before eth0, as was the case in 55a64368.
-
Julien Muchembled authored
-
Kirill Smelkov authored
-
Julien Muchembled authored
-
- 06 Nov, 2017 4 commits
-
-
Kirill Smelkov authored
; NEO/py log to no-log:

$ ./benchstat-neopy-lognolog 20171106-time-rio-Cenabled.txt
name  old µs/object  new µs/object  delta
dataset:wczblk1-8
rio/neo/py/sqlite/zhash.py  304 ± 6%  291 ± 2%  ~  (p=0.056 n=5+5)
rio/neo/py/sqlite/zhash.py-P16  2.19k ± 0%  2.01k ± 2%  -8.20%  (p=0.000 n=13+16)
rio/neo/py/sqlite/zhash.go  248 ± 1%  231 ± 1%  -7.19%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go+prefetch128  125 ± 1%  110 ± 2%  -11.57%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go-P16  1.76k ±13%  1.62k ± 7%  -8.06%  (p=0.015 n=16+16)
rio/neo/py/sql/zhash.py  325 ± 4%  313 ± 4%  ~  (p=0.114 n=4+4)
rio/neo/py/sql/zhash.py-P16  2.88k ± 1%  2.56k ± 1%  -11.05%  (p=0.000 n=15+15)
rio/neo/py/sql/zhash.go  275 ± 2%  258 ± 1%  -6.03%  (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go+prefetch128  154 ± 3%  139 ± 1%  -9.29%  (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go-P16  2.30k ± 8%  2.21k ± 5%  ~  (p=0.072 n=16+16)
dataset:prod1-1024
rio/neo/py/sqlite/zhash.py  269 ± 1%  259 ± 4%  -3.49%  (p=0.032 n=5+5)
rio/neo/py/sqlite/zhash.py-P16  2.19k ± 0%  1.89k ± 1%  -13.62%  (p=0.000 n=16+15)
rio/neo/py/sqlite/zhash.go  158 ± 1%  142 ± 1%  -10.36%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go+prefetch128  116 ± 3%  101 ± 2%  -13.22%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go-P16  1.90k ± 0%  1.57k ± 0%  -17.14%  (p=0.000 n=14+13)
rio/neo/py/sql/zhash.py  337 ±43%  293 ± 4%  ~  (p=0.286 n=5+4)
rio/neo/py/sql/zhash.py-P16  2.73k ± 0%  2.47k ± 0%  -9.45%  (p=0.000 n=15+15)
rio/neo/py/sql/zhash.go  186 ± 3%  168 ± 1%  -9.39%  (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go+prefetch128  145 ± 2%  130 ± 2%  -10.24%  (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go-P16  2.29k ± 6%  2.08k ± 3%  -9.20%  (p=0.000 n=16+16)

--------

; Full summary

$ benchstat -split dataset 20171106-time-rio-Cenabled.txt
name  pystone/s
rio/pystone  178k ± 2%
name  µs/op
rio/sha1/py/1024B  1.40 ± 0%
rio/sha1/go/1024B  1.79 ± 1%
rio/sha1/py/4096B  5.08 ± 2%
rio/sha1/go/4096B  7.14 ± 0%
name  us/op
rio/disk/randread/direct/4K-min  34.0 ± 1%
rio/disk/randread/direct/4K-avg  92.9 ± 0%
name  time/op
rio/disk/randread/pagecache/4K-min  221ns ± 0%
rio/disk/randread/pagecache/4K-avg  637ns ± 0%
name  µs/object
dataset:wczblk1-8
rio/fs1/zhash.py  22.3 ± 2%
rio/fs1/zhash.py-P16  51.7 ±72%
rio/fs1/zhash.go  2.40 ± 0%
rio/fs1/zhash.go+prefetch128  4.34 ± 8%
rio/fs1/zhash.go-P16  3.58 ±24%
rio/zeo/zhash.py  336 ± 2%
rio/zeo/zhash.py-P16  1.61k ±19%
rio/neo/py/sqlite/zhash.py  304 ± 6%
rio/neo/py/sqlite/zhash.py-P16  2.19k ± 0%
rio/neo/py/sqlite/zhash.go  248 ± 1%
rio/neo/py/sqlite/zhash.go+prefetch128  125 ± 1%
rio/neo/py/sqlite/zhash.go-P16  1.76k ±13%
rio/neo/py(!log)/sqlite/zhash.py  291 ± 2%
rio/neo/py(!log)/sqlite/zhash.py-P16  2.01k ± 2%
rio/neo/py(!log)/sqlite/zhash.go  231 ± 1%
rio/neo/py(!log)/sqlite/zhash.go+prefetch128  110 ± 2%
rio/neo/py(!log)/sqlite/zhash.go-P16  1.62k ± 7%
rio/neo/py/sql/zhash.py  325 ± 4%
rio/neo/py/sql/zhash.py-P16  2.88k ± 1%
rio/neo/py/sql/zhash.go  275 ± 2%
rio/neo/py/sql/zhash.go+prefetch128  154 ± 3%
rio/neo/py/sql/zhash.go-P16  2.30k ± 8%
rio/neo/py(!log)/sql/zhash.py  313 ± 4%
rio/neo/py(!log)/sql/zhash.py-P16  2.56k ± 1%
rio/neo/py(!log)/sql/zhash.go  258 ± 1%
rio/neo/py(!log)/sql/zhash.go+prefetch128  139 ± 1%
rio/neo/py(!log)/sql/zhash.go-P16  2.21k ± 5%
rio/neo/go/zhash.py  190 ± 3%
rio/neo/go/zhash.py-P16  784 ± 9%
rio/neo/go/zhash.go  52.0 ± 1%
rio/neo/go/zhash.go+prefetch128  26.6 ± 5%
rio/neo/go/zhash.go-P16  256 ± 6%
rio/neo/go(!sha1)/zhash.go  35.3 ± 4%
rio/neo/go(!sha1)/zhash.go+prefetch128  17.3 ± 2%
rio/neo/go(!sha1)/zhash.go-P16  152 ±13%
dataset:prod1-1024
rio/fs1/zhash.py  18.9 ± 1%
rio/fs1/zhash.py-P16  58.0 ±52%
rio/fs1/zhash.go  1.30 ± 0%
rio/fs1/zhash.go+prefetch128  2.78 ±14%
rio/fs1/zhash.go-P16  2.21 ± 9%
rio/zeo/zhash.py  302 ± 7%
rio/zeo/zhash.py-P16  1.44k ±11%
rio/neo/py/sqlite/zhash.py  269 ± 1%
rio/neo/py/sqlite/zhash.py-P16  2.19k ± 0%
rio/neo/py/sqlite/zhash.go  158 ± 1%
rio/neo/py/sqlite/zhash.go+prefetch128  116 ± 3%
rio/neo/py/sqlite/zhash.go-P16  1.90k ± 0%
rio/neo/py(!log)/sqlite/zhash.py  259 ± 4%
rio/neo/py(!log)/sqlite/zhash.py-P16  1.89k ± 1%
rio/neo/py(!log)/sqlite/zhash.go  142 ± 1%
rio/neo/py(!log)/sqlite/zhash.go+prefetch128  101 ± 2%
rio/neo/py(!log)/sqlite/zhash.go-P16  1.57k ± 0%
rio/neo/py/sql/zhash.py  337 ±43%
rio/neo/py/sql/zhash.py-P16  2.73k ± 0%
rio/neo/py/sql/zhash.go  186 ± 3%
rio/neo/py/sql/zhash.go+prefetch128  145 ± 2%
rio/neo/py/sql/zhash.go-P16  2.29k ± 6%
rio/neo/py(!log)/sql/zhash.py  293 ± 4%
rio/neo/py(!log)/sql/zhash.py-P16  2.47k ± 0%
rio/neo/py(!log)/sql/zhash.go  168 ± 1%
rio/neo/py(!log)/sql/zhash.go+prefetch128  130 ± 2%
rio/neo/py(!log)/sql/zhash.go-P16  2.08k ± 3%
rio/neo/go/zhash.py  181 ± 5%
rio/neo/go/zhash.py-P16  714 ± 6%
rio/neo/go/zhash.go  36.9 ± 3%
rio/neo/go/zhash.go+prefetch128  16.5 ± 1%
rio/neo/go/zhash.go-P16  239 ± 4%
rio/neo/go(!sha1)/zhash.go  32.7 ± 7%
rio/neo/go(!sha1)/zhash.go+prefetch128  13.5 ± 1%
rio/neo/go(!sha1)/zhash.go-P16  190 ± 7%
-
Kirill Smelkov authored
; NEO/py log to no-log:

$ ./benchstat-neopy-lognolog 20171106-time-z6001-Cenabled.txt
name  old µs/object  new µs/object  delta
dataset:wczblk1-8
z6001/neo/py/sqlite/zhash.py  793 ± 5%  714 ±16%  -9.92%  (p=0.032 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16  3.20k ± 1%  2.81k ± 1%  -12.32%  (p=0.000 n=16+16)
z6001/neo/py/sqlite/zhash.go  517 ± 5%  486 ± 5%  -6.08%  (p=0.032 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128  211 ± 4%  179 ± 2%  -15.42%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16  2.93k ± 6%  2.53k ±10%  -13.76%  (p=0.000 n=16+16)
z6001/neo/py/sql/zhash.py  1.14k ±25%  1.03k ±36%  ~  (p=0.151 n=5+5)
z6001/neo/py/sql/zhash.py-P16  5.34k ± 0%  4.46k ± 1%  -16.40%  (p=0.000 n=13+16)
z6001/neo/py/sql/zhash.go  668 ± 9%  624 ± 5%  ~  (p=0.095 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128  311 ± 5%  279 ± 4%  -10.42%  (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go-P16  4.63k ± 2%  4.18k ± 4%  -9.84%  (p=0.000 n=16+14)
dataset:prod1-1024
z6001/neo/py/sqlite/zhash.py  711 ± 6%  698 ± 5%  ~  (p=0.548 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16  2.93k ± 0%  2.75k ± 0%  -6.20%  (p=0.000 n=16+16)
z6001/neo/py/sqlite/zhash.go  351 ± 3%  322 ± 3%  -8.24%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128  189 ± 3%  166 ± 1%  -12.10%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16  2.83k ± 3%  2.52k ± 4%  -11.00%  (p=0.000 n=16+16)
z6001/neo/py/sql/zhash.py  894 ± 1%  982 ±30%  ~  (p=0.190 n=4+5)
z6001/neo/py/sql/zhash.py-P16  4.89k ± 0%  4.26k ± 1%  -12.93%  (p=0.000 n=16+16)
z6001/neo/py/sql/zhash.go  434 ± 4%  405 ± 2%  -6.77%  (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128  303 ± 3%  267 ±10%  -11.86%  (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go-P16  4.66k ± 2%  4.72k ± 0%  +1.39%  (p=0.000 n=16+11)

--------

; Comparing to previous localhost run on z6001
; (NOTE it was with C-states disabled then)

$ benchstat -split dataset time-soct17-z6001.txt 20171106-time-z6001-Cenabled.txt
name  old pystone/s  new pystone/s  delta
z6001/pystone  121k ± 1%  110k ± 1%  -9.28%  (p=0.008 n=5+5)
name  old µs/op  new µs/op  delta
z6001/sha1/py/1024B  2.26 ± 3%  2.43 ± 2%  +7.72%  (p=0.008 n=5+5)
z6001/sha1/go/1024B  2.35 ± 0%  2.51 ± 0%  +6.78%  (p=0.008 n=5+5)
z6001/sha1/py/4096B  7.87 ± 1%  8.57 ± 0%  +8.80%  (p=0.008 n=5+5)
z6001/sha1/go/4096B  9.37 ± 0%  10.01 ± 0%  +6.86%  (p=0.008 n=5+5)
name  old us/op  new us/op  delta
z6001/disk/randread/direct/4K-min  122 ± 1%  123 ± 1%  ~  (p=0.143 n=5+4)
z6001/disk/randread/direct/4K-avg  125 ± 0%  143 ±14%  +13.98%  (p=0.016 n=4+5)
name  old time/op  new time/op  delta
z6001/disk/randread/pagecache/4K-min  409ns ± 1%  406ns ± 4%  ~  (p=0.690 n=5+5)
z6001/disk/randread/pagecache/4K-avg  938ns ± 0%  980ns ± 1%  ~  (p=0.095 n=2+5)
name  old µs/object  new µs/object  delta
dataset:wczblk1-8
z6001/fs1/zhash.py  35.6 ± 0%  38.8 ± 1%  +8.92%  (p=0.008 n=5+5)
z6001/fs1/zhash.py-P16  43.9 ±21%  39.2 ± 2%  -10.74%  (p=0.003 n=16+16)
z6001/fs1/zhash.go  3.76 ± 2%  4.08 ± 4%  +8.51%  (p=0.008 n=5+5)
z6001/fs1/zhash.go+prefetch128  7.36 ± 6%  9.42 ±12%  +27.99%  (p=0.008 n=5+5)
z6001/fs1/zhash.go-P16  4.53 ±24%  4.89 ±27%  ~  (p=0.234 n=16+16)
z6001/zeo/zhash.py  488 ± 1%  612 ± 4%  +25.41%  (p=0.008 n=5+5)
z6001/zeo/zhash.py-P16  1.71k ±17%  1.86k ±14%  +9.00%  (p=0.026 n=16+16)
z6001/neo/py/sqlite/zhash.py  486 ± 4%  793 ± 5%  +63.22%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16  3.16k ± 1%  3.20k ± 1%  +1.25%  (p=0.000 n=15+16)
z6001/neo/py/sqlite/zhash.go  371 ± 1%  517 ± 5%  +39.56%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128  198 ± 2%  211 ± 4%  +6.90%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16  2.85k ± 6%  2.93k ± 6%  +2.91%  (p=0.005 n=16+16)
z6001/neo/py/sql/zhash.py  528 ± 2%  1139 ±25%  +115.70%  (p=0.016 n=4+5)
z6001/neo/py/sql/zhash.py-P16  3.95k ± 1%  5.34k ± 0%  +35.04%  (p=0.000 n=16+13)
z6001/neo/py/sql/zhash.go  415 ± 0%  668 ± 9%  +61.01%  (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128  246 ± 2%  311 ± 5%  +26.53%  (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go-P16  3.79k ± 4%  4.63k ± 2%  +22.37%  (p=0.000 n=16+16)
z6001/neo/go/zhash.py  287 ± 7%  475 ±15%  +65.45%  (p=0.008 n=5+5)
z6001/neo/go/zhash.py-P16  477 ± 1%  498 ± 2%  +4.46%  (p=0.000 n=15+16)
z6001/neo/go/zhash.go  78.7 ± 5%  90.7 ± 8%  +15.27%  (p=0.008 n=5+5)
z6001/neo/go/zhash.go+prefetch128  36.7 ± 2%  41.8 ± 8%  +13.94%  (p=0.008 n=5+5)
z6001/neo/go/zhash.go-P16  123 ±23%  128 ±21%  +3.62%  (p=0.036 n=16+16)
z6001/neo/go(!sha1)/zhash.go  56.4 ±10%  63.8 ±11%  ~  (p=0.056 n=5+5)
z6001/neo/go(!sha1)/zhash.go+prefetch128  27.7 ± 2%  30.0 ± 2%  +8.22%  (p=0.008 n=5+5)
z6001/neo/go(!sha1)/zhash.go-P16  111 ±29%  109 ±30%  ~  (p=0.356 n=16+16)
dataset:prod1-1024
z6001/fs1/zhash.py  29.8 ± 1%  32.4 ± 0%  +8.45%  (p=0.008 n=5+5)
z6001/fs1/zhash.py-P16  37.8 ±28%  33.1 ± 5%  -12.44%  (p=0.002 n=16+14)
z6001/fs1/zhash.go  2.14 ± 3%  2.40 ± 0%  +12.15%  (p=0.016 n=5+4)
z6001/fs1/zhash.go+prefetch128  4.20 ± 7%  4.88 ± 2%  +16.19%  (p=0.008 n=5+5)
z6001/fs1/zhash.go-P16  2.77 ±24%  3.07 ±25%  +10.59%  (p=0.018 n=16+16)
z6001/zeo/zhash.py  470 ± 5%  570 ±11%  +21.07%  (p=0.008 n=5+5)
z6001/zeo/zhash.py-P16  1.51k ±19%  1.58k ±20%  ~  (p=0.275 n=16+16)
z6001/neo/py/sqlite/zhash.py  427 ± 3%  711 ± 6%  +66.59%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16  2.96k ± 1%  2.93k ± 0%  -1.02%  (p=0.000 n=16+16)
z6001/neo/py/sqlite/zhash.go  244 ± 0%  351 ± 3%  +43.97%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128  174 ± 2%  189 ± 3%  +8.17%  (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16  2.76k ± 2%  2.83k ± 3%  +2.61%  (p=0.009 n=16+16)
z6001/neo/py/sql/zhash.py  477 ± 3%  894 ± 1%  +87.27%  (p=0.029 n=4+4)
z6001/neo/py/sql/zhash.py-P16  3.95k ± 0%  4.89k ± 0%  +23.96%  (p=0.000 n=15+16)
z6001/neo/py/sql/zhash.go  299 ± 1%  434 ± 4%  +45.15%  (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128  230 ± 0%  303 ± 3%  +31.65%  (p=0.016 n=4+5)
z6001/neo/py/sql/zhash.go-P16  3.62k ± 2%  4.66k ± 2%  +28.58%  (p=0.000 n=16+16)
z6001/neo/go/zhash.py  272 ±10%  359 ±11%  +32.21%  (p=0.008 n=5+5)
z6001/neo/go/zhash.py-P16  464 ± 1%  479 ± 2%  +3.35%  (p=0.000 n=15+16)
z6001/neo/go/zhash.go  58.2 ± 2%  68.5 ± 3%  +17.67%  (p=0.008 n=5+5)
z6001/neo/go/zhash.go+prefetch128  24.1 ± 0%  25.8 ± 2%  +6.97%  (p=0.008 n=5+5)
z6001/neo/go/zhash.go-P16  141 ±48%  150 ±47%  +6.31%  (p=0.016 n=16+16)
z6001/neo/go(!sha1)/zhash.go  48.6 ± 3%  56.0 ± 8%  +15.36%  (p=0.008 n=5+5)
z6001/neo/go(!sha1)/zhash.go+prefetch128  22.1 ± 1%  22.8 ± 2%  +3.26%  (p=0.016 n=5+5)
z6001/neo/go(!sha1)/zhash.go-P16  131 ±44%  136 ±43%  +4.29%  (p=0.009 n=16+16)

--------

; Full summary

$ benchstat -split dataset 20171106-time-z6001-Cenabled.txt
name  pystone/s
z6001/pystone  110k ± 1%
name  µs/op
z6001/sha1/py/1024B  2.43 ± 2%
z6001/sha1/go/1024B  2.51 ± 0%
z6001/sha1/py/4096B  8.57 ± 0%
z6001/sha1/go/4096B  10.0 ± 0%
name  us/op
z6001/disk/randread/direct/4K-min  123 ± 1%
z6001/disk/randread/direct/4K-avg  143 ±14%
name  time/op
z6001/disk/randread/pagecache/4K-min  406ns ± 4%
z6001/disk/randread/pagecache/4K-avg  980ns ± 1%
name  µs/object
dataset:wczblk1-8
z6001/fs1/zhash.py  38.8 ± 1%
z6001/fs1/zhash.py-P16  39.2 ± 2%
z6001/fs1/zhash.go  4.08 ± 4%
z6001/fs1/zhash.go+prefetch128  9.42 ±12%
z6001/fs1/zhash.go-P16  4.89 ±27%
z6001/zeo/zhash.py  612 ± 4%
z6001/zeo/zhash.py-P16  1.86k ±14%
z6001/neo/py/sqlite/zhash.py  793 ± 5%
z6001/neo/py/sqlite/zhash.py-P16  3.20k ± 1%
z6001/neo/py/sqlite/zhash.go  517 ± 5%
z6001/neo/py/sqlite/zhash.go+prefetch128  211 ± 4%
z6001/neo/py/sqlite/zhash.go-P16  2.93k ± 6%
z6001/neo/py(!log)/sqlite/zhash.py  714 ±16%
z6001/neo/py(!log)/sqlite/zhash.py-P16  2.81k ± 1%
z6001/neo/py(!log)/sqlite/zhash.go  486 ± 5%
z6001/neo/py(!log)/sqlite/zhash.go+prefetch128  179 ± 2%
z6001/neo/py(!log)/sqlite/zhash.go-P16  2.53k ±10%
z6001/neo/py/sql/zhash.py  1.14k ±25%
z6001/neo/py/sql/zhash.py-P16  5.34k ± 0%
z6001/neo/py/sql/zhash.go  668 ± 9%
z6001/neo/py/sql/zhash.go+prefetch128  311 ± 5%
z6001/neo/py/sql/zhash.go-P16  4.63k ± 2%
z6001/neo/py(!log)/sql/zhash.py  1.03k ±36%
z6001/neo/py(!log)/sql/zhash.py-P16  4.46k ± 1%
z6001/neo/py(!log)/sql/zhash.go  624 ± 5%
z6001/neo/py(!log)/sql/zhash.go+prefetch128  279 ± 4%
z6001/neo/py(!log)/sql/zhash.go-P16  4.18k ± 4%
z6001/neo/go/zhash.py  475 ±15%
z6001/neo/go/zhash.py-P16  498 ± 2%
z6001/neo/go/zhash.go  90.7 ± 8%
z6001/neo/go/zhash.go+prefetch128  41.8 ± 8%
z6001/neo/go/zhash.go-P16  128 ±21%
z6001/neo/go(!sha1)/zhash.go  63.8 ±11%
z6001/neo/go(!sha1)/zhash.go+prefetch128  30.0 ± 2%
z6001/neo/go(!sha1)/zhash.go-P16  109 ±30%
dataset:prod1-1024
z6001/fs1/zhash.py  32.4 ± 0%
z6001/fs1/zhash.py-P16  33.1 ± 5%
z6001/fs1/zhash.go  2.40 ± 0%
z6001/fs1/zhash.go+prefetch128  4.88 ± 2%
z6001/fs1/zhash.go-P16  3.07 ±25%
z6001/zeo/zhash.py  570 ±11%
z6001/zeo/zhash.py-P16  1.58k ±20%
z6001/neo/py/sqlite/zhash.py  711 ± 6%
z6001/neo/py/sqlite/zhash.py-P16  2.93k ± 0%
z6001/neo/py/sqlite/zhash.go  351 ± 3%
z6001/neo/py/sqlite/zhash.go+prefetch128  189 ± 3%
z6001/neo/py/sqlite/zhash.go-P16  2.83k ± 3%
z6001/neo/py(!log)/sqlite/zhash.py  698 ± 5%
z6001/neo/py(!log)/sqlite/zhash.py-P16  2.75k ± 0%
z6001/neo/py(!log)/sqlite/zhash.go  322 ± 3%
z6001/neo/py(!log)/sqlite/zhash.go+prefetch128  166 ± 1%
z6001/neo/py(!log)/sqlite/zhash.go-P16  2.52k ± 4%
z6001/neo/py/sql/zhash.py  894 ± 1%
z6001/neo/py/sql/zhash.py-P16  4.89k ± 0%
z6001/neo/py/sql/zhash.go  434 ± 4%
z6001/neo/py/sql/zhash.go+prefetch128  303 ± 3%
z6001/neo/py/sql/zhash.go-P16  4.66k ± 2%
z6001/neo/py(!log)/sql/zhash.py  982 ±30%
z6001/neo/py(!log)/sql/zhash.py-P16  4.26k ± 1%
z6001/neo/py(!log)/sql/zhash.go  405 ± 2%
z6001/neo/py(!log)/sql/zhash.go+prefetch128  267 ±10%
z6001/neo/py(!log)/sql/zhash.go-P16  4.72k ± 0%
z6001/neo/go/zhash.py  359 ±11%
z6001/neo/go/zhash.py-P16  479 ± 2%
z6001/neo/go/zhash.go  68.5 ± 3%
z6001/neo/go/zhash.go+prefetch128  25.8 ± 2%
z6001/neo/go/zhash.go-P16  150 ±47%
z6001/neo/go(!sha1)/zhash.go  56.0 ± 8%
z6001/neo/go(!sha1)/zhash.go+prefetch128  22.8 ± 2%
z6001/neo/go(!sha1)/zhash.go-P16  136 ±43%
-
Kirill Smelkov authored
See previous commit for context.
-
Kirill Smelkov authored
NEO/py currently logs every packet internally to a RAM buffer, to be flushed on SIGRTMIN if such a request comes. This might add overhead, so Julien asked to also have numbers for a NEO/py server with logging disabled. We already don't enable logging on the NEO/py client, since enabling it requires passing a ?logfile query in the ZODB URL and we don't do so anywhere.
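The in-RAM-buffer-until-SIGRTMIN pattern described above can be sketched as follows. This is illustrative only, not NEO's actual logging code: the class name, capacity, and output path are invented, and a bounded deque (oldest records dropped) stands in for whatever buffering NEO really uses.

```python
import collections
import logging
import signal

class RingBufferHandler(logging.Handler):
    """Keep only the most recent `capacity` formatted records in RAM."""

    def __init__(self, capacity=10000):
        logging.Handler.__init__(self)
        # deque with maxlen silently discards the oldest entries
        self.buffer = collections.deque(maxlen=capacity)

    def emit(self, record):
        self.buffer.append(self.format(record))

    def flush_to(self, stream):
        """Write all buffered records to stream and empty the buffer."""
        for line in self.buffer:
            stream.write(line + '\n')
        self.buffer.clear()

handler = RingBufferHandler()
logging.getLogger().addHandler(handler)

def on_sigrtmin(signum, frame):
    # Dump the buffered records only when explicitly asked for.
    with open('/tmp/neo-flush.log', 'a') as f:
        handler.flush_to(f)

signal.signal(signal.SIGRTMIN, on_sigrtmin)
```

Until the signal arrives, logging costs only the in-memory append and formatting, which is the overhead the benchmark runs with and without.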
-