- 07 Nov, 2017 3 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
This prevents e.g. eth1 from going before eth0, as was the case in 55a64368.
-
Kirill Smelkov authored
-
- 06 Nov, 2017 5 commits
-
-
Kirill Smelkov authored
; NEO/py log to no-log:

$ ./benchstat-neopy-lognolog 20171106-time-rio-Cenabled.txt
name old µs/object new µs/object delta
dataset:wczblk1-8
rio/neo/py/sqlite/zhash.py 304 ± 6% 291 ± 2% ~ (p=0.056 n=5+5)
rio/neo/py/sqlite/zhash.py-P16 2.19k ± 0% 2.01k ± 2% -8.20% (p=0.000 n=13+16)
rio/neo/py/sqlite/zhash.go 248 ± 1% 231 ± 1% -7.19% (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go+prefetch128 125 ± 1% 110 ± 2% -11.57% (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go-P16 1.76k ±13% 1.62k ± 7% -8.06% (p=0.015 n=16+16)
rio/neo/py/sql/zhash.py 325 ± 4% 313 ± 4% ~ (p=0.114 n=4+4)
rio/neo/py/sql/zhash.py-P16 2.88k ± 1% 2.56k ± 1% -11.05% (p=0.000 n=15+15)
rio/neo/py/sql/zhash.go 275 ± 2% 258 ± 1% -6.03% (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go+prefetch128 154 ± 3% 139 ± 1% -9.29% (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go-P16 2.30k ± 8% 2.21k ± 5% ~ (p=0.072 n=16+16)
dataset:prod1-1024
rio/neo/py/sqlite/zhash.py 269 ± 1% 259 ± 4% -3.49% (p=0.032 n=5+5)
rio/neo/py/sqlite/zhash.py-P16 2.19k ± 0% 1.89k ± 1% -13.62% (p=0.000 n=16+15)
rio/neo/py/sqlite/zhash.go 158 ± 1% 142 ± 1% -10.36% (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go+prefetch128 116 ± 3% 101 ± 2% -13.22% (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go-P16 1.90k ± 0% 1.57k ± 0% -17.14% (p=0.000 n=14+13)
rio/neo/py/sql/zhash.py 337 ±43% 293 ± 4% ~ (p=0.286 n=5+4)
rio/neo/py/sql/zhash.py-P16 2.73k ± 0% 2.47k ± 0% -9.45% (p=0.000 n=15+15)
rio/neo/py/sql/zhash.go 186 ± 3% 168 ± 1% -9.39% (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go+prefetch128 145 ± 2% 130 ± 2% -10.24% (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go-P16 2.29k ± 6% 2.08k ± 3% -9.20% (p=0.000 n=16+16)

--------

; Full summary

$ benchstat -split dataset 20171106-time-rio-Cenabled.txt
name pystone/s
rio/pystone 178k ± 2%

name µs/op
rio/sha1/py/1024B 1.40 ± 0%
rio/sha1/go/1024B 1.79 ± 1%
rio/sha1/py/4096B 5.08 ± 2%
rio/sha1/go/4096B 7.14 ± 0%

name us/op
rio/disk/randread/direct/4K-min 34.0 ± 1%
rio/disk/randread/direct/4K-avg 92.9 ± 0%

name time/op
rio/disk/randread/pagecache/4K-min 221ns ± 0%
rio/disk/randread/pagecache/4K-avg 637ns ± 0%

name µs/object
dataset:wczblk1-8
rio/fs1/zhash.py 22.3 ± 2%
rio/fs1/zhash.py-P16 51.7 ±72%
rio/fs1/zhash.go 2.40 ± 0%
rio/fs1/zhash.go+prefetch128 4.34 ± 8%
rio/fs1/zhash.go-P16 3.58 ±24%
rio/zeo/zhash.py 336 ± 2%
rio/zeo/zhash.py-P16 1.61k ±19%
rio/neo/py/sqlite/zhash.py 304 ± 6%
rio/neo/py/sqlite/zhash.py-P16 2.19k ± 0%
rio/neo/py/sqlite/zhash.go 248 ± 1%
rio/neo/py/sqlite/zhash.go+prefetch128 125 ± 1%
rio/neo/py/sqlite/zhash.go-P16 1.76k ±13%
rio/neo/py(!log)/sqlite/zhash.py 291 ± 2%
rio/neo/py(!log)/sqlite/zhash.py-P16 2.01k ± 2%
rio/neo/py(!log)/sqlite/zhash.go 231 ± 1%
rio/neo/py(!log)/sqlite/zhash.go+prefetch128 110 ± 2%
rio/neo/py(!log)/sqlite/zhash.go-P16 1.62k ± 7%
rio/neo/py/sql/zhash.py 325 ± 4%
rio/neo/py/sql/zhash.py-P16 2.88k ± 1%
rio/neo/py/sql/zhash.go 275 ± 2%
rio/neo/py/sql/zhash.go+prefetch128 154 ± 3%
rio/neo/py/sql/zhash.go-P16 2.30k ± 8%
rio/neo/py(!log)/sql/zhash.py 313 ± 4%
rio/neo/py(!log)/sql/zhash.py-P16 2.56k ± 1%
rio/neo/py(!log)/sql/zhash.go 258 ± 1%
rio/neo/py(!log)/sql/zhash.go+prefetch128 139 ± 1%
rio/neo/py(!log)/sql/zhash.go-P16 2.21k ± 5%
rio/neo/go/zhash.py 190 ± 3%
rio/neo/go/zhash.py-P16 784 ± 9%
rio/neo/go/zhash.go 52.0 ± 1%
rio/neo/go/zhash.go+prefetch128 26.6 ± 5%
rio/neo/go/zhash.go-P16 256 ± 6%
rio/neo/go(!sha1)/zhash.go 35.3 ± 4%
rio/neo/go(!sha1)/zhash.go+prefetch128 17.3 ± 2%
rio/neo/go(!sha1)/zhash.go-P16 152 ±13%

dataset:prod1-1024
rio/fs1/zhash.py 18.9 ± 1%
rio/fs1/zhash.py-P16 58.0 ±52%
rio/fs1/zhash.go 1.30 ± 0%
rio/fs1/zhash.go+prefetch128 2.78 ±14%
rio/fs1/zhash.go-P16 2.21 ± 9%
rio/zeo/zhash.py 302 ± 7%
rio/zeo/zhash.py-P16 1.44k ±11%
rio/neo/py/sqlite/zhash.py 269 ± 1%
rio/neo/py/sqlite/zhash.py-P16 2.19k ± 0%
rio/neo/py/sqlite/zhash.go 158 ± 1%
rio/neo/py/sqlite/zhash.go+prefetch128 116 ± 3%
rio/neo/py/sqlite/zhash.go-P16 1.90k ± 0%
rio/neo/py(!log)/sqlite/zhash.py 259 ± 4%
rio/neo/py(!log)/sqlite/zhash.py-P16 1.89k ± 1%
rio/neo/py(!log)/sqlite/zhash.go 142 ± 1%
rio/neo/py(!log)/sqlite/zhash.go+prefetch128 101 ± 2%
rio/neo/py(!log)/sqlite/zhash.go-P16 1.57k ± 0%
rio/neo/py/sql/zhash.py 337 ±43%
rio/neo/py/sql/zhash.py-P16 2.73k ± 0%
rio/neo/py/sql/zhash.go 186 ± 3%
rio/neo/py/sql/zhash.go+prefetch128 145 ± 2%
rio/neo/py/sql/zhash.go-P16 2.29k ± 6%
rio/neo/py(!log)/sql/zhash.py 293 ± 4%
rio/neo/py(!log)/sql/zhash.py-P16 2.47k ± 0%
rio/neo/py(!log)/sql/zhash.go 168 ± 1%
rio/neo/py(!log)/sql/zhash.go+prefetch128 130 ± 2%
rio/neo/py(!log)/sql/zhash.go-P16 2.08k ± 3%
rio/neo/go/zhash.py 181 ± 5%
rio/neo/go/zhash.py-P16 714 ± 6%
rio/neo/go/zhash.go 36.9 ± 3%
rio/neo/go/zhash.go+prefetch128 16.5 ± 1%
rio/neo/go/zhash.go-P16 239 ± 4%
rio/neo/go(!sha1)/zhash.go 32.7 ± 7%
rio/neo/go(!sha1)/zhash.go+prefetch128 13.5 ± 1%
rio/neo/go(!sha1)/zhash.go-P16 190 ± 7%
-
Kirill Smelkov authored
; NEO/py log to no-log:

$ ./benchstat-neopy-lognolog 20171106-time-z6001-Cenabled.txt
name old µs/object new µs/object delta
dataset:wczblk1-8
z6001/neo/py/sqlite/zhash.py 793 ± 5% 714 ±16% -9.92% (p=0.032 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16 3.20k ± 1% 2.81k ± 1% -12.32% (p=0.000 n=16+16)
z6001/neo/py/sqlite/zhash.go 517 ± 5% 486 ± 5% -6.08% (p=0.032 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128 211 ± 4% 179 ± 2% -15.42% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16 2.93k ± 6% 2.53k ±10% -13.76% (p=0.000 n=16+16)
z6001/neo/py/sql/zhash.py 1.14k ±25% 1.03k ±36% ~ (p=0.151 n=5+5)
z6001/neo/py/sql/zhash.py-P16 5.34k ± 0% 4.46k ± 1% -16.40% (p=0.000 n=13+16)
z6001/neo/py/sql/zhash.go 668 ± 9% 624 ± 5% ~ (p=0.095 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128 311 ± 5% 279 ± 4% -10.42% (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go-P16 4.63k ± 2% 4.18k ± 4% -9.84% (p=0.000 n=16+14)
dataset:prod1-1024
z6001/neo/py/sqlite/zhash.py 711 ± 6% 698 ± 5% ~ (p=0.548 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16 2.93k ± 0% 2.75k ± 0% -6.20% (p=0.000 n=16+16)
z6001/neo/py/sqlite/zhash.go 351 ± 3% 322 ± 3% -8.24% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128 189 ± 3% 166 ± 1% -12.10% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16 2.83k ± 3% 2.52k ± 4% -11.00% (p=0.000 n=16+16)
z6001/neo/py/sql/zhash.py 894 ± 1% 982 ±30% ~ (p=0.190 n=4+5)
z6001/neo/py/sql/zhash.py-P16 4.89k ± 0% 4.26k ± 1% -12.93% (p=0.000 n=16+16)
z6001/neo/py/sql/zhash.go 434 ± 4% 405 ± 2% -6.77% (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128 303 ± 3% 267 ±10% -11.86% (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go-P16 4.66k ± 2% 4.72k ± 0% +1.39% (p=0.000 n=16+11)

--------

; Comparing to previous localhost run on z6001
; (NOTE it was with C-states disabled then)

$ benchstat -split dataset time-soct17-z6001.txt 20171106-time-z6001-Cenabled.txt
name old pystone/s new pystone/s delta
z6001/pystone 121k ± 1% 110k ± 1% -9.28% (p=0.008 n=5+5)

name old µs/op new µs/op delta
z6001/sha1/py/1024B 2.26 ± 3% 2.43 ± 2% +7.72% (p=0.008 n=5+5)
z6001/sha1/go/1024B 2.35 ± 0% 2.51 ± 0% +6.78% (p=0.008 n=5+5)
z6001/sha1/py/4096B 7.87 ± 1% 8.57 ± 0% +8.80% (p=0.008 n=5+5)
z6001/sha1/go/4096B 9.37 ± 0% 10.01 ± 0% +6.86% (p=0.008 n=5+5)

name old us/op new us/op delta
z6001/disk/randread/direct/4K-min 122 ± 1% 123 ± 1% ~ (p=0.143 n=5+4)
z6001/disk/randread/direct/4K-avg 125 ± 0% 143 ±14% +13.98% (p=0.016 n=4+5)

name old time/op new time/op delta
z6001/disk/randread/pagecache/4K-min 409ns ± 1% 406ns ± 4% ~ (p=0.690 n=5+5)
z6001/disk/randread/pagecache/4K-avg 938ns ± 0% 980ns ± 1% ~ (p=0.095 n=2+5)

name old µs/object new µs/object delta
dataset:wczblk1-8
z6001/fs1/zhash.py 35.6 ± 0% 38.8 ± 1% +8.92% (p=0.008 n=5+5)
z6001/fs1/zhash.py-P16 43.9 ±21% 39.2 ± 2% -10.74% (p=0.003 n=16+16)
z6001/fs1/zhash.go 3.76 ± 2% 4.08 ± 4% +8.51% (p=0.008 n=5+5)
z6001/fs1/zhash.go+prefetch128 7.36 ± 6% 9.42 ±12% +27.99% (p=0.008 n=5+5)
z6001/fs1/zhash.go-P16 4.53 ±24% 4.89 ±27% ~ (p=0.234 n=16+16)
z6001/zeo/zhash.py 488 ± 1% 612 ± 4% +25.41% (p=0.008 n=5+5)
z6001/zeo/zhash.py-P16 1.71k ±17% 1.86k ±14% +9.00% (p=0.026 n=16+16)
z6001/neo/py/sqlite/zhash.py 486 ± 4% 793 ± 5% +63.22% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16 3.16k ± 1% 3.20k ± 1% +1.25% (p=0.000 n=15+16)
z6001/neo/py/sqlite/zhash.go 371 ± 1% 517 ± 5% +39.56% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128 198 ± 2% 211 ± 4% +6.90% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16 2.85k ± 6% 2.93k ± 6% +2.91% (p=0.005 n=16+16)
z6001/neo/py/sql/zhash.py 528 ± 2% 1139 ±25% +115.70% (p=0.016 n=4+5)
z6001/neo/py/sql/zhash.py-P16 3.95k ± 1% 5.34k ± 0% +35.04% (p=0.000 n=16+13)
z6001/neo/py/sql/zhash.go 415 ± 0% 668 ± 9% +61.01% (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128 246 ± 2% 311 ± 5% +26.53% (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go-P16 3.79k ± 4% 4.63k ± 2% +22.37% (p=0.000 n=16+16)
z6001/neo/go/zhash.py 287 ± 7% 475 ±15% +65.45% (p=0.008 n=5+5)
z6001/neo/go/zhash.py-P16 477 ± 1% 498 ± 2% +4.46% (p=0.000 n=15+16)
z6001/neo/go/zhash.go 78.7 ± 5% 90.7 ± 8% +15.27% (p=0.008 n=5+5)
z6001/neo/go/zhash.go+prefetch128 36.7 ± 2% 41.8 ± 8% +13.94% (p=0.008 n=5+5)
z6001/neo/go/zhash.go-P16 123 ±23% 128 ±21% +3.62% (p=0.036 n=16+16)
z6001/neo/go(!sha1)/zhash.go 56.4 ±10% 63.8 ±11% ~ (p=0.056 n=5+5)
z6001/neo/go(!sha1)/zhash.go+prefetch128 27.7 ± 2% 30.0 ± 2% +8.22% (p=0.008 n=5+5)
z6001/neo/go(!sha1)/zhash.go-P16 111 ±29% 109 ±30% ~ (p=0.356 n=16+16)
dataset:prod1-1024
z6001/fs1/zhash.py 29.8 ± 1% 32.4 ± 0% +8.45% (p=0.008 n=5+5)
z6001/fs1/zhash.py-P16 37.8 ±28% 33.1 ± 5% -12.44% (p=0.002 n=16+14)
z6001/fs1/zhash.go 2.14 ± 3% 2.40 ± 0% +12.15% (p=0.016 n=5+4)
z6001/fs1/zhash.go+prefetch128 4.20 ± 7% 4.88 ± 2% +16.19% (p=0.008 n=5+5)
z6001/fs1/zhash.go-P16 2.77 ±24% 3.07 ±25% +10.59% (p=0.018 n=16+16)
z6001/zeo/zhash.py 470 ± 5% 570 ±11% +21.07% (p=0.008 n=5+5)
z6001/zeo/zhash.py-P16 1.51k ±19% 1.58k ±20% ~ (p=0.275 n=16+16)
z6001/neo/py/sqlite/zhash.py 427 ± 3% 711 ± 6% +66.59% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.py-P16 2.96k ± 1% 2.93k ± 0% -1.02% (p=0.000 n=16+16)
z6001/neo/py/sqlite/zhash.go 244 ± 0% 351 ± 3% +43.97% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go+prefetch128 174 ± 2% 189 ± 3% +8.17% (p=0.008 n=5+5)
z6001/neo/py/sqlite/zhash.go-P16 2.76k ± 2% 2.83k ± 3% +2.61% (p=0.009 n=16+16)
z6001/neo/py/sql/zhash.py 477 ± 3% 894 ± 1% +87.27% (p=0.029 n=4+4)
z6001/neo/py/sql/zhash.py-P16 3.95k ± 0% 4.89k ± 0% +23.96% (p=0.000 n=15+16)
z6001/neo/py/sql/zhash.go 299 ± 1% 434 ± 4% +45.15% (p=0.008 n=5+5)
z6001/neo/py/sql/zhash.go+prefetch128 230 ± 0% 303 ± 3% +31.65% (p=0.016 n=4+5)
z6001/neo/py/sql/zhash.go-P16 3.62k ± 2% 4.66k ± 2% +28.58% (p=0.000 n=16+16)
z6001/neo/go/zhash.py 272 ±10% 359 ±11% +32.21% (p=0.008 n=5+5)
z6001/neo/go/zhash.py-P16 464 ± 1% 479 ± 2% +3.35% (p=0.000 n=15+16)
z6001/neo/go/zhash.go 58.2 ± 2% 68.5 ± 3% +17.67% (p=0.008 n=5+5)
z6001/neo/go/zhash.go+prefetch128 24.1 ± 0% 25.8 ± 2% +6.97% (p=0.008 n=5+5)
z6001/neo/go/zhash.go-P16 141 ±48% 150 ±47% +6.31% (p=0.016 n=16+16)
z6001/neo/go(!sha1)/zhash.go 48.6 ± 3% 56.0 ± 8% +15.36% (p=0.008 n=5+5)
z6001/neo/go(!sha1)/zhash.go+prefetch128 22.1 ± 1% 22.8 ± 2% +3.26% (p=0.016 n=5+5)
z6001/neo/go(!sha1)/zhash.go-P16 131 ±44% 136 ±43% +4.29% (p=0.009 n=16+16)

--------

; Full summary

$ benchstat -split dataset 20171106-time-z6001-Cenabled.txt
name pystone/s
z6001/pystone 110k ± 1%

name µs/op
z6001/sha1/py/1024B 2.43 ± 2%
z6001/sha1/go/1024B 2.51 ± 0%
z6001/sha1/py/4096B 8.57 ± 0%
z6001/sha1/go/4096B 10.0 ± 0%

name us/op
z6001/disk/randread/direct/4K-min 123 ± 1%
z6001/disk/randread/direct/4K-avg 143 ±14%

name time/op
z6001/disk/randread/pagecache/4K-min 406ns ± 4%
z6001/disk/randread/pagecache/4K-avg 980ns ± 1%

name µs/object
dataset:wczblk1-8
z6001/fs1/zhash.py 38.8 ± 1%
z6001/fs1/zhash.py-P16 39.2 ± 2%
z6001/fs1/zhash.go 4.08 ± 4%
z6001/fs1/zhash.go+prefetch128 9.42 ±12%
z6001/fs1/zhash.go-P16 4.89 ±27%
z6001/zeo/zhash.py 612 ± 4%
z6001/zeo/zhash.py-P16 1.86k ±14%
z6001/neo/py/sqlite/zhash.py 793 ± 5%
z6001/neo/py/sqlite/zhash.py-P16 3.20k ± 1%
z6001/neo/py/sqlite/zhash.go 517 ± 5%
z6001/neo/py/sqlite/zhash.go+prefetch128 211 ± 4%
z6001/neo/py/sqlite/zhash.go-P16 2.93k ± 6%
z6001/neo/py(!log)/sqlite/zhash.py 714 ±16%
z6001/neo/py(!log)/sqlite/zhash.py-P16 2.81k ± 1%
z6001/neo/py(!log)/sqlite/zhash.go 486 ± 5%
z6001/neo/py(!log)/sqlite/zhash.go+prefetch128 179 ± 2%
z6001/neo/py(!log)/sqlite/zhash.go-P16 2.53k ±10%
z6001/neo/py/sql/zhash.py 1.14k ±25%
z6001/neo/py/sql/zhash.py-P16 5.34k ± 0%
z6001/neo/py/sql/zhash.go 668 ± 9%
z6001/neo/py/sql/zhash.go+prefetch128 311 ± 5%
z6001/neo/py/sql/zhash.go-P16 4.63k ± 2%
z6001/neo/py(!log)/sql/zhash.py 1.03k ±36%
z6001/neo/py(!log)/sql/zhash.py-P16 4.46k ± 1%
z6001/neo/py(!log)/sql/zhash.go 624 ± 5%
z6001/neo/py(!log)/sql/zhash.go+prefetch128 279 ± 4%
z6001/neo/py(!log)/sql/zhash.go-P16 4.18k ± 4%
z6001/neo/go/zhash.py 475 ±15%
z6001/neo/go/zhash.py-P16 498 ± 2%
z6001/neo/go/zhash.go 90.7 ± 8%
z6001/neo/go/zhash.go+prefetch128 41.8 ± 8%
z6001/neo/go/zhash.go-P16 128 ±21%
z6001/neo/go(!sha1)/zhash.go 63.8 ±11%
z6001/neo/go(!sha1)/zhash.go+prefetch128 30.0 ± 2%
z6001/neo/go(!sha1)/zhash.go-P16 109 ±30%

dataset:prod1-1024
z6001/fs1/zhash.py 32.4 ± 0%
z6001/fs1/zhash.py-P16 33.1 ± 5%
z6001/fs1/zhash.go 2.40 ± 0%
z6001/fs1/zhash.go+prefetch128 4.88 ± 2%
z6001/fs1/zhash.go-P16 3.07 ±25%
z6001/zeo/zhash.py 570 ±11%
z6001/zeo/zhash.py-P16 1.58k ±20%
z6001/neo/py/sqlite/zhash.py 711 ± 6%
z6001/neo/py/sqlite/zhash.py-P16 2.93k ± 0%
z6001/neo/py/sqlite/zhash.go 351 ± 3%
z6001/neo/py/sqlite/zhash.go+prefetch128 189 ± 3%
z6001/neo/py/sqlite/zhash.go-P16 2.83k ± 3%
z6001/neo/py(!log)/sqlite/zhash.py 698 ± 5%
z6001/neo/py(!log)/sqlite/zhash.py-P16 2.75k ± 0%
z6001/neo/py(!log)/sqlite/zhash.go 322 ± 3%
z6001/neo/py(!log)/sqlite/zhash.go+prefetch128 166 ± 1%
z6001/neo/py(!log)/sqlite/zhash.go-P16 2.52k ± 4%
z6001/neo/py/sql/zhash.py 894 ± 1%
z6001/neo/py/sql/zhash.py-P16 4.89k ± 0%
z6001/neo/py/sql/zhash.go 434 ± 4%
z6001/neo/py/sql/zhash.go+prefetch128 303 ± 3%
z6001/neo/py/sql/zhash.go-P16 4.66k ± 2%
z6001/neo/py(!log)/sql/zhash.py 982 ±30%
z6001/neo/py(!log)/sql/zhash.py-P16 4.26k ± 1%
z6001/neo/py(!log)/sql/zhash.go 405 ± 2%
z6001/neo/py(!log)/sql/zhash.go+prefetch128 267 ±10%
z6001/neo/py(!log)/sql/zhash.go-P16 4.72k ± 0%
z6001/neo/go/zhash.py 359 ±11%
z6001/neo/go/zhash.py-P16 479 ± 2%
z6001/neo/go/zhash.go 68.5 ± 3%
z6001/neo/go/zhash.go+prefetch128 25.8 ± 2%
z6001/neo/go/zhash.go-P16 150 ±47%
z6001/neo/go(!sha1)/zhash.go 56.0 ± 8%
z6001/neo/go(!sha1)/zhash.go+prefetch128 22.8 ± 2%
z6001/neo/go(!sha1)/zhash.go-P16 136 ±43%
-
Kirill Smelkov authored
See previous commit for context.
-
Kirill Smelkov authored
NEO/py currently logs every packet internally to a RAM buffer, which is flushed on SIGRTMIN if such a request comes. This might add overhead, so Julien asked to also have numbers for the NEO/py server with logging disabled. We already don't enable logging on the NEO/py client, since enabling it requires passing a ?logfile query to the ZODB URL and we don't do that anywhere.
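For illustration, below is a minimal sketch of the general "log to RAM, flush on signal" pattern described above, using only the Python standard library (signal and logging.handlers.MemoryHandler). It is an illustrative assumption of the idea, not NEO/py's actual logging code:

    # Sketch only: buffer log records in RAM and flush them to disk on SIGRTMIN.
    # This is NOT NEO/py's implementation - just the general pattern.
    import logging
    import logging.handlers
    import signal

    # target handler that performs the real (expensive) I/O only when flushed
    file_handler = logging.FileHandler("neo.log")

    # keep records in memory; flushLevel above CRITICAL means ordinary records
    # never trigger an automatic flush (only reaching `capacity` or .flush() does)
    ram_handler = logging.handlers.MemoryHandler(
        capacity=100000, flushLevel=logging.CRITICAL + 1, target=file_handler)

    log = logging.getLogger("neo")
    log.addHandler(ram_handler)
    log.setLevel(logging.DEBUG)

    def _dump_log(signum, frame):
        # dump everything accumulated so far when SIGRTMIN arrives
        ram_handler.flush()

    signal.signal(signal.SIGRTMIN, _dump_log)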
-
Kirill Smelkov authored
-
- 05 Nov, 2017 1 commit
-
-
Kirill Smelkov authored
-
- 03 Nov, 2017 3 commits
-
-
Kirill Smelkov authored
bc is marked as optional in Debian and I just hit a machine where it is not installed. Python might also be missing, but we depend on python in many more places, so using it instead of bc reduces the probability that `neotest info-local` won't work on a fresh machine.
-
Kirill Smelkov authored
Similarly to 352cd100 (X neotest: Fix disk display in case of MD).

before:

# vg0-root: rev 74.5G

after:

# dm-0 (vg0-root) -> sdi2
# sdi: PERC H330 Mini rev 4.27 111.3G
-
Kirill Smelkov authored
Similarly to f2932247 (X neotest/info-local: Don't crash if an egg could not be found)

before:

# Fri, 03 Nov 2017 10:47:37 +0300
# kirr@deco.navytux.spb.ru (2401:5180:0:37::1 192.168.0.2)
# Linux deco 4.13.0-1-amd64 #1 SMP Debian 4.13.4-2 (2017-10-15) x86_64 GNU/Linux
# cpu: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
# cpu[0-3]: freq: intel_pstate/powersave [.40GHz - 3.40GHz]
# cpu[0-3]: idle: intel_idle/menu: POLL(0μs) C1(2μs) C1E(10μs) C3(70μs) C6(85μs) C7s(124μs) C8(200μs) C9(480μs) C10(890μs)
# cpu: WARNING: frequency not fixed - benchmark timings won't be stable
# cpu: WARNING: C-state exit-latency is max 890μs - up to that can add to networked and IPC request-reply latency
# sda: SanDisk X400 M.2 rev 0012 477G
# wlan0: Intel Corporation Wireless 8260 rev 3a
# wlan0: features: !rx !tx sg !tso !ufo gso gro !lro !rxvlan !txvlan !ntuple !rxhash ...
# wlan0: coalesce: rxc: ?, txc: ?
# wlan0: down, speed=?, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# wlan0: WARNING: TSO not enabled - TCP latency with packets > MSS will be poor
# eth0: Intel Corporation Ethernet Connection I219-LM rev 21
# eth0: features: rx tx sg tso !ufo gso gro !lro rxvlan txvlan !ntuple rxhash ...
# eth0: coalesce: rxc: 3μs/0f/0μs-irq/0f-irq, txc: 0μs/0f/0μs-irq/0f-irq
# eth0: up, speed=1000, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# Python 2.7.14
# go version go1.9.2 linux/amd64
# sqlite 3.20.1 (py mod 2.6.0)
# ./neotest: line 733: mysqld: command not found

after:

# Fri, 03 Nov 2017 10:44:12 +0300
# kirr@deco.navytux.spb.ru (2401:5180:0:37::1 192.168.0.2)
# Linux deco 4.13.0-1-amd64 #1 SMP Debian 4.13.4-2 (2017-10-15) x86_64 GNU/Linux
# cpu: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
# cpu[0-3]: freq: intel_pstate/powersave [.40GHz - 3.40GHz]
# cpu[0-3]: idle: intel_idle/menu: POLL(0μs) C1(2μs) C1E(10μs) C3(70μs) C6(85μs) C7s(124μs) C8(200μs) C9(480μs) C10(890μs)
# cpu: WARNING: frequency not fixed - benchmark timings won't be stable
# cpu: WARNING: C-state exit-latency is max 890μs - up to that can add to networked and IPC request-reply latency
# sda: SanDisk X400 M.2 rev 0012 477G
# wlan0: Intel Corporation Wireless 8260 rev 3a
# wlan0: features: !rx !tx sg !tso !ufo gso gro !lro !rxvlan !txvlan !ntuple !rxhash ...
# wlan0: coalesce: rxc: ?, txc: ?
# wlan0: down, speed=?, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# wlan0: WARNING: TSO not enabled - TCP latency with packets > MSS will be poor
# eth0: Intel Corporation Ethernet Connection I219-LM rev 21
# eth0: features: rx tx sg tso !ufo gso gro !lro rxvlan txvlan !ntuple rxhash ...
# eth0: coalesce: rxc: 3μs/0f/0μs-irq/0f-irq, txc: 0μs/0f/0μs-irq/0f-irq
# eth0: up, speed=1000, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# Python 2.7.14
# go version go1.9.2 linux/amd64
# sqlite 3.20.1 (py mod 2.6.0)
# mysqld : ø
# neo : ø
# zodb : ø
# zeo : ø
# mysqlclient : ø
# wendelin.core : ø
-
- 02 Nov, 2017 5 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
ZODB: based on 5.3.0-5-gcb928231e + y/rawext patch
zodbtools: with not-yet-in-master format stabilization + rawext

The main reason why the generated files change a lot is this ZODB commit:

    https://github.com/zopefoundation/ZODB/commit/be5a9d54

where the pickle protocol used to save data under python2 changed from 1 to 2.
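To illustrate why a pickle protocol bump alone changes the generated bytes even when the data itself is unchanged, here is a small standalone Python sketch (illustrative only, not part of this commit):

    # The same object serialized with pickle protocol 1 vs protocol 2 yields
    # different byte streams, so files generated from unchanged source data
    # still differ after the protocol used by ZODB was bumped.
    import pickle

    class Object(object):
        def __init__(self, value):
            self.value = value

    obj = Object(u"hello")
    p1 = pickle.dumps(obj, protocol=1)
    p2 = pickle.dumps(obj, protocol=2)

    print(p1 == p2)   # -> False: protocol 2 starts with a PROTO opcode and
                      #    encodes the instance differently from protocol 1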
-
Kirill Smelkov authored
before:

# Thu, 02 Nov 2017 12:06:49 +0300
# kirr@deco.navytux.spb.ru (2401:5180:0:37::1 192.168.0.2)
# Linux deco 4.13.0-1-amd64 #1 SMP Debian 4.13.4-2 (2017-10-15) x86_64 GNU/Linux
# cpu: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
# cpu[0-3]: freq: intel_pstate/powersave [.40GHz - 3.40GHz]
# cpu[0-3]: idle: intel_idle/menu: POLL(0μs) C1(2μs) C1E(10μs) C3(70μs) C6(85μs) C7s(124μs) C8(200μs) C9(480μs) C10(890μs)
# cpu: WARNING: frequency not fixed - benchmark timings won't be stable
# cpu: WARNING: C-state exit-latency is max 890μs - up to that can add to networked and IPC request-reply latency
# sda: SanDisk X400 M.2 rev 0012 477G
# wlan0: Intel Corporation Wireless 8260 rev 3a
# wlan0: features: !rx !tx sg !tso !ufo gso gro !lro !rxvlan !txvlan !ntuple !rxhash ...
# wlan0: coalesce: rxc: ?, txc: ?
# wlan0: down, speed=?, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# wlan0: WARNING: TSO not enabled - TCP latency with packets > MSS will be poor
# eth0: Intel Corporation Ethernet Connection I219-LM rev 21
# eth0: features: rx tx sg tso !ufo gso gro !lro rxvlan txvlan !ntuple rxhash ...
# eth0: coalesce: rxc: 3μs/0f/0μs-irq/0f-irq, txc: 0μs/0f/0μs-irq/0f-irq
# eth0: up, speed=1000, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# Python 2.7.14
# go version go1.9.2 linux/amd64
# sqlite 3.20.1 (py mod 2.6.0)
# mysqld Ver 10.1.26-MariaDB-1 for debian-linux-gnu on x86_64 (Debian unstable)
Traceback (most recent call last):
  File "<string>", line 4, in <module>
  File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 963, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'neoppod' distribution was not found and is required by the application

after:

# Thu, 02 Nov 2017 12:09:01 +0300
# kirr@deco.navytux.spb.ru (2401:5180:0:37::1 192.168.0.2)
# Linux deco 4.13.0-1-amd64 #1 SMP Debian 4.13.4-2 (2017-10-15) x86_64 GNU/Linux
# cpu: Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz
# cpu[0-3]: freq: intel_pstate/powersave [.40GHz - 3.40GHz]
# cpu[0-3]: idle: intel_idle/menu: POLL(0μs) C1(2μs) C1E(10μs) C3(70μs) C6(85μs) C7s(124μs) C8(200μs) C9(480μs) C10(890μs)
# cpu: WARNING: frequency not fixed - benchmark timings won't be stable
# cpu: WARNING: C-state exit-latency is max 890μs - up to that can add to networked and IPC request-reply latency
# sda: SanDisk X400 M.2 rev 0012 477G
# wlan0: Intel Corporation Wireless 8260 rev 3a
# wlan0: features: !rx !tx sg !tso !ufo gso gro !lro !rxvlan !txvlan !ntuple !rxhash ...
# wlan0: coalesce: rxc: ?, txc: ?
# wlan0: down, speed=?, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# wlan0: WARNING: TSO not enabled - TCP latency with packets > MSS will be poor
# eth0: Intel Corporation Ethernet Connection I219-LM rev 21
# eth0: features: rx tx sg tso !ufo gso gro !lro rxvlan txvlan !ntuple rxhash ...
# eth0: coalesce: rxc: 3μs/0f/0μs-irq/0f-irq, txc: 0μs/0f/0μs-irq/0f-irq
# eth0: up, speed=1000, mtu=1500, txqlen=1000, gro_flush_timeout=0.000µs
# Python 2.7.14
# go version go1.9.2 linux/amd64
# sqlite 3.20.1 (py mod 2.6.0)
# mysqld Ver 10.1.26-MariaDB-1 for debian-linux-gnu on x86_64 (Debian unstable)
# neo : ø
# zodb : 5.3.0-6-g52d79afa3
# zeo : 5.1.0-11-gbd4aaf68
# mysqlclient : ø
# wendelin.core : ø
-
- 31 Oct, 2017 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 30 Oct, 2017 3 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
See aeeaef89 (Update comment of RECOVERING state).
-
Kirill Smelkov authored
* origin/master:
  neomigrate: fix typo in a warning message
  neoctl: make cell padding consistent when displaying the partition table
  storage: fix possible crash when delaying replication requests
  qa: bug found in assignment of storage node ids, add test
  Update comment of RECOVERING state
  Add support for OpenSSL >= 1.1
-
- 27 Oct, 2017 6 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
It was errgroup.Group adjusted with Gox for functions with exceptions, but exception usage should be constrained to tests only, and that is easy to do either explicitly with Go(exc.Funcx(...)) or via a local gox function as syntax sugar.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Julien Muchembled authored
-
Kirill Smelkov authored
-
- 25 Oct, 2017 4 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 24 Oct, 2017 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 19 Oct, 2017 6 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Tests can now be run from any dir.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
It was e.g.:

lsblk: /dev/md: not a block device
lsblk: /dev/md: not a block device
# md: rev

now:

# md1 (raid0) -> sda3 sdb3
# sda: SAMSUNG MZ7LN256 rev 100Q 238.5G
# sdb: SAMSUNG MZ7LN256 rev 100Q 238.5G
-
Kirill Smelkov authored
The problem with `getent hosts ...` is that /etc/hosts has to be manually prepared for it to work, which just does not scale and is error-prone. So instead, extract the machine's global IP addresses at runtime, as configured on the interfaces.
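Below is a rough Python sketch of that idea, parsing iproute2's one-line output (`ip -o addr show scope global`); the helper name and the use of Python here are illustrative assumptions, not neotest's actual shell implementation:

    # Sketch: discover the machine's global-scope IP addresses at runtime from
    # the interfaces themselves, instead of relying on a hand-maintained /etc/hosts.
    # Not neotest's actual code.
    import subprocess

    def global_addrs():
        """Return [(interface, address), ...] for every global-scope address."""
        out = subprocess.check_output(
            ["ip", "-o", "addr", "show", "scope", "global"]).decode()
        addrs = []
        for line in out.splitlines():
            # each line looks like:
            #   2: eth0    inet 192.168.0.2/24 brd 192.168.0.255 scope global ...
            fields = line.split()
            ifname, addr = fields[1], fields[3].split("/")[0]
            addrs.append((ifname, addr))
        return addrs

    for ifname, addr in global_addrs():
        print("%s\t%s" % (ifname, addr))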
-