- 05 Mar, 2018 5 commits
-
-
Kirill Smelkov authored
Latency is awful (> 500µs) because RX coalescing is not disabled on rio.
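For reference, RX interrupt coalescing can usually be switched off with ethtool (the interface name here is a placeholder, not rio's actual NIC name):

$ ethtool -C <iface> rx-usecs 0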
-
Kirill Smelkov authored
Like in the previous commit, nothing should change compared to 39a77e3b, but it was not checked.
-
Kirill Smelkov authored
Compared to 0ed7b1fc there should not be a difference (checked only on serial cases, with 20180221-deco-noturbo-noz.txt-fix manually edited to use the updated output format for the wczblk1-8 dataset):

$ benchstat -split node,cluster,dataset 20180221-deco-noturbo-noz.txt-fix 20180305-deco-noturbo-c.txt
name  old pystone/s  new pystone/s  delta
node:deco
pystone  221k ± 0%  219k ± 3%  ~  (p=0.690 n=5+5)

name  old time/op  new time/op  delta
node:deco
sha1/py/1K  1.85µs ± 2%  1.84µs ± 1%  ~  (p=0.508 n=5+5)
sha1/go/1K  1.53µs ± 0%  1.53µs ± 0%  ~  (p=0.103 n=5+5)
sha1/py/4K  6.68µs ± 1%  6.69µs ± 0%  ~  (p=0.151 n=5+5)
sha1/go/4K  5.59µs ± 0%  5.59µs ± 0%  ~  (p=0.333 n=5+5)
sha1/py/2M  3.18ms ± 0%  3.18ms ± 0%  ~  (p=0.421 n=5+5)
sha1/go/2M  2.78ms ± 0%  2.78ms ± 0%  ~  (p=1.000 n=5+5)
unzlib/py/null-1K  2.81µs ± 0%  2.91µs ± 3%  +3.49%  (p=0.008 n=5+5)
unzlib/go/null-1K  2.86µs ± 0%  2.88µs ± 1%  +0.93%  (p=0.008 n=5+5)
unzlib/py/null-4K  11.3µs ± 1%  11.4µs ± 2%  +0.89%  (p=0.040 n=5+5)
unzlib/go/null-4K  11.3µs ± 0%  11.4µs ± 0%  +0.89%  (p=0.008 n=5+5)
unzlib/py/null-2M  5.71ms ± 5%  5.66ms ± 3%  ~  (p=1.000 n=5+5)
unzlib/go/null-2M  4.79ms ± 0%  4.79ms ± 0%  ~  (p=0.095 n=5+5)
unzlib/py/prod1-avg  5.02µs ± 1%  5.06µs ± 0%  ~  (p=0.057 n=4+4)
unzlib/go/prod1-avg  5.27µs ± 1%  5.33µs ± 1%  ~  (p=0.087 n=5+5)
unzlib/py/prod1-max  512µs ± 2%  438µs ± 5%  -14.45%  (p=0.008 n=5+5)
unzlib/go/prod1-max  336µs ± 0%  352µs ± 4%  +4.67%  (p=0.008 n=5+5)
disk/randread/direct/4K-min  105µs ± 1%  105µs ± 0%  ~  (p=1.000 n=5+5)
disk/randread/direct/4K-avg  144µs ± 0%  142µs ± 0%  -1.54%  (p=0.008 n=5+5)
disk/randread/direct/2M-min  5.48ms ± 3%  5.34ms ± 3%  ~  (p=0.135 n=5+5)
disk/randread/direct/2M-avg  6.13ms ± 1%  6.04ms ± 2%  ~  (p=0.056 n=5+5)
disk/randread/pagecache/4K-min  570ns ± 1%  583ns ± 1%  +2.28%  (p=0.008 n=5+5)
disk/randread/pagecache/4K-avg  975ns ± 0%  1003ns ± 1%  +2.91%  (p=0.008 n=5+5)
disk/randread/pagecache/2M-min  195µs ± 4%  197µs ± 4%  ~  (p=0.690 n=5+5)
disk/randread/pagecache/2M-avg  214µs ± 0%  220µs ± 1%  +2.80%  (p=0.008 n=5+5)

name  old time/object  new time/object  delta
cluster:deco dataset:wczblk1-8
fs1-zhash.py  20.3µs ± 1%  20.2µs ± 1%  ~  (p=0.444 n=5+4)
fs1-zhash.go  3.20µs ± 0%  3.20µs ± 0%  ~  (all equal)
fs1-zhash.go+prefetch128  4.16µs ± 4%  4.14µs ± 6%  ~  (p=0.952 n=5+5)
zeo/py/fs1-zhash.py  375µs ± 4%  379µs ± 3%  ~  (p=0.690 n=5+5)
neo/py/sqlite-zhash.py  355µs ± 6%  362µs ± 6%  ~  (p=0.421 n=5+5)
neo/py/sqlite-zhash.go  156µs ± 2%  158µs ± 1%  ~  (p=0.151 n=5+5)
neo/py/sqlite-zhash.go+prefetch128  134µs ± 2%  135µs ± 1%  ~  (p=0.421 n=5+5)
neo/py(!log)/sqlite-zhash.py  326µs ± 3%  335µs ± 4%  ~  (p=0.381 n=5+5)
neo/py(!log)/sqlite-zhash.go  143µs ± 3%  145µs ± 2%  ~  (p=0.508 n=5+5)
neo/py(!log)/sqlite-zhash.go+prefetch128  119µs ± 2%  118µs ± 1%  ~  (p=0.421 n=5+5)
neo/py/sql-zhash.py  466µs ±45%  392µs ± 5%  ~  (p=0.111 n=5+4)
neo/py/sql-zhash.go  201µs ± 2%  197µs ± 1%  -1.63%  (p=0.008 n=5+5)
neo/py/sql-zhash.go+prefetch128  184µs ± 2%  180µs ± 2%  -1.96%  (p=0.032 n=5+5)
neo/py(!log)/sql-zhash.py  375µs ± 2%  454µs ±61%  ~  (p=0.286 n=4+5)
neo/py(!log)/sql-zhash.go  182µs ± 2%  183µs ± 1%  ~  (p=0.802 n=5+5)
neo/py(!log)/sql-zhash.go+prefetch128  164µs ± 1%  164µs ± 2%  ~  (p=0.881 n=5+5)
neo/go/fs1-zhash.py  226µs ± 1%  227µs ± 2%  ~  (p=0.397 n=5+5)
neo/go/fs1-zhash.go  56.8µs ± 1%  56.9µs ± 1%  ~  (p=0.889 n=5+5)
neo/go/fs1-zhash.go+prefetch128  24.8µs ± 3%  24.7µs ± 2%  ~  (p=0.651 n=5+5)
neo/go/sqlite-zhash.py  264µs ± 4%  269µs ± 1%  ~  (p=0.548 n=5+5)
neo/go/sqlite-zhash.go  93.5µs ± 0%  92.7µs ± 0%  -0.83%  (p=0.008 n=5+5)
neo/go/sqlite-zhash.go+prefetch128  39.3µs ± 4%  39.8µs ± 8%  ~  (p=0.952 n=5+5)

(Not sure what happened with unzlib/py/prod1-max - probably some background process was also running during that test last time.)
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 04 Mar, 2018 9 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 02 Mar, 2018 6 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 01 Mar, 2018 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 28 Feb, 2018 9 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
This should make inter-host and inter-cluster comparisons of benchmarks more straightforward.
-
Kirill Smelkov authored
C-states are not disabled. First dataset graphed here: https://lab.nexedi.com/kirr/misc/raw/526e5ca/t/zwrk-z600-htoff.png
-
Kirill Smelkov authored
C-states are not disabled. First dataset graphed here: https://lab.nexedi.com/kirr/misc/raw/4a8fedc/t/zwrk-z6001.png
-
Kirill Smelkov authored
Handy for misconfigured systems like fqdn=z6001.ivan.nexedi.com, hostname=z6001-COMP-2784.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 27 Feb, 2018 2 commits
-
-
Kirill Smelkov authored
Very much a draft; work in progress.
-
Kirill Smelkov authored
-
- 26 Feb, 2018 5 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Similarly to wrk on HTTP.

Rationale: simulating multiple clients is:

1. noisy - the timings from run to run sometimes vary by up to 50%;
2. burdened with significant additional overhead - there are constant OS-level process switches between the client processes, and this prevents actually creating the load;
3. such that the load from "2" takes resources away from the server in the localhost case.

So let's switch to simulating many requests in a lightweight way, similarly to how it is done in wrk: in one process and not so many threads (it can be just 1), with many connections opened to the server, loading it the epoll way, with Go providing the epoll-goroutine matching. A sketch of the idea follows.
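For illustration, here is a minimal sketch of this load-generation scheme in Go; the server address, request framing and connection count are hypothetical placeholders, not the actual zwrk code:

package main

import (
	"bufio"
	"context"
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		addr  = "127.0.0.1:8080" // hypothetical server to load
		nconn = 128              // many connections, one OS process
	)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	var nreq int64
	var wg sync.WaitGroup
	for i := 0; i < nconn; i++ {
		wg.Add(1)
		// One goroutine per connection; the Go runtime multiplexes all
		// blocked reads/writes over epoll, so there are no OS-level
		// process switches between the simulated clients.
		go func() {
			defer wg.Done()
			conn, err := net.Dial("tcp", addr)
			if err != nil {
				return
			}
			defer conn.Close()
			r := bufio.NewReader(conn)
			for ctx.Err() == nil {
				if _, err := fmt.Fprintln(conn, "GET"); err != nil { // hypothetical request
					return
				}
				if _, err := r.ReadString('\n'); err != nil { // wait for the reply
					return
				}
				atomic.AddInt64(&nreq, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Printf("%d requests in 10s\n", atomic.LoadInt64(&nreq))
}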
-
Kirill Smelkov authored
-
Kirill Smelkov authored
They are the same size.
-
Kirill Smelkov authored
`proto.PktHeaderLen + packed.Ntoh32(pkth.MsgLen)` could wrap around int32 if the pkth.MsgLen sent by a peer is big.
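Such a computation can be guarded by bounding the untrusted length before the addition is performed. A minimal sketch of that pattern in Go, where pktHeaderLen, pktMaxSize and the helper are hypothetical placeholders rather than the actual neo/go identifiers:

package main

import (
	"encoding/binary"
	"fmt"
)

const (
	pktHeaderLen = 10      // hypothetical header length
	pktMaxSize   = 1 << 26 // hypothetical limit on a whole packet
)

// pktLen derives the total packet length from an untrusted on-wire
// MsgLen. The bound is checked before the addition, so the sum can
// neither wrap around nor exceed pktMaxSize.
func pktLen(wire []byte) (int, bool) {
	msgLen := binary.BigEndian.Uint32(wire) // ~ packed.Ntoh32(pkth.MsgLen)
	if msgLen > pktMaxSize-pktHeaderLen {
		return 0, false // reject: would overflow or exceed the limit
	}
	return pktHeaderLen + int(msgLen), true
}

func main() {
	wire := []byte{0x00, 0x00, 0x10, 0x00} // MsgLen = 4096
	if l, ok := pktLen(wire); ok {
		fmt.Println("total packet length:", l)
	}
}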
-
- 23 Feb, 2018 1 commit
-
-
Kirill Smelkov authored
For reference, on deco (performance, frequency not fixed):

name  time/object
deco/fs1/zhash.py  15.8µs ± 2%
deco/fs1/zhash.py-P16  116µs ±12%
deco/fs1/zhash.go  2.60µs ± 0%
deco/fs1/zhash.go+prefetch128  3.70µs ±11%
deco/fs1/zhash.go-P16  13.4µs ±43%
deco/zeo/zhash.py  316µs ± 7%
deco/zeo/zhash.py-P16  2.68ms ± 7%
deco/zeo/zhash.go  111µs ± 2%
deco/zeo/zhash.go+prefetch128  57.7µs ± 2%
deco/zeo/zhash.go-P16  1.23ms ± 5%
-
- 22 Feb, 2018 1 commit
-
-
Kirill Smelkov authored
Compared to 55a64368 (rio localhost; 2017-11-06) the change is in the kernel:

-# Linux rio 4.13.0-1-amd64 #1 SMP Debian 4.13.4-2 (2017-10-15) x86_64 GNU/Linux
+# Linux rio 4.15.0-1-amd64 #1 SMP Debian 4.15.4-1 (2018-02-18) x86_64 GNU/Linux

and in particular 4.15 comes with PTI.

Summary:

- direct disk 4K read is ~ 93µs, as it was;
- pagecache is now slower due to PTI: the timing changed 0.6µs -> 0.8µs;
- fs1.py access is ~2% slower, probably due to PTI;
- fs1.go access is much slower due to the client cache now always being bolted on: 2.4µs -> 3.8µs (serial); the time for both serial and parallel cases is still small;
- zeo is ~2% slower, probably due to PTI;
- neo/py/sqlite vs zhash.py is 8% faster, because we removed data compression from the benchmark (b9827725); however, the P16 time is not improved;
- neo/py/sqlite vs zhash.go is 40% faster, because we removed data compression from the benchmark and previously the neo/go code was very inefficient at zlib decompression (924831e7, 9acb87a8, 57524491); however, the P16 time is improved only by 2-5%;
- neo/py/sql vs zhash.{py,go} shows changes similar to neo/py/sqlite;
- neo/go/fs1 vs zhash.py is unchanged; vs zhash.go it is slower by 12%, probably due to PTI; the zhash.go P16 case is improved by ~35%, but it could be all noise;
- the newly introduced neo/go/sqlite vs zhash.go is at ~ 80-88µs serial and ~ 230-300µs P16 latencies (neo/py/sqlite for the same vs zhash.go is at ~ 120-130µs serial and ~1600µs P16 latencies); clearly neo/go/sqlite needs more work.

Full details:

$ benchstat -split dataset 20180222-rio.txt |grep sqlite |grep 'neo.go'
rio/neo/go/sqlite/zhash.py  230µs ± 2%
rio/neo/go/sqlite/zhash.py-P16  937µs ± 5%
rio/neo/go/sqlite/zhash.go  88.4µs ± 2%
rio/neo/go/sqlite/zhash.go+prefetch128  33.6µs ± 7%
rio/neo/go/sqlite/zhash.go-P16  228µs ±39%
rio/neo/go/sqlite(!sha1)/zhash.go  79.7µs ± 3%
rio/neo/go/sqlite(!sha1)/zhash.go+prefetch128  32.2µs ± 8%
rio/neo/go/sqlite(!sha1)/zhash.go-P16  213µs ±42%

rio/neo/go/sqlite/zhash.py  232µs ± 2%
rio/neo/go/sqlite/zhash.py-P16  1.00ms ± 3%
rio/neo/go/sqlite/zhash.go  79.2µs ± 2%
rio/neo/go/sqlite/zhash.go+prefetch128  29.8µs ± 8%
rio/neo/go/sqlite/zhash.go-P16  302µs ±40%
rio/neo/go/sqlite(!sha1)/zhash.go  78.3µs ± 2%
rio/neo/go/sqlite(!sha1)/zhash.go+prefetch128  31.1µs ± 2%
rio/neo/go/sqlite(!sha1)/zhash.go-P16  311µs ±40%

$ benchstat -split dataset 20171106-time-rio-Cenabled.txt 20180222-rio.txt
name  old pystone/s  new pystone/s  delta
rio/pystone  178k ± 2%  180k ± 3%  ~  (p=0.421 n=5+5)

name  old time/op  new time/op  delta
rio/disk/randread/direct/4K-min  34.0µs ± 1%  34.7µs ± 1%  +2.06%  (p=0.008 n=5+5)
rio/disk/randread/direct/4K-avg  92.9µs ± 0%  93.2µs ± 0%  +0.37%  (p=0.024 n=5+5)
rio/disk/randread/pagecache/4K-min  221ns ± 0%  441ns ± 0%  +99.37%  (p=0.016 n=4+5)
rio/disk/randread/pagecache/4K-avg  637ns ± 0%  872ns ± 1%  +37.01%  (p=0.016 n=4+5)

name  old time/object  new time/object  delta
dataset:wczblk1-8
rio/fs1/zhash.py  22.3µs ± 2%  22.8µs ± 2%  +2.24%  (p=0.048 n=5+5)
rio/fs1/zhash.py-P16  51.7µs ±72%  60.3µs ±49%  ~  (p=0.127 n=15+16)
rio/fs1/zhash.go  2.40µs ± 0%  3.80µs ± 0%  +58.33%  (p=0.016 n=5+4)
rio/fs1/zhash.go+prefetch128  4.34µs ± 8%  4.10µs ± 2%  ~  (p=0.246 n=5+5)
rio/fs1/zhash.go-P16  3.58µs ±24%  6.79µs ±19%  +90.01%  (p=0.000 n=16+14)
rio/zeo/zhash.py  336µs ± 2%  342µs ± 2%  +1.82%  (p=0.040 n=5+5)
rio/zeo/zhash.py-P16  1.61ms ±19%  1.64ms ±20%  ~  (p=0.423 n=16+16)
rio/neo/py/sqlite/zhash.py  304µs ± 6%  279µs ± 3%  -8.11%  (p=0.016 n=5+5)
rio/neo/py/sqlite/zhash.py-P16  2.19ms ± 0%  2.28ms ± 1%  +4.16%  (p=0.000 n=13+15)
rio/neo/py/sqlite/zhash.go  248µs ± 1%  147µs ± 2%  -40.70%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go+prefetch128  125µs ± 1%  125µs ± 2%  ~  (p=0.651 n=5+5)
rio/neo/py/sqlite/zhash.go-P16  1.76ms ±13%  1.73ms ±14%  -1.72%  (p=0.002 n=16+16)
rio/neo/py(!log)/sqlite/zhash.py  291µs ± 2%  267µs ± 1%  -8.20%  (p=0.008 n=5+5)
rio/neo/py(!log)/sqlite/zhash.py-P16  2.01ms ± 2%  2.01ms ± 1%  ~  (p=0.545 n=16+15)
rio/neo/py(!log)/sqlite/zhash.go  231µs ± 1%  130µs ± 2%  -43.54%  (p=0.008 n=5+5)
rio/neo/py(!log)/sqlite/zhash.go+prefetch128  110µs ± 2%  112µs ± 4%  ~  (p=0.222 n=5+5)
rio/neo/py(!log)/sqlite/zhash.go-P16  1.62ms ± 7%  1.62ms ± 0%  +0.18%  (p=0.029 n=16+11)
rio/neo/py/sql/zhash.py  325µs ± 4%  294µs ± 1%  -9.50%  (p=0.029 n=4+4)
rio/neo/py/sql/zhash.py-P16  2.88ms ± 1%  2.85ms ± 1%  -0.94%  (p=0.000 n=15+15)
rio/neo/py/sql/zhash.go  275µs ± 2%  169µs ± 3%  -38.39%  (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go+prefetch128  154µs ± 3%  154µs ± 4%  ~  (p=1.000 n=5+5)
rio/neo/py/sql/zhash.go-P16  2.30ms ± 8%  2.19ms ±11%  -5.03%  (p=0.009 n=16+16)
rio/neo/py(!log)/sql/zhash.py  313µs ± 4%  285µs ± 5%  -8.95%  (p=0.029 n=4+4)
rio/neo/py(!log)/sql/zhash.py-P16  2.56ms ± 1%  2.47ms ± 1%  -3.45%  (p=0.000 n=15+14)
rio/neo/py(!log)/sql/zhash.go  258µs ± 1%  152µs ± 1%  -41.05%  (p=0.016 n=5+4)
rio/neo/py(!log)/sql/zhash.go+prefetch128  139µs ± 1%  137µs ± 3%  ~  (p=0.151 n=5+5)
rio/neo/py(!log)/sql/zhash.go-P16  2.21ms ± 5%  1.98ms ± 0%  -10.19%  (p=0.000 n=16+11)
rio/neo/go/fs1/zhash.py  190µs ± 3%  190µs ± 4%  ~  (p=1.000 n=5+5)
rio/neo/go/fs1/zhash.py-P16  784µs ± 9%  755µs ± 7%  -3.69%  (p=0.018 n=16+16)
rio/neo/go/fs1/zhash.go  52.0µs ± 1%  58.0µs ± 3%  +11.55%  (p=0.008 n=5+5)
rio/neo/go/fs1/zhash.go+prefetch128  26.6µs ± 5%  24.9µs ± 4%  ~  (p=0.087 n=5+5)
rio/neo/go/fs1/zhash.go-P16  256µs ± 6%  164µs ±50%  -36.13%  (p=0.000 n=15+16)
rio/neo/go/fs1(!sha1)/zhash.go  35.3µs ± 4%  37.3µs ± 2%  +5.73%  (p=0.008 n=5+5)
rio/neo/go/fs1(!sha1)/zhash.go+prefetch128  17.3µs ± 2%  16.3µs ± 2%  -6.00%  (p=0.008 n=5+5)
rio/neo/go/fs1(!sha1)/zhash.go-P16  152µs ±13%  118µs ±51%  -22.57%  (p=0.000 n=13+16)
dataset:prod1-1024
rio/fs1/zhash.py  18.9µs ± 1%  18.9µs ± 2%  ~  (p=0.825 n=5+5)
rio/fs1/zhash.py-P16  58.0µs ±52%  56.2µs ±32%  ~  (p=0.926 n=16+16)
rio/fs1/zhash.go  1.30µs ± 0%  2.80µs ± 0%  +115.38%  (p=0.016 n=4+5)
rio/fs1/zhash.go+prefetch128  2.78µs ±14%  2.70µs ± 0%  ~  (p=0.413 n=5+4)
rio/fs1/zhash.go-P16  2.24µs ±16%  6.59µs ±43%  +194.93%  (p=0.000 n=14+16)
rio/zeo/zhash.py  302µs ± 7%  294µs ± 6%  ~  (p=0.310 n=5+5)
rio/zeo/zhash.py-P16  1.44ms ±11%  1.46ms ± 3%  ~  (p=0.892 n=15+13)
rio/neo/py/sqlite/zhash.py  269µs ± 1%  275µs ± 1%  +2.51%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.py-P16  2.19ms ± 0%  2.34ms ± 0%  +6.70%  (p=0.000 n=16+13)
rio/neo/py/sqlite/zhash.go  158µs ± 1%  140µs ± 3%  -11.68%  (p=0.008 n=5+5)
rio/neo/py/sqlite/zhash.go+prefetch128  116µs ± 3%  118µs ± 1%  ~  (p=0.111 n=5+5)
rio/neo/py/sqlite/zhash.go-P16  1.90ms ± 0%  1.81ms ± 7%  -4.58%  (p=0.000 n=14+16)
rio/neo/py(!log)/sqlite/zhash.py  259µs ± 4%  262µs ± 3%  ~  (p=0.690 n=5+5)
rio/neo/py(!log)/sqlite/zhash.py-P16  1.89ms ± 1%  2.04ms ± 0%  +7.53%  (p=0.000 n=15+15)
rio/neo/py(!log)/sqlite/zhash.go  142µs ± 1%  123µs ± 2%  -13.14%  (p=0.008 n=5+5)
rio/neo/py(!log)/sqlite/zhash.go+prefetch128  101µs ± 2%  103µs ± 1%  +2.37%  (p=0.008 n=5+5)
rio/neo/py(!log)/sqlite/zhash.go-P16  1.57ms ± 0%  1.61ms ± 5%  ~  (p=0.089 n=13+16)
rio/neo/py/sql/zhash.py  337µs ±43%  298µs ± 3%  ~  (p=0.556 n=5+4)
rio/neo/py/sql/zhash.py-P16  2.73ms ± 0%  2.86ms ± 0%  +4.74%  (p=0.000 n=15+15)
rio/neo/py/sql/zhash.go  186µs ± 3%  164µs ± 4%  -11.92%  (p=0.008 n=5+5)
rio/neo/py/sql/zhash.go+prefetch128  145µs ± 2%  147µs ± 2%  ~  (p=0.095 n=5+5)
rio/neo/py/sql/zhash.go-P16  2.29ms ± 6%  2.27ms ± 6%  -1.19%  (p=0.002 n=16+16)
rio/neo/py(!log)/sql/zhash.py  293µs ± 4%  287µs ± 2%  ~  (p=0.343 n=4+4)
rio/neo/py(!log)/sql/zhash.py-P16  2.47ms ± 0%  2.51ms ± 1%  +1.61%  (p=0.000 n=15+15)
rio/neo/py(!log)/sql/zhash.go  168µs ± 1%  146µs ± 2%  -13.49%  (p=0.008 n=5+5)
rio/neo/py(!log)/sql/zhash.go+prefetch128  130µs ± 2%  127µs ± 2%  ~  (p=0.056 n=5+5)
rio/neo/py(!log)/sql/zhash.go-P16  2.08ms ± 3%  2.00ms ± 4%  -3.98%  (p=0.009 n=16+16)
rio/neo/go/fs1/zhash.py  181µs ± 5%  189µs ± 9%  ~  (p=0.079 n=5+5)
rio/neo/go/fs1/zhash.py-P16  714µs ± 6%  711µs ± 3%  ~  (p=0.861 n=15+16)
rio/neo/go/fs1/zhash.go  36.9µs ± 3%  41.8µs ± 2%  +13.21%  (p=0.008 n=5+5)
rio/neo/go/fs1/zhash.go+prefetch128  16.5µs ± 1%  16.8µs ± 3%  ~  (p=0.103 n=5+5)
rio/neo/go/fs1/zhash.go-P16  239µs ± 4%  195µs ± 4%  -18.37%  (p=0.000 n=15+13)
rio/neo/go/fs1(!sha1)/zhash.go  32.7µs ± 7%  36.3µs ± 2%  +11.07%  (p=0.008 n=5+5)
rio/neo/go/fs1(!sha1)/zhash.go+prefetch128  13.5µs ± 1%  13.9µs ± 4%  ~  (p=0.056 n=5+5)
rio/neo/go/fs1(!sha1)/zhash.go-P16  190µs ± 7%  138µs ±60%  -27.42%  (p=0.000 n=14+16)
-