- 06 Mar, 2018 1 commit
-
Kirill Smelkov authored
-
- 05 Mar, 2018 6 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Latency is awful (> 500µs) because RX coalescing is not disabled on rio.
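(Aside, hedged: this is the usual Linux knob, not necessarily what was done on rio. On most NICs, RX interrupt coalescing can be disabled per interface via ethtool; <dev> below is a placeholder:

$ ethtool -C <dev> rx-usecs 0 rx-frames 1

With coalescing off, the NIC raises an interrupt per received frame instead of batching them, which is what round-trip latency measurements need.)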
-
Kirill Smelkov authored
As in the previous commit, there should be no change compared to 39a77e3b, but this was not checked.
-
Kirill Smelkov authored
Compared to 0ed7b1fc there should not be a difference (checked only on serial cases, with 20180221-deco-noturbo-noz.txt-fix manually edited to use the updated output format for the wczblk1-8 dataset):

$ benchstat -split node,cluster,dataset 20180221-deco-noturbo-noz.txt-fix 20180305-deco-noturbo-c.txt
name                                      old pystone/s  new pystone/s  delta
node:deco
pystone                                      221k ± 0%      219k ± 3%     ~      (p=0.690 n=5+5)

name                                      old time/op    new time/op    delta
node:deco
sha1/py/1K                                 1.85µs ± 2%    1.84µs ± 1%     ~      (p=0.508 n=5+5)
sha1/go/1K                                 1.53µs ± 0%    1.53µs ± 0%     ~      (p=0.103 n=5+5)
sha1/py/4K                                 6.68µs ± 1%    6.69µs ± 0%     ~      (p=0.151 n=5+5)
sha1/go/4K                                 5.59µs ± 0%    5.59µs ± 0%     ~      (p=0.333 n=5+5)
sha1/py/2M                                 3.18ms ± 0%    3.18ms ± 0%     ~      (p=0.421 n=5+5)
sha1/go/2M                                 2.78ms ± 0%    2.78ms ± 0%     ~      (p=1.000 n=5+5)
unzlib/py/null-1K                          2.81µs ± 0%    2.91µs ± 3%    +3.49%  (p=0.008 n=5+5)
unzlib/go/null-1K                          2.86µs ± 0%    2.88µs ± 1%    +0.93%  (p=0.008 n=5+5)
unzlib/py/null-4K                          11.3µs ± 1%    11.4µs ± 2%    +0.89%  (p=0.040 n=5+5)
unzlib/go/null-4K                          11.3µs ± 0%    11.4µs ± 0%    +0.89%  (p=0.008 n=5+5)
unzlib/py/null-2M                          5.71ms ± 5%    5.66ms ± 3%     ~      (p=1.000 n=5+5)
unzlib/go/null-2M                          4.79ms ± 0%    4.79ms ± 0%     ~      (p=0.095 n=5+5)
unzlib/py/prod1-avg                        5.02µs ± 1%    5.06µs ± 0%     ~      (p=0.057 n=4+4)
unzlib/go/prod1-avg                        5.27µs ± 1%    5.33µs ± 1%     ~      (p=0.087 n=5+5)
unzlib/py/prod1-max                         512µs ± 2%     438µs ± 5%   -14.45%  (p=0.008 n=5+5)
unzlib/go/prod1-max                         336µs ± 0%     352µs ± 4%    +4.67%  (p=0.008 n=5+5)
disk/randread/direct/4K-min                 105µs ± 1%     105µs ± 0%     ~      (p=1.000 n=5+5)
disk/randread/direct/4K-avg                 144µs ± 0%     142µs ± 0%    -1.54%  (p=0.008 n=5+5)
disk/randread/direct/2M-min                5.48ms ± 3%    5.34ms ± 3%     ~      (p=0.135 n=5+5)
disk/randread/direct/2M-avg                6.13ms ± 1%    6.04ms ± 2%     ~      (p=0.056 n=5+5)
disk/randread/pagecache/4K-min              570ns ± 1%     583ns ± 1%    +2.28%  (p=0.008 n=5+5)
disk/randread/pagecache/4K-avg              975ns ± 0%    1003ns ± 1%    +2.91%  (p=0.008 n=5+5)
disk/randread/pagecache/2M-min              195µs ± 4%     197µs ± 4%     ~      (p=0.690 n=5+5)
disk/randread/pagecache/2M-avg              214µs ± 0%     220µs ± 1%    +2.80%  (p=0.008 n=5+5)

name                                      old time/object  new time/object  delta
cluster:deco dataset:wczblk1-8
fs1-zhash.py                               20.3µs ± 1%    20.2µs ± 1%     ~      (p=0.444 n=5+4)
fs1-zhash.go                               3.20µs ± 0%    3.20µs ± 0%     ~      (all equal)
fs1-zhash.go+prefetch128                   4.16µs ± 4%    4.14µs ± 6%     ~      (p=0.952 n=5+5)
zeo/py/fs1-zhash.py                         375µs ± 4%     379µs ± 3%     ~      (p=0.690 n=5+5)
neo/py/sqlite-zhash.py                      355µs ± 6%     362µs ± 6%     ~      (p=0.421 n=5+5)
neo/py/sqlite-zhash.go                      156µs ± 2%     158µs ± 1%     ~      (p=0.151 n=5+5)
neo/py/sqlite-zhash.go+prefetch128          134µs ± 2%     135µs ± 1%     ~      (p=0.421 n=5+5)
neo/py(!log)/sqlite-zhash.py                326µs ± 3%     335µs ± 4%     ~      (p=0.381 n=5+5)
neo/py(!log)/sqlite-zhash.go                143µs ± 3%     145µs ± 2%     ~      (p=0.508 n=5+5)
neo/py(!log)/sqlite-zhash.go+prefetch128    119µs ± 2%     118µs ± 1%     ~      (p=0.421 n=5+5)
neo/py/sql-zhash.py                         466µs ±45%     392µs ± 5%     ~      (p=0.111 n=5+4)
neo/py/sql-zhash.go                         201µs ± 2%     197µs ± 1%    -1.63%  (p=0.008 n=5+5)
neo/py/sql-zhash.go+prefetch128             184µs ± 2%     180µs ± 2%    -1.96%  (p=0.032 n=5+5)
neo/py(!log)/sql-zhash.py                   375µs ± 2%     454µs ±61%     ~      (p=0.286 n=4+5)
neo/py(!log)/sql-zhash.go                   182µs ± 2%     183µs ± 1%     ~      (p=0.802 n=5+5)
neo/py(!log)/sql-zhash.go+prefetch128       164µs ± 1%     164µs ± 2%     ~      (p=0.881 n=5+5)
neo/go/fs1-zhash.py                         226µs ± 1%     227µs ± 2%     ~      (p=0.397 n=5+5)
neo/go/fs1-zhash.go                        56.8µs ± 1%    56.9µs ± 1%     ~      (p=0.889 n=5+5)
neo/go/fs1-zhash.go+prefetch128            24.8µs ± 3%    24.7µs ± 2%     ~      (p=0.651 n=5+5)
neo/go/sqlite-zhash.py                      264µs ± 4%     269µs ± 1%     ~      (p=0.548 n=5+5)
neo/go/sqlite-zhash.go                     93.5µs ± 0%    92.7µs ± 0%    -0.83%  (p=0.008 n=5+5)
neo/go/sqlite-zhash.go+prefetch128         39.3µs ± 4%    39.8µs ± 8%     ~      (p=0.952 n=5+5)

(Not sure what happened with unzlib/py/prod1-max - probably some background process was also running during that test last time.)
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 04 Mar, 2018 9 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 02 Mar, 2018 6 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 01 Mar, 2018 2 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 28 Feb, 2018 9 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
This should make inter-host or inter-cluster comparisons of benchmarks more straightforward.
-
Kirill Smelkov authored
C-states are not disabled. First dataset graphed here: https://lab.nexedi.com/kirr/misc/raw/526e5ca/t/zwrk-z600-htoff.png
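(Aside, a minimal sketch of one common Linux way to address the C-state caveat; not part of these runs: a process can inhibit deep C-states by writing a 0µs latency target to /dev/cpu_dma_latency and keeping the file open. Assumes root and a little-endian machine:

package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

func main() {
	// The kernel holds the PM QoS constraint only while the fd stays open.
	f, err := os.OpenFile("/dev/cpu_dma_latency", os.O_WRONLY, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // needs root
		os.Exit(1)
	}
	defer f.Close()

	// 0µs tolerated wakeup latency -> CPUs are kept out of deep C-states.
	// The value is a native-endian int32; little-endian is assumed here.
	buf := make([]byte, 4)
	binary.LittleEndian.PutUint32(buf, 0)
	if _, err := f.Write(buf); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	select {} // hold the fd open until the process is killed
}

The constraint is released automatically when the process exits.)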
-
Kirill Smelkov authored
C-states are not disabled. First dataset graphed here: https://lab.nexedi.com/kirr/misc/raw/4a8fedc/t/zwrk-z6001.png
-
Kirill Smelkov authored
Handy for misconfigured systems like fqdn=z6001.ivan.nexedi.com, hostname=z6001-COMP-2784.
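(Illustrative sketch only - the flag name and defaulting here are assumptions, not necessarily what this commit does. The idea: let benchmark labels use an explicitly given node name, falling back to the short hostname:

package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	hostname := flag.String("hostname", "", "node name for benchmark labels (default: short os.Hostname())")
	flag.Parse()

	name := *hostname
	if name == "" {
		h, err := os.Hostname()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Strip the domain part, if any: z6001.ivan.nexedi.com -> z6001.
		name = strings.SplitN(h, ".", 2)[0]
	}
	fmt.Printf("node:%s\n", name)
}
)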
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 27 Feb, 2018 2 commits
-
Kirill Smelkov authored
Very much a draft; work in progress.
-
Kirill Smelkov authored
-
- 26 Feb, 2018 5 commits
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Similarly to wrk for HTTP. Rationale: simulating multiple clients is:

1. noisy - the timings vary from run to run, sometimes by up to 50%;
2. burdened with significant extra overhead - there are constant OS-level process switches between the client processes, and this prevents actually creating the load;
3. in the localhost case, the load from "2" takes resources away from the server itself.

So let's switch to simulating many requests in a lightweight way, similarly to how it is done in wrk: in one process and not so many threads (it can be just 1), with many connections opened to the server, loading it the epoll way, with Go providing the epoll-goroutine matching (see the sketch below).
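(A minimal sketch of this approach - illustrative, not zwrk itself; the address, request format, and counts are made up:

package main

import (
	"fmt"
	"net"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	runtime.GOMAXPROCS(1) // "not so many threads - it can be just 1"

	const (
		addr  = "127.0.0.1:5551" // assumed server address
		nconn = 128              // many connections from one process
		dur   = 10 * time.Second
	)

	var nreq int64
	deadline := time.Now().Add(dur)

	var wg sync.WaitGroup
	for i := 0; i < nconn; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			conn, err := net.Dial("tcp", addr)
			if err != nil {
				return
			}
			defer conn.Close()
			req := []byte("ping\n")
			buf := make([]byte, 1024)
			for time.Now().Before(deadline) {
				// A blocking Write/Read parks only this goroutine; the Go
				// runtime multiplexes all connections over epoll under the
				// hood - the epoll-goroutine matching noted above.
				if _, err := conn.Write(req); err != nil {
					return
				}
				if _, err := conn.Read(buf); err != nil {
					return
				}
				atomic.AddInt64(&nreq, 1)
			}
		}()
	}
	wg.Wait()
	fmt.Printf("%d requests in %v (%.0f req/s)\n", nreq, dur, float64(nreq)/dur.Seconds())
}
)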
-
Kirill Smelkov authored
-
Kirill Smelkov authored
They are the same in size.
-
Kirill Smelkov authored
`proto.PktHeaderLen + packed.Ntoh32(pkth.MsgLen)` could overflow int32 if the pkth.MsgLen that was sent is big.
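(The guard pattern, sketched standalone - the constants and header layout below are illustrative stand-ins for the real proto package, not its actual values:

package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// Stand-ins for proto.PktHeaderLen and an upper bound on whole-packet size.
const (
	pktHeaderLen = 10
	pktMaxSize   = 0x4000000
)

// parseLen shows the guarded computation: msgLen is an untrusted uint32
// read from the wire, like packed.Ntoh32(pkth.MsgLen).
func parseLen(hdr []byte) (int64, error) {
	msgLen := binary.BigEndian.Uint32(hdr[6:10]) // network byte order

	// Naive pktHeaderLen + int32(msgLen) can wrap negative when msgLen is
	// close to math.MaxUint32; check the bound before adding, and do the
	// addition in a wider type.
	if msgLen > pktMaxSize-pktHeaderLen {
		return 0, fmt.Errorf("packet too big: msgLen=%d", msgLen)
	}
	return pktHeaderLen + int64(msgLen), nil
}

func main() {
	hdr := make([]byte, pktHeaderLen)
	binary.BigEndian.PutUint32(hdr[6:10], math.MaxUint32) // malicious length
	if _, err := parseLen(hdr); err != nil {
		fmt.Println(err) // rejected instead of wrapping int32
	}
}
)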
-