- 05 Oct, 2017 18 commits
-
-
Kirill Smelkov authored
-
Test authored
Same as on oct04 - we just add C-states profiling. In particular, TCP1472 and TCP4096 are awful. C-states are not yet disabled.
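The C-states profiling added in these runs presumably samples the kernel's cpuidle counters; a minimal sketch of reading per-state residency from sysfs follows (standard Linux paths are assumed here - this is for illustration, not necessarily the tooling actually used in these benchmarks):

    // Sketch: sample C-state residency for cpu0 from the Linux cpuidle sysfs
    // interface.  Reading these counters before and after a benchmark run shows
    // how much time the CPU spent in each idle state.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        states, _ := filepath.Glob("/sys/devices/system/cpu/cpu0/cpuidle/state*")
        for _, st := range states {
            name, _ := os.ReadFile(filepath.Join(st, "name"))
            usage, _ := os.ReadFile(filepath.Join(st, "usage")) // how many times the state was entered
            resid, _ := os.ReadFile(filepath.Join(st, "time"))  // total residency, in µs
            fmt.Printf("%-8s usage=%s time=%sµs\n",
                strings.TrimSpace(string(name)),
                strings.TrimSpace(string(usage)),
                strings.TrimSpace(string(resid)))
        }
    }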
-
Test authored
Same as on oct04 - we just add C-states profiling. C-states are not yet disabled.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Compared to just C-states disabled it improves:

    ping56         ~76μs      -> ~40μs
    ping1472       ~150μs     -> ~120μs
    TCP1           ~90μs      -> ~50μs
    TCP1472        ~180μs     -> ~145μs
    TCP4096        ~220-230μs -> ~175μs
    ZEO:           ~660μs     -> ~660μs
    NEO/pylite:    ~630μs     -> ~575μs   (Cpy)
    NEO/pylite:    ~505μs     -> ~460μs   (Cgo)
    NEO/pysql:     ~930μs     -> ~880μs   (Cpy)
    NEO/pysql:     ~810μs     -> ~750μs   (Cgo)
    NEO/go:        ~430μs     -> ~380μs   (Cpy)
    NEO/go:        ~215μs     -> ~170μs   (Cgo)
    NEO/go-nosha1: ~195μs     -> ~150μs
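The baseline in the comparison above has C-states disabled. As a sketch only - it is an assumption how exactly that was done for these runs; kernel boot parameters such as intel_idle.max_cstate or holding /dev/cpu_dma_latency are alternatives - deep C-states can be switched off from userspace via the per-state sysfs knobs (requires root, resets on reboot):

    // Sketch: disable every C-state deeper than state0 (the polling state) by
    // writing 1 to each per-state "disable" knob in sysfs.
    package main

    import (
        "os"
        "path/filepath"
    )

    func main() {
        knobs, _ := filepath.Glob("/sys/devices/system/cpu/cpu*/cpuidle/state[1-9]*/disable")
        for _, k := range knobs {
            _ = os.WriteFile(k, []byte("1"), 0644) // best-effort; errors ignored in this sketch
        }
    }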
-
Kirill Smelkov authored
It improves:

    ping56         ~
    ping1472       ~
    TCP1           ~92μs      -> ~89μs       (c  -> c)
    TCP1           ~120μs     -> ~91μs       (c  -> go)
    TCP1           ~120μs     -> ~90μs       (c  <- c)
    TCP1           ~120μs     -> ~90μs       (go <- c)
    TCP1472        ~220-250μs -> ~180μs
    TCP4096        ~270-300μs -> ~220-230μs
    ZEO:           ~750μs     -> ~660μs
    NEO/pylite:    ~850μs     -> ~630μs      (Cpy)
    NEO/pylite:    ~640μs     -> ~505μs      (Cgo)
    NEO/pysql:     ~1500μs    -> ~930μs      (Cpy)
    NEO/pysql:     ~1350μs    -> ~810μs      (Cgo)
    NEO/go:        ~600μs     -> ~430μs      (Cpy)
    NEO/go:        ~320μs     -> ~215μs      (Cgo)
    NEO/go-nosha1: ~260μs     -> ~195μs
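The TCP1/TCP1472/TCP4096 figures appear to be round-trip times for TCP payloads of the given size. A minimal sketch of such a ping-pong measurement against an echo peer (the address, payload size and round count below are illustrative assumptions; the actual benchmark code may differ):

    // Sketch: measure average TCP round-trip latency for a fixed payload size
    // against a peer that echoes everything back.
    package main

    import (
        "fmt"
        "io"
        "net"
        "time"
    )

    func main() {
        const addr = "127.0.0.1:7777" // hypothetical echo server
        const size = 1472             // payload size: 1, 1472, 4096, ...
        const rounds = 10000

        conn, err := net.Dial("tcp", addr)
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        conn.(*net.TCPConn).SetNoDelay(true) // avoid Nagle delaying small writes

        buf := make([]byte, size)
        start := time.Now()
        for i := 0; i < rounds; i++ {
            if _, err := conn.Write(buf); err != nil {
                panic(err)
            }
            if _, err := io.ReadFull(conn, buf); err != nil {
                panic(err)
            }
        }
        fmt.Printf("TCP%d: avg round trip %v\n", size, time.Since(start)/rounds)
    }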
-
Kirill Smelkov authored
-
Kirill Smelkov authored
It improves a lot:

    ZEO:           ~660μs  -> ~490μs
    NEO/pylite:    ~750μs  -> ~470μs   (Cpy)
    NEO/pylite:    ~500μs  -> ~370μs   (Cgo)
    NEO/pysql:     ~1450μs -> ~780μs   (Cpy)
    NEO/pysql:     ~1000μs -> ~670μs   (Cgo)
    NEO/go:        ~450μs  -> ~280μs   (Cpy)
    NEO/go:        ~90μs   -> ~80μs    (Cgo)
    NEO/go-nosha1: ~60μs   -> ~55μs
-
Kirill Smelkov authored
Same as on oct04, but now we add a C-states profile. C-states are not yet disabled.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
~ the same as on oct04; we only add C-states profiling. C-states are not yet disabled.
-
Kirill Smelkov authored
py timings improve a bit - ~15-20μs per Python part (i.e. ~30-40μs if both client and server are in Python). go-go timings stay ~ the same.
-
Kirill Smelkov authored
Timings are ~ the same as on sep17, but now a lot more info about the system is added + a C-states profile. C-states are not yet disabled.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 04 Oct, 2017 17 commits
-
-
Kirill Smelkov authored
Like info-local but about any remote deployment.
-
Test authored
Numbers stay the same; we just added more diagnostics to the log.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
For ZEO and NEO/py the time over the network is ~ the same as the time over just IPC on z6001! For Go, the network makes it visibly slower. Probably C-states play a big role in the above.
-
Kirill Smelkov authored
Seems to be similar to localhost on z6001 and is a bit _faster_ for cases where there is inter-process communication. This is explained by the fact that intel_idle tends not to go into deeper sleep states whenever the system loadavg increases.
-
Kirill Smelkov authored
Seems to be similar to localhost on neo1.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 03 Oct, 2017 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 29 Sep, 2017 1 commit
-
-
Kirill Smelkov authored
-
- 28 Sep, 2017 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
The network:

- improves much for ping (ethernet drivers are patched: https://marc.info/?l=linux-netdev&m=150654466102277&w=2)
- becomes _worse_ for 1B and 1472B TCP pings
- becomes better for 4K TCP pings

The time for ZEO and NEO/* improves by 100-200μs. It has to be understood what happens with TCP, and then maybe other timings will improve more.
-