- 12 Mar, 2019 13 commits
- 11 Mar, 2019 4 commits
- 08 Mar, 2019 7 commits
Kirill Smelkov authored
If we keep it activated, there could be a problem at Resync time: if Resync sees that the old zhead.At is outside of DB.δtail coverage, it will invalidate all zhead.cache objects, not only the changed ones. That means that even a non-changed ZBigFile that is being kept active would be invalidated, and oops - PInvalidate will panic.

We could avoid it by deactivating all ZBigFiles on each transaction update, but that can be too expensive if there are many ZBigFiles.

We could avoid the problem another way: extend the ZODB/go API to request that DB.δtail covers a particular connection. That in turn would mean we would also have to extend the ZODB/go API to release a connection from affecting DB via such a constraint. Even if the first step could be done via e.g. another flag, the second step - release - is not very clear: we already have connection "release" on transaction completion, and adding e.g. conn.Close() in addition to that would be ambiguous for users. Also, if wcfs is slow to process invalidations for some reason, such a constraint would mean DB.δtail would ↑ indefinitely.

-> We can solve the problem in another way: don't keep ZBigFile always activated, and just do activation/deactivation as when working with ZODB objects regularly. This does not add any complications to the code flow, and from the performance point of view we can practically avoid the slowdown by teaching zodbCacheControl to also pin ZBigFile in the live cache.
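For illustration, here is a minimal sketch of that per-use activation pattern against the lab.nexedi.com/kirr/neo/go/zodb persistency API; PActivate/PDeactivate are the real API there, while withActivated is only a hypothetical helper:

    package example

    import (
        "context"

        "lab.nexedi.com/kirr/neo/go/zodb"
    )

    // withActivated runs f with obj's state loaded into RAM, and lets the
    // state be deactivated again afterwards.
    func withActivated(ctx context.Context, obj zodb.IPersistent, f func() error) error {
        // bring object state into RAM; cheap if it is already in the live cache
        err := obj.PActivate(ctx)
        if err != nil {
            return err
        }
        // drop our use of the state; whether it is actually evicted is
        // decided by the live-cache policy
        defer obj.PDeactivate()
        return f()
    }

If zodbCacheControl pins ZBigFile objects in the live cache, PActivate on them stays a cheap in-RAM check, which is how the slowdown can be practically avoided.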
- 07 Mar, 2019 1 commit
- 06 Mar, 2019 1 commit
- 05 Mar, 2019 1 commit
- 04 Mar, 2019 3 commits
Kirill Smelkov authored
If we don't early-disable it, we can see a situation where, while handling invalidations, wcfs calls open("@revX/bigfile/...") to upload cache data there, the Go runtime tries to use epoll on that fd, and everything gets stuck as described in the commit referenced in the comments.

In particular the deadlock was easy to trigger in an nproc=1 environment (either a VM with 1 CPU, or e.g. under `taskset -c 0`).

Bug reported by @romain.
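For context, the Go runtime can only get involved with an fd via its netpoller once the fd is wrapped into an os.File. A hedged sketch of one way to keep epoll out of the picture - reading via raw syscalls - which is not necessarily the exact mechanism this commit disables:

    package rawread

    import "syscall"

    // readRaw reads path fully via raw syscalls, so the fd is never wrapped
    // into an os.File and thus never handed to the runtime netpoller.
    func readRaw(path string) ([]byte, error) {
        fd, err := syscall.Open(path, syscall.O_RDONLY, 0)
        if err != nil {
            return nil, err
        }
        defer syscall.Close(fd)

        var data []byte
        buf := make([]byte, 64*1024)
        for {
            n, err := syscall.Read(fd, buf) // plain blocking read; no epoll
            if n > 0 {
                data = append(data, buf[:n]...)
            }
            if err != nil {
                return nil, err
            }
            if n == 0 {
                return data, nil // EOF
            }
        }
    }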
- 01 Mar, 2019 1 commit
- 28 Feb, 2019 1 commit
Kirill Smelkov authored
* master:
  t/qemu-runlinux: Mount bpf and fusectl filesystems
  t/qemu-runlinux: Issue terminal resize before running program
  t/qemu-runlinux: Don't propagate $TERM in graphics mode
- 27 Feb, 2019 3 commits
Kirill Smelkov authored
bpf is needed for tools like bpftrace. fusectl is needed to observe things like /sys/fs/fuse/connections/X/waiting.
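For reference, the two mounts expressed in Go under the conventional mount points (the script itself presumably does the same via mount(8)):

    package main

    import (
        "log"
        "syscall"
    )

    func main() {
        // mount -t bpf bpf /sys/fs/bpf
        err := syscall.Mount("bpf", "/sys/fs/bpf", "bpf", 0, "")
        if err != nil {
            log.Fatalf("mount bpf: %s", err)
        }
        // mount -t fusectl fusectl /sys/fs/fuse/connections
        err = syscall.Mount("fusectl", "/sys/fs/fuse/connections", "fusectl", 0, "")
        if err != nil {
            log.Fatalf("mount fusectl: %s", err)
        }
    }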
Kirill Smelkov authored
Otherwise the program inside sees the default terminal size of 80x25, while after the patch it sees the correct settings. Something is still not completely right with the terminal settings though: e.g. vsplit is not working correctly in vim (the vertical ruler is not straight).
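The resize itself boils down to the TIOCSWINSZ ioctl. A sketch of the equivalent of e.g. `stty rows 48 cols 160` in Go (the 48x160 size and the use of stdin are assumptions for illustration):

    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // tell the kernel the tty size, so that programs querying the
        // terminal see it instead of the 80x25 default
        ws := &unix.Winsize{Row: 48, Col: 160}
        err := unix.IoctlSetWinsize(int(os.Stdin.Fd()), unix.TIOCSWINSZ, ws)
        if err != nil {
            log.Fatal(err)
        }
    }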
Kirill Smelkov authored
Graphics mode runs in another window with its own terminal emulation, so propagating e.g. TERM=xterm into an environment where the terminal is actually emulated as TERM=linux is not correct.
- 25 Feb, 2019 2 commits
- 22 Feb, 2019 3 commits
Kirill Smelkov authored
and gives exactly the same data as non-wcfs Wendelin.core:

---- 8< ----
(neo) (z-dev) (g.env) kirr@deco:~/src/wendelin/wendelin.core$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7,5Gi       931Mi       613Mi       194Mi       6,0Gi       6,1Gi
Swap:            0B          0B          0B
(neo) (z-dev) (g.env) kirr@deco:~/src/wendelin/wendelin.core$ time ./demo/demo_zbigarray.py gen 1.fs
I: RAM:  7.47GB
I: WORK: 14.94GB
gen signal t=0...2.00e+09  float64  (= 14.94GB)
gen signal blk [0:4194304]  (0.2%)
gen signal blk [4194304:8388608]  (0.4%)
gen signal blk [8388608:12582912]  (0.6%)
gen signal blk [12582912:16777216]  (0.8%)
gen signal blk [16777216:20971520]  (1.0%)
gen signal blk [20971520:25165824]  (1.3%)
...
gen signal blk [1988100096:1992294400]  (99.4%)
gen signal blk [1992294400:1996488704]  (99.6%)
gen signal blk [1996488704:2000683008]  (99.8%)
gen signal blk [2000683008:2004649984]  (100.0%)
VIRT: 457 MB    RSS: 259MB

real    7m51,814s
user    2m19,001s
sys     0m42,615s
(neo) (z-dev) (g.env) kirr@deco:~/src/wendelin/wendelin.core$ time ./demo/demo_zbigarray.py read 1.fs
I: RAM:  7.47GB
sig: 2004649984 float64 (= 14.94GB)
<sig>: 2.3794727102747662e-08
S(sig): 47.70009930580747
VIRT: 245 MB    RSS: 49MB

real    2m36,006s
user    0m14,773s
sys     0m59,467s
(neo) (z-dev) (g.env) kirr@deco:~/src/wendelin/wendelin.core$ time WENDELIN_CORE_VIRTMEM=r:wcfs+w:uvmm ./demo/demo_zbigarray.py read 1.fs
I: RAM: 7.47GB
sig: 2004649984 float64 (= 14.94GB)
wcfs: 2019/02/22 21:33:48 zodb: FIXME: open file:///home/kirr/src/wendelin/wendelin.core/1.fs: cache is not ready for invalidations -> NoCache forced
db.openx 03cdd855e0d73622 nopool=true   ; δtail (03cdd855e0d73622, 03cdd855e0d73622]
db.openx 03cdd855e0d73622 nopool=false  ; δtail (03cdd855e0d73622, 03cdd855e0d73622]
W0222 21:35:20.282163    6697 misc.go:84] /: lookup "lib": invalid argument: not @rev
W0222 21:35:20.334896    6697 misc.go:84] /: lookup "libX11.so": invalid argument: not @rev
W0222 21:35:20.340128    6697 misc.go:84] /: lookup "libX11.so.so": invalid argument: not @rev
W0222 21:35:20.342492    6697 misc.go:84] /: lookup "libX11.so.la": invalid argument: not @rev
<sig>: 2.3794727102747662e-08
S(sig): 47.70009930580747
VIRT: 371 MB    RSS: 37MB

real    6m8,611s
user    0m10,167s
sys     0m21,964s
---- 8< ----

Wcfs was not optimized at all yet. Offhand, `perf top` was showing that a lot of time is spent in the garbage collector, but there may also be something to debug in FUSE-level latencies. In any case the FileStorage case is not very representative: in the ZEO/NEO case there are network requests to be made, and they start to dominate the latency of accessing one page/object.