- 03 Oct, 2019 3 commits
-
-
Kirill Smelkov authored
We will soon need this functionality for sync_test.py to import tests from _sync_test.pyx. In the future many modules will need to import X_test.pyx from X_test.py as well.
-
Kirill Smelkov authored
- It is in Go style to panic on memory allocation failure.
- _makechan can already panic, instead of returning NULL, on e.g. ch._mu initialization failure.
- We were already not checking for NULL after the _makechan() call in several places (e.g. in golang/runtime/libgolang_test_c.c).

This switches _makechan to panic on memory allocation failure and settles the style for future C-level API calls: panic instead of returning NULL on failure.
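
For illustration, from the C API user side the settled style looks roughly like this (a hedged sketch; exact signatures should be checked against libgolang.h):

    int value = 1;
    _chan *ch = _makechan(sizeof(int), 0);  // panics - instead of returning NULL - if allocation fails
    _chansend(ch, &value);                  // so callers no longer need NULL checks here
    _chanxdecref(ch);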
-
Kirill Smelkov authored
sys is imported at the beginning of main, so there is no need to import it a second time in the middle of main. The bug was there since gpython day 1 - since 32a21d5b (gpython: Python interpreter with support for lightweight threads).
-
- 17 Sep, 2019 11 commits
-
-
Kirill Smelkov authored
This release is bugfix-only. Compared to pygolang v0.0.3 (4ca65816) the change in speed is likely within noise:

(on i7@2.6GHz)

thread runtime:

name             old time/op  new time/op  delta
go               18.3µs ± 1%  18.3µs ± 1%  ~        (p=0.218 n=10+10)
chan             2.97µs ± 5%  2.97µs ± 8%  ~        (p=0.781 n=10+10)
select           3.59µs ± 2%  3.55µs ± 5%  ~        (p=0.447 n=9+10)
def              56.0ns ± 0%  55.0ns ± 0%  -1.79%   (p=0.000 n=10+10)
func_def         43.7µs ± 1%  43.8µs ± 1%  +0.35%   (p=0.029 n=10+10)
call             65.0ns ± 0%  62.3ns ± 1%  -4.15%   (p=0.000 n=10+10)
func_call        1.06µs ± 1%  1.04µs ± 0%  -1.26%   (p=0.000 n=10+8)
try_finally      137ns ± 1%   137ns ± 0%   ~        (p=1.000 n=10+10)
defer            2.32µs ± 0%  2.33µs ± 1%  +0.43%   (p=0.000 n=9+10)
workgroup_empty  37.6µs ± 1%  37.1µs ± 2%  -1.29%   (p=0.003 n=10+10)
workgroup_raise  47.9µs ± 1%  47.6µs ± 0%  -0.63%   (p=0.001 n=9+9)

gevent runtime:

name             old time/op  new time/op  delta
go               15.8µs ± 0%  16.1µs ± 1%  +2.18%   (p=0.000 n=9+10)
chan             7.36µs ± 0%  7.21µs ± 0%  -1.97%   (p=0.000 n=8+10)
select           10.4µs ± 0%  10.5µs ± 0%  +0.71%   (p=0.000 n=10+10)
def              57.0ns ± 0%  55.0ns ± 0%  -3.51%   (p=0.000 n=10+10)
func_def         43.3µs ± 1%  44.1µs ± 2%  +1.81%   (p=0.000 n=10+10)
call             66.0ns ± 0%  65.0ns ± 0%  -1.52%   (p=0.000 n=10+10)
func_call        1.04µs ± 1%  1.06µs ± 1%  +1.48%   (p=0.000 n=10+10)
try_finally      137ns ± 1%   136ns ± 0%   -1.31%   (p=0.000 n=10+10)
defer            2.32µs ± 0%  2.31µs ± 1%  ~        (p=0.472 n=8+10)
workgroup_empty  56.0µs ± 0%  55.7µs ± 0%  -0.49%   (p=0.000 n=10+10)
workgroup_raise  71.3µs ± 1%  71.7µs ± 1%  +0.62%   (p=0.001 n=10+10)
-
Kirill Smelkov authored
Like we already do for e.g. _chan, to increase the probability that if something is used after free we get a NULL-dereference crash instead of a harder-to-understand segfault.
-
Kirill Smelkov authored
Similarly to situation described in dcf4ebd1 (libgolang: Fix chan.close to dequeue subscribers atomically), select can be also accessing a channel object at the time of wakeup when that channel could be already destroyed: select queues waiters to channels recv/send queues and upon wakeup needs to dequeue them. This requires locking channels, not to mention that a channel destroy with non-empty subscribers queue will trigger bug panic. Contrary to the fix for recv, we cannot rework select not to access channel objects after wakeup, because for select upon wakeup all queued channels could be already destroyed, not only selected one. Thus the fix here is to incref/decref the channels for the duration where we need to access them. The bug was not caught by existing tests and was noted while doing libgolang.cpp review for concurrency issues. With added test (hereby fix is served by a bit amended _test_close_wakeup_all) the bug, if not fixed, renders itself as e.g. the following under TSAN: WARNING: ThreadSanitizer: data race (pid=4421) Write of size 8 at 0x7b1400000650 by thread T9: #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a) #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a) #2 golang::_chan::decref() golang/runtime/libgolang.cpp:479 (liblibgolang.so.0.1+0x4822) #3 _chanxdecref golang/runtime/libgolang.cpp:461 (liblibgolang.so.0.1+0x487a) #4 golang::chan<int>::operator=(decltype(nullptr)) golang/libgolang.h:296 (_golang_test.so+0x14cf9) #5 operator() golang/runtime/libgolang_test.cpp:304 (_golang_test.so+0x14cf9) #6 __invoke_impl<void, __test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60 (_golang_test.so+0x14cf9) #7 __invoke<__test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95 (_golang_test.so+0x14cf9) #8 __call<void> /usr/include/c++/8/functional:400 (_golang_test.so+0x14cf9) #9 operator()<> /usr/include/c++/8/functional:484 (_golang_test.so+0x14cf9) #10 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (_golang_test.so+0x14cf9) #11 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (_golang_test.so+0x1850c) #12 operator() golang/libgolang.h:273 (_golang_test.so+0x1843a) #13 _FUN golang/libgolang.h:271 (_golang_test.so+0x1843a) #14 <null> <null> (python2.7+0x1929e3) Previous read of size 8 at 0x7b1400000650 by thread T10: #0 golang::Sema::acquire() golang/runtime/libgolang.cpp:168 (liblibgolang.so.0.1+0x413a) #1 golang::Mutex::lock() golang/runtime/libgolang.cpp:179 (liblibgolang.so.0.1+0x424a) #2 operator() golang/runtime/libgolang.cpp:1044 (liblibgolang.so.0.1+0x424a) #3 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (liblibgolang.so.0.1+0x424a) #4 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (liblibgolang.so.0.1+0x5f07) #5 golang::_deferred::~_deferred() golang/runtime/libgolang.cpp:215 (liblibgolang.so.0.1+0x5f07) #6 __chanselect2 golang/runtime/libgolang.cpp:1044 (liblibgolang.so.0.1+0x5f07) #7 _chanselect2<true> golang/runtime/libgolang.cpp:968 (liblibgolang.so.0.1+0x6665) #8 _chanselect golang/runtime/libgolang.cpp:963 (liblibgolang.so.0.1+0x6665) #9 select golang/libgolang.h:386 (_golang_test.so+0x14fc1) #10 operator() golang/runtime/libgolang_test.cpp:320 (_golang_test.so+0x14fc1) #11 __invoke_impl<void, __test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60 (_golang_test.so+0x14fc1) #12 __invoke<__test_close_wakeup_all(bool)::<lambda()>&> 
/usr/include/c++/8/bits/invoke.h:95 (_golang_test.so+0x14fc1) #13 __call<void> /usr/include/c++/8/functional:400 (_golang_test.so+0x14fc1) #14 operator()<> /usr/include/c++/8/functional:484 (_golang_test.so+0x14fc1) #15 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (_golang_test.so+0x14fc1) #16 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (_golang_test.so+0x1850c) #17 operator() golang/libgolang.h:273 (_golang_test.so+0x183da) #18 _FUN golang/libgolang.h:271 (_golang_test.so+0x183da) #19 <null> <null> (python2.7+0x1929e3) Thread T9 (tid=4661, running) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x3f98) #3 go<__test_close_wakeup_all(bool)::<lambda()> > golang/libgolang.h:271 (_golang_test.so+0x16c94) #4 __test_close_wakeup_all(bool) golang/runtime/libgolang_test.cpp:298 (_golang_test.so+0x16c94) #5 _test_close_wakeup_all_vsselect() golang/runtime/libgolang_test.cpp:342 (_golang_test.so+0x16f64) #6 __pyx_pf_6golang_12_golang_test_24test_close_wakeup_all_vsselect golang/_golang_test.cpp:4013 (_golang_test.so+0xd92a) #7 __pyx_pw_6golang_12_golang_test_25test_close_wakeup_all_vsselect golang/_golang_test.cpp:3978 (_golang_test.so+0xd92a) #8 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) Thread T10 (tid=4662, running) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x3f98) #3 go<__test_close_wakeup_all(bool)::<lambda()> > golang/libgolang.h:271 (_golang_test.so+0x16d96) #4 __test_close_wakeup_all(bool) golang/runtime/libgolang_test.cpp:315 (_golang_test.so+0x16d96) #5 _test_close_wakeup_all_vsselect() golang/runtime/libgolang_test.cpp:342 (_golang_test.so+0x16f64) #6 __pyx_pf_6golang_12_golang_test_24test_close_wakeup_all_vsselect golang/_golang_test.cpp:4013 (_golang_test.so+0xd92a) #7 __pyx_pw_6golang_12_golang_test_25test_close_wakeup_all_vsselect golang/_golang_test.cpp:3978 (_golang_test.so+0xd92a) #8 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) and reliably crashes under regular builds.
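
For illustration, the essence of the fix can be sketched as follows (a simplified sketch, not the actual __chanselect2 code; park_and_wait_for_winner and dequeue_from_all_queued_channels are made-up helper names, and _chanxincref is assumed to be the incref counterpart of _chanxdecref):

    // pin every queued channel so it stays alive until we finish dequeuing
    // ourselves from its recv/send queues after wakeup
    for (int i = 0; i < casec; i++)
        _chanxincref(casev[i].ch);
    park_and_wait_for_winner();             // wakeup: some case has won
    dequeue_from_all_queued_channels();     // safe: channels cannot be freed under us
    for (int i = 0; i < casec; i++)
        _chanxdecref(casev[i].ch);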
-
Kirill Smelkov authored
During the second phase of select - after all cases were polled and noone was found to be ready - cases are queued to corresponding channels recv and send queues. While that queueing is in progress, a case, that was already queued, could win (see f0b592b4 "select: Don't let both a queued and a tried cases win at the same time" for details). If such won case is detected, we break out of queuing loop, but currently don't wait for that case to become ready. This is a bug, because when a case is marked as won, its data is not yet copied - for example for won recv case if we don't wait for that case to become ready, we will be returning from select while corresponding *prx and *rxok for recv waiter is still being copied in progress. An example TSAN reports for this bug are as follows: (1) WARNING: ThreadSanitizer: data race (pid=8223) Read of size 1 at 0x7b1800000a48 by main thread: #0 __chanselect2 golang/runtime/libgolang.cpp:1112 (liblibgolang.so.0.1+0x5fd6) #1 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6665) #2 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6665) #3 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.so+0x1deac) #4 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.so+0x1deac) #5 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.so+0x1deac) #6 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) Previous write of size 1 at 0x7b1800000a48 by thread T57: #0 golang::_RecvSendWaiting::wakeup(bool) golang/runtime/libgolang.cpp:346 (liblibgolang.so.0.1+0x459d) #1 golang::_chan::_tryrecv(void*, bool*) golang/runtime/libgolang.cpp:730 (liblibgolang.so.0.1+0x511d) #2 __chanselect2 golang/runtime/libgolang.cpp:1074 (liblibgolang.so.0.1+0x5d4b) #3 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6665) #4 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6665) #5 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.so+0x1deac) #6 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.so+0x1deac) #7 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.so+0x1deac) #8 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) #9 <null> <null> (python2.7+0x1929e3) Location is heap block of size 96 at 0x7b1800000a20 allocated by main thread: #0 calloc ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:623 (libtsan.so.0+0x2b323) #1 calloc ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:618 (libtsan.so.0+0x2b323) #2 __chanselect2 golang/runtime/libgolang.cpp:1018 (liblibgolang.so.0.1+0x5b8c) #3 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6665) #4 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6665) #5 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.so+0x1deac) #6 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.so+0x1deac) #7 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.so+0x1deac) #8 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) Thread T57 (tid=13758, finished) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x3f98) #3 __pyx_f_6golang_7_golang__taskgo_pyexc golang/_golang.cpp:5926 (_golang.so+0x16f7e) #4 __pyx_pf_6golang_7_golang_2pygo golang/_golang.cpp:2399 
(_golang.so+0x16f7e) #5 __pyx_pw_6golang_7_golang_3pygo golang/_golang.cpp:2324 (_golang.so+0x16f7e) #6 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) (2) WARNING: ThreadSanitizer: data race (pid=14185) Write of size 8 at 0x7b1800003000 by thread T95: #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a) #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a) #2 operator() golang/runtime/libgolang.cpp:1023 (liblibgolang.so.0.1+0x44bd) #3 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (liblibgolang.so.0.1+0x44bd) #4 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (liblibgolang.so.0.1+0x5fd8) #5 golang::_deferred::~_deferred() golang/runtime/libgolang.cpp:215 (liblibgolang.so.0.1+0x5fd8) #6 __chanselect2 golang/runtime/libgolang.cpp:1023 (liblibgolang.so.0.1+0x5fd8) #7 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6736) #8 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6736) #9 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #10 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #11 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #12 _PyCFunction_FastCallDict Objects/methodobject.c:231 (python3.6+0xd4db9) #13 pythread_wrapper Python/thread_pthread.h:205 (python3.6+0x6c5d6) Previous read of size 8 at 0x7b1800003000 by main thread: #0 golang::_RecvSendWaiting::wakeup(bool) golang/runtime/libgolang.cpp:347 (liblibgolang.so.0.1+0x4769) #1 golang::_chan::_trysend(void const*) golang/runtime/libgolang.cpp:661 (liblibgolang.so.0.1+0x5781) #2 _chanselect golang/runtime/libgolang.cpp:901 (liblibgolang.so.0.1+0x64d9) #3 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #4 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #5 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #6 _PyCFunction_FastCallDict Objects/methodobject.c:231 (python3.6+0xd4db9) Thread T95 (tid=16547, running) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread Python/thread_pthread.h:252 (python3.6+0x6c67e) #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x4158) #3 __pyx_f_6golang_7_golang__taskgo_pyexc golang/_golang.cpp:5926 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1a9b5) #4 __pyx_pf_6golang_7_golang_2pygo golang/_golang.cpp:2399 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1a9b5) #5 __pyx_pw_6golang_7_golang_3pygo golang/_golang.cpp:2324 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1a9b5) #6 _PyCFunction_FastCallDict Objects/methodobject.c:231 (python3.6+0xd4db9) -> Fix it by always waiting for WaitGroup's won case to become ready. The bug was introduced in 3b241983 (Port/move channels to C/C++/Pyx). Before that - when channels were implemented at Python level, we were always waiting on select's group. Added test catches the bug on all - even not under TSAN - builds.
-
Kirill Smelkov authored
Currently chan.close iterates all send/recv subscribers unlocking/relocking the channel for each and notifying dequeued subscriber with channel unlocked. This leads to that even if channel had only one subscriber, chan.close accesses chan._mu again - after notifying that subscriber. That in turn means that an idiom where e.g. a done channel is passed to worker, which worker closes at the end, and main task waiting on the done and destroying done right after wakeup cannot work - because close, internally, accesses already destroyed channel as the following TSAN report shows for _test_go_c: WARNING: ThreadSanitizer: data race (pid=7143) Write of size 8 at 0x7b1400000650 by main thread: #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a) #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a) #2 golang::_chan::decref() golang/runtime/libgolang.cpp:470 (liblibgolang.so.0.1+0x47f2) #3 _chanxdecref golang/runtime/libgolang.cpp:452 (liblibgolang.so.0.1+0x484a) #4 _test_go_c golang/runtime/libgolang_test_c.c:86 (_golang_test.so+0x13a2e) #5 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa) #6 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa) #7 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) Previous read of size 8 at 0x7b1400000650 by thread T8: #0 golang::Sema::acquire() golang/runtime/libgolang.cpp:164 (liblibgolang.so.0.1+0x410a) #1 golang::Mutex::lock() golang/runtime/libgolang.cpp:175 (liblibgolang.so.0.1+0x4c82) #2 golang::_chan::close() golang/runtime/libgolang.cpp:754 (liblibgolang.so.0.1+0x4c82) #3 _chanclose golang/runtime/libgolang.cpp:732 (liblibgolang.so.0.1+0x4d1a) #4 _work golang/runtime/libgolang_test_c.c:92 (_golang_test.so+0x136cc) #5 <null> <null> (python2.7+0x1929e3) Thread T8 (tid=7311, finished) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:119 (liblibgolang.so.0.1+0x3f68) #3 _test_go_c golang/runtime/libgolang_test_c.c:84 (_golang_test.so+0x13a1c) #4 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa) #5 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa) #6 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) -> Fix close to dequeue all channel's subscribers atomically, and notify them all after channel is unlocked and _no_ longer accessed. Close was already working this way when channels were done at Python level, but in 3b241983 (Port/move channels to C/C++/Pyx) I introduced this bug while trying to avoid additional memory allocation in close. Added test catches the bug on all - even not under TSAN - builds. ---- Added test also reveals another bug: recv<onstack=false> uses channel after wakeup, and, as at the time of wakeup the channel could be already destroyed, that segfaults. Fix it by pre-reading in recv everything needed from _chan object before going to sleep. This fix cannot go separately from close fix, as fixed close is required for recv-uses-chan-after-wakeup testcase.
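
For illustration, the done-channel idiom described above looks roughly like this at the C level (a hedged sketch modeled on _test_go_c from the report; exact signatures should be checked against libgolang.h):

    static void _work(void *arg) {
        _chan *done = (_chan *)arg;
        // ... do the work ...
        _chanclose(done);       // wakes the receiver; must not touch `done` afterwards
    }

    void example(void) {
        _chan *done = _makechan(0, 0);   // elemsize=0: signal-only channel
        _taskgo(_work, done);
        _chanrecv(done, NULL);           // woken by close
        _chanxdecref(done);              // may free `done` immediately after wakeup
    }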
-
Kirill Smelkov authored
This will be needed in the next patch.
-
Kirill Smelkov authored
When a lambda captures a stack-resided variable by reference, it actually remembers the address of that variable, and inside the lambda code dereferences that address on variable use. The lifetime of all spawned goroutines in libgolang_test is a subset of the test driver function's lifetime, so capturing by reference should be safe in that situation at first glance.

However, for that to work it is required that the stacks of both goroutines - the main goroutine and the spawned goroutine - are live at the same time, so that the spawned goroutine can safely retrieve a reference-captured variable located on the main goroutine stack. This works for the thread runtime, but is known not to work for the gevent runtime, where an inactive goroutine stack is swapped onto the heap and is generally considered "dead" while that goroutine is parked (see the "Implementation note" in 3b241983 "Port/move channels to C/C++/Pyx" for details about this).

-> Fix the test by capturing by value in lambdas. What we capture is usually a chan<T> object, which is itself a pointer, so it should not make a big difference in efficiency. It is also safer to capture channels by value, since that automatically increfs/decrefs them and adds extra protection wrt lifetime management bugs.

NOTE sending/receiving via channels from/to stack-based variables is always safe - for both thread and gevent runtimes, as the channels implementation explicitly cares for this to work. Once again the "Implementation note" in 3b241983 has the details.
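
A hedged before/after sketch of the change (payload type and values are illustrative):

    chan<int> ch = makechan<int>();

    // before (problematic under gevent): capture by reference - the lambda
    // dereferences an address on the main goroutine stack, which may be swapped
    // out to the heap while the main goroutine is parked
    //   go([&ch]() { ch.send(1); });

    // after: capture by value - chan<T> is a small refcounted handle, so the copy
    // is cheap and also keeps the channel alive inside the spawned goroutine
    go([ch]() { ch.send(1); });

    ch.recv();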
-
Kirill Smelkov authored
Give what() to PanicError so that uncaught panics give a proper message on the std::terminate_handler crash instead of printing just "std::exception". For example

    terminate called after throwing an instance of 'golang::PanicError'
      what():  chan: decref: free: recvq not empty

instead of

    terminate called after throwing an instance of 'golang::PanicError'
      what():  std::exception
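
For illustration, code that does catch the exception can now also report the panic message (a minimal hedged sketch; the panicking call is illustrative):

    void report_panics() {
        try {
            something_that_panics();                   // illustrative
        } catch (const golang::PanicError &e) {
            fprintf(stderr, "panic: %s\n", e.what());  // e.g. "chan: decref: free: recvq not empty"
        }
    }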
-
Kirill Smelkov authored
- ThreadSanitizer helps to detect races and some memory errors, - AddressSanitizer helps to detect memory errors, - Python debug builds help to detect e.g reference counting errors. Adding all those tools to testing coverage discovers e.g. the following bugs (not a full list): ---- 8< ---- py27-thread-tsan: WARNING: ThreadSanitizer: data race (pid=7143) Write of size 8 at 0x7b1400000650 by main thread: #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a) #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a) #2 golang::_chan::decref() golang/runtime/libgolang.cpp:470 (liblibgolang.so.0.1+0x47f2) #3 _chanxdecref golang/runtime/libgolang.cpp:452 (liblibgolang.so.0.1+0x484a) #4 _test_go_c golang/runtime/libgolang_test_c.c:86 (_golang_test.so+0x13a2e) #5 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa) #6 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa) #7 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) Previous read of size 8 at 0x7b1400000650 by thread T8: #0 golang::Sema::acquire() golang/runtime/libgolang.cpp:164 (liblibgolang.so.0.1+0x410a) #1 golang::Mutex::lock() golang/runtime/libgolang.cpp:175 (liblibgolang.so.0.1+0x4c82) #2 golang::_chan::close() golang/runtime/libgolang.cpp:754 (liblibgolang.so.0.1+0x4c82) #3 _chanclose golang/runtime/libgolang.cpp:732 (liblibgolang.so.0.1+0x4d1a) #4 _work golang/runtime/libgolang_test_c.c:92 (_golang_test.so+0x136cc) #5 <null> <null> (python2.7+0x1929e3) Thread T8 (tid=7311, finished) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:119 (liblibgolang.so.0.1+0x3f68) #3 _test_go_c golang/runtime/libgolang_test_c.c:84 (_golang_test.so+0x13a1c) #4 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa) #5 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa) #6 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) py37-thread-asan: ==22205==ERROR: AddressSanitizer: heap-use-after-free on address 0x607000002cd0 at pc 0x7fd3732a7679 bp 0x7fd3723c8c50 sp 0x7fd3723c8c48 READ of size 8 at 0x607000002cd0 thread T7 #0 0x7fd3732a7678 in golang::Sema::acquire() golang/runtime/libgolang.cpp:164 #1 0x7fd3732a8644 in golang::Mutex::lock() golang/runtime/libgolang.cpp:175 #2 0x7fd3732a8644 in golang::_chan::close() golang/runtime/libgolang.cpp:754 #3 0x7fd3724004b2 in golang::chan<golang::structZ>::close() const golang/libgolang.h:323 #4 0x7fd3724004b2 in operator() golang/runtime/libgolang_test.cpp:262 #5 0x7fd3724004b2 in __invoke_impl<void, _test_chan_vs_stackdeadwhileparked()::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60 #6 0x7fd3724004b2 in __invoke<_test_chan_vs_stackdeadwhileparked()::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95 #7 0x7fd3724004b2 in __call<void> /usr/include/c++/8/functional:400 #8 0x7fd3724004b2 in operator()<> /usr/include/c++/8/functional:484 #9 0x7fd3724004b2 in _M_invoke /usr/include/c++/8/bits/std_function.h:297 #10 0x7fd3723fdc6e in std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 #11 0x7fd3723fdc6e in operator() golang/libgolang.h:273 #12 0x7fd3723fdc6e in _FUN golang/libgolang.h:271 #13 0x62ddf3 
(/home/kirr/src/tools/go/pygolang-master/.tox/py37-thread-asan/bin/python3+0x62ddf3) #14 0x7fd377393fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486 #15 0x7fd376eda4ce in clone (/lib/x86_64-linux-gnu/libc.so.6+0xf94ce) 0x607000002cd0 is located 16 bytes inside of 72-byte region [0x607000002cc0,0x607000002d08) freed by thread T0 here: #0 0x7fd377519fb0 in __interceptor_free (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe8fb0) #1 0x7fd372401335 in golang::chan<golang::structZ>::~chan() golang/libgolang.h:292 #2 0x7fd372401335 in _test_chan_vs_stackdeadwhileparked() golang/runtime/libgolang_test.cpp:222 previously allocated by thread T0 here: #0 0x7fd37751a518 in calloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe9518) #1 0x7fd3732a7d0b in zalloc golang/runtime/libgolang.cpp:1185 #2 0x7fd3732a7d0b in _makechan golang/runtime/libgolang.cpp:413 Thread T7 created by T0 here: #0 0x7fd377481db0 in __interceptor_pthread_create (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x50db0) #1 0x62df39 in PyThread_start_new_thread (/home/kirr/src/tools/go/pygolang-master/.tox/py37-thread-asan/bin/python3+0x62df39) SUMMARY: AddressSanitizer: heap-use-after-free golang/runtime/libgolang.cpp:164 in golang::Sema::acquire() Shadow bytes around the buggy address: 0x0c0e7fff8540: fa fa fa fa fd fd fd fd fd fd fd fd fd fa fa fa 0x0c0e7fff8550: fa fa fd fd fd fd fd fd fd fd fd fa fa fa fa fa 0x0c0e7fff8560: fd fd fd fd fd fd fd fd fd fd fa fa fa fa fd fd 0x0c0e7fff8570: fd fd fd fd fd fd fd fa fa fa fa fa fd fd fd fd 0x0c0e7fff8580: fd fd fd fd fd fa fa fa fa fa fd fd fd fd fd fd =>0x0c0e7fff8590: fd fd fd fa fa fa fa fa fd fd[fd]fd fd fd fd fd 0x0c0e7fff85a0: fd fa fa fa fa fa 00 00 00 00 00 00 00 00 00 fa 0x0c0e7fff85b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0e7fff85c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0e7fff85d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0e7fff85e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ---- 8< ---- The bugs will be addressed in the followup patches.
-
Kirill Smelkov authored
Make it clear which scope is covered by with_lock(mu): instead of implicitly having it from with_lock till the end of the current block

    with_lock(mu);
    ...
    }   // end of current block

make it the statement covered by with_lock, as if with_lock was e.g. an `if`, for example:

    with_lock(mu)
        do_something();
    // mu released here

or

    with_lock(mu) {
        do_smth1();
        do_smth2();
    }
    // mu released here

This makes the intent in __chanselect2 more clear: it was

    ch->_mu.lock();
    with_lock(g->_mu);
    ...
    ch->_mu.unlock();

and semantically a human expects g->_mu to be released _before_ ch->_mu.unlock(). However with the current with_lock implementation, g->_mu will be released _after_ ch->_mu.unlock(), which goes against intuition.

-> By reworking the with_lock implementation to cover only the next statement or block of code we make sure that g->_mu will be released _before_ ch->_mu - the same way as it was until 3b241983 (Port/move channels to C/C++/Pyx):

    ch->_mu.lock();
    with_lock(g->_mu) {
        ...
    }
    ch->_mu.unlock();
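
One way such statement-scoped locking can be implemented is with a for-based guard macro; a hedged sketch, not necessarily the actual libgolang implementation:

    struct _with_lock_guard {
        golang::Mutex &mu;
        bool           done;
        _with_lock_guard(golang::Mutex &mu) : mu(mu), done(false) { mu.lock();   }
        ~_with_lock_guard()                                       { mu.unlock(); }
    };

    // the guard - and thus the lock - lives only for the following statement/block
    #define with_lock(mu) \
        for (_with_lock_guard _g(mu); !_g.done; _g.done = true)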
-
Kirill Smelkov authored
Be on the safe side: both aspects - copy and move - are forbidden for all those internal classes.
-
- 16 Sep, 2019 2 commits
-
-
Kirill Smelkov authored
We already clear the whole chan memory (struct _chan + the channel buffer that comes after it) on channel release. However, probably due to a thinko, on the creation path only struct _chan - without the buffer - was cleared.

-> Be on the safe side and clear everything. Use a zalloc helper to automatically avoid bugs when the size of the allocation != the size of the clear.
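
For illustration, such a helper can be as small as the following (a hedged sketch; the real one lives in libgolang.cpp):

    // allocate `size` bytes and return them zeroed; allocation and clearing are
    // done together, so their sizes cannot diverge
    static void *zalloc(size_t size) {
        return calloc(1, size);
    }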
-
Kirill Smelkov authored
I reported the bug (see 69db91bf "libgolang: Add internal semaphores") to CPython and PyPy upstreams. PyPy people already fixed it and the fix, if I understood correctly, should be available as part of PyPy 7.2. Let's hope that CPython 2.7 will also be fixed.

- https://bugs.python.org/issue38106 -> https://github.com/python/cpython/pull/16047
- https://bitbucket.org/pypy/pypy/issues/3072

The added Python-level test triggers the bug with high probability. A dedicated C-level Semaphore-only test is pending.
-
- 11 Sep, 2019 4 commits
-
-
Kirill Smelkov authored
With the WAIT_LOCK flag and correct usage PyThread_acquire_lock should never fail, but let's be cautious and verify that it indeed succeeds in acquiring the lock. The rest of the external functions called by the thread runtime either return void or have their return value checked.
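
A hedged sketch of the added check (PyThread_acquire_lock returns nonzero on success; the panic message is illustrative):

    int ok = PyThread_acquire_lock(lock, WAIT_LOCK);
    if (!ok)
        panic("sema: acquire failed");  // should never happen with WAIT_LOCK, but verify anyway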
-
Kirill Smelkov authored
- pytest cache
- pip wheel metadata
- tags (from ctags)
- vim swap files
-
Kirill Smelkov authored
In Python, %-formatting needs % operator, not ",".
-
Kirill Smelkov authored
chan<T> is a pointer type and e.g. send does not change the pointer - it only "modifies" the channel buffer. Marking the appropriate methods as const is needed so that `const chan<T>` can be used to send/receive the same way as a plain `chan<T>` is used.
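
For illustration, after this change code along the following lines compiles (a hedged sketch):

    // const applies to the chan<int> handle, not to the channel state it points to
    void produce(const chan<int> ch) { ch.send(1);       }
    int  consume(const chan<int> ch) { return ch.recv(); }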
-
- 01 Sep, 2019 1 commit
-
-
Kirill Smelkov authored
ch->_mu was initialized with the Sema - not Mutex - constructor. It appeared to work only because Mutex is currently a trivial wrapper around Sema.
-
- 29 Aug, 2019 19 commits
-
-
Kirill Smelkov authored
As suggested by https://github.com/jayvdb, run a manifest checker, and oops - it finds that we forgot to include pyproject.toml for testprog/golang_pyx_user:

    missing from sdist:
      golang/pyx/testprog/golang_pyx_user/pyproject.toml
    suggested MANIFEST.in rules:
      recursive-include golang *.toml

-> Fix MANIFEST.in in a generic way to include golang/*/*.toml

The bug was not affecting pygolang usage - only golang.pyx.build tests were failing for sdist-installed pygolang.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
- Move the channels implementation to C++ inside libgolang. The code and logic is based on the previous Python-level channels implementation, but the new code is just C++ and does not depend on Python nor GIL at all, and so works without GIL if the libgolang runtime works without GIL(*).

  (*) for example the "thread" runtime works without GIL, while the "gevent" runtime acquires GIL on every semaphore acquire.

  The new channels implementation is located in δ(libgolang.cpp).

- Provide a low-level C channels API to the implementation. The low-level C API was inspired by Libtask[1] and Plan9/Libthread[2].

  [1] Libtask: a Coroutine Library for C and Unix. https://swtch.com/libtask.
  [2] http://9p.io/magic/man2html/2/thread.

- Provide a high-level C++ channels API that provides type-safety and automatic channel lifetime management. An overview of the C and C++ APIs is in δ(libgolang.h).

- Expose the C++ channels API at Pyx level as a Cython/nogil API so that Cython programs can use channels with ease and without the need to care about lifetime management and low-level details. An overview of the Cython/nogil channels API is in δ(README.rst) and δ(_golang.pxd).

- Turn Python channels into a tiny wrapper around chan<PyObject>.

Implementation note:

- the gevent case needs special care because greenlet, which gevent uses, swaps the coroutine stack from the C stack to the heap on coroutine park, and replaces that space on the C stack with the stack of the activated coroutine copied back from the heap. This way, if an object on g's stack is accessed while g is parked, it would be memory of another g's stack. The channels implementation explicitly cares about this issue so that

      stack -> * channel send,  or
      * -> stack channel receive

  work correctly.

  It should be noted that the greenlet approach, which it inherits from stackless, is not only a bit tricky, but also comes with overhead (stack <-> heap copy), and prevents a coroutine from migrating from one OS thread to another OS thread, as that would change the addresses of on-stack things for that coroutine. As the latter property prevents using multiple CPUs even if the program / runtime are prepared to work without GIL, it would be more logical to change gevent/greenlet to use a separate stack for each coroutine. That would remove the stack <-> heap copy and the need for special care in the channels implementation for stack - stack sends. Such an approach should be possible to implement with e.g. swapcontext or a similar mechanism, and a proof of concept of such work wrapped into a greenlet-compatible API exists[3]. It would be good if at some point there would be a chance to explore such an approach in the Pygolang context.

  [3] https://github.com/python-greenlet/greenlet/issues/113#issuecomment-264529838 and below

Just this patch brings in the following speedup at Python level:

(on i7@2.6GHz)

thread runtime:

name             old time/op  new time/op  delta
go               20.0µs ± 1%  15.6µs ± 1%  -21.84%  (p=0.000 n=10+10)
chan             9.37µs ± 4%  2.89µs ± 6%  -69.12%  (p=0.000 n=10+10)
select           20.2µs ± 4%  3.4µs ± 5%   -83.20%  (p=0.000 n=8+10)
def              58.0ns ± 0%  60.0ns ± 0%  +3.45%   (p=0.000 n=8+10)
func_def         43.8µs ± 1%  43.9µs ± 1%  ~        (p=0.796 n=10+10)
call             62.4ns ± 1%  63.5ns ± 1%  +1.76%   (p=0.001 n=10+10)
func_call        1.06µs ± 1%  1.05µs ± 1%  -0.63%   (p=0.002 n=10+10)
try_finally      136ns ± 0%   137ns ± 0%   +0.74%   (p=0.000 n=9+10)
defer            2.28µs ± 1%  2.33µs ± 1%  +2.34%   (p=0.000 n=10+10)
workgroup_empty  48.2µs ± 1%  34.1µs ± 2%  -29.18%  (p=0.000 n=9+10)
workgroup_raise  58.9µs ± 1%  45.5µs ± 1%  -22.74%  (p=0.000 n=10+10)

gevent runtime:

name             old time/op  new time/op  delta
go               24.7µs ± 1%  15.9µs ± 1%  -35.72%  (p=0.000 n=9+9)
chan             11.6µs ± 1%  7.3µs ± 1%   -36.74%  (p=0.000 n=10+10)
select           22.5µs ± 1%  10.4µs ± 1%  -53.73%  (p=0.000 n=10+10)
def              55.0ns ± 0%  55.0ns ± 0%  ~        (all equal)
func_def         43.6µs ± 1%  43.6µs ± 1%  ~        (p=0.684 n=10+10)
call             63.0ns ± 0%  64.0ns ± 0%  +1.59%   (p=0.000 n=10+10)
func_call        1.06µs ± 1%  1.07µs ± 1%  +0.45%   (p=0.045 n=10+9)
try_finally      135ns ± 0%   137ns ± 0%   +1.48%   (p=0.000 n=10+10)
defer            2.31µs ± 1%  2.33µs ± 1%  +0.89%   (p=0.000 n=10+10)
workgroup_empty  70.2µs ± 0%  55.8µs ± 0%  -20.63%  (p=0.000 n=10+10)
workgroup_raise  90.3µs ± 0%  70.9µs ± 1%  -21.51%  (p=0.000 n=9+10)

The whole Cython/nogil work - starting from 8fa3c15b (Start using Cython and providing Cython/nogil API) to this patch - brings in the following speedup at Python level:

(on i7@2.6GHz)

thread runtime:

name             old time/op  new time/op  delta
go               92.9µs ± 1%  15.6µs ± 1%  -83.16%  (p=0.000 n=10+10)
chan             13.9µs ± 1%  2.9µs ± 6%   -79.14%  (p=0.000 n=10+10)
select           29.7µs ± 6%  3.4µs ± 5%   -88.55%  (p=0.000 n=10+10)
def              57.0ns ± 0%  60.0ns ± 0%  +5.26%   (p=0.000 n=10+10)
func_def         44.0µs ± 1%  43.9µs ± 1%  ~        (p=0.055 n=10+10)
call             63.5ns ± 1%  63.5ns ± 1%  ~        (p=1.000 n=10+10)
func_call        1.06µs ± 0%  1.05µs ± 1%  -1.31%   (p=0.000 n=10+10)
try_finally      139ns ± 0%   137ns ± 0%   -1.44%   (p=0.000 n=10+10)
defer            2.36µs ± 1%  2.33µs ± 1%  -1.26%   (p=0.000 n=10+10)
workgroup_empty  98.4µs ± 1%  34.1µs ± 2%  -65.32%  (p=0.000 n=10+10)
workgroup_raise  135µs ± 1%   46µs ± 1%    -66.35%  (p=0.000 n=10+10)

gevent runtime:

name             old time/op  new time/op  delta
go               68.8µs ± 1%  15.9µs ± 1%  -76.91%  (p=0.000 n=10+9)
chan             14.8µs ± 1%  7.3µs ± 1%   -50.67%  (p=0.000 n=10+10)
select           32.0µs ± 0%  10.4µs ± 1%  -67.57%  (p=0.000 n=10+10)
def              58.0ns ± 0%  55.0ns ± 0%  -5.17%   (p=0.000 n=10+10)
func_def         43.9µs ± 1%  43.6µs ± 1%  -0.53%   (p=0.035 n=10+10)
call             63.5ns ± 1%  64.0ns ± 0%  +0.79%   (p=0.033 n=10+10)
func_call        1.08µs ± 1%  1.07µs ± 1%  -1.74%   (p=0.000 n=10+9)
try_finally      142ns ± 0%   137ns ± 0%   -3.52%   (p=0.000 n=10+10)
defer            2.32µs ± 1%  2.33µs ± 1%  +0.71%   (p=0.005 n=10+10)
workgroup_empty  90.3µs ± 0%  55.8µs ± 0%  -38.26%  (p=0.000 n=10+10)
workgroup_raise  108µs ± 1%   71µs ± 1%    -34.64%  (p=0.000 n=10+10)

This patch is the final patch in the series to reach the goal of providing channels that could be used in Cython/nogil code.

The Cython/nogil channels work is dedicated to the memory of Вера Павловна Супрун[4].

[4] https://navytux.spb.ru/memory/%D0%A2%D1%91%D1%82%D1%8F%20%D0%92%D0%B5%D1%80%D0%B0.pdf#page=3
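
For illustration, at the C++ level the resulting API can be used roughly as follows (a hedged usage sketch based on the overview mentioned above; see libgolang.h for the authoritative interface):

    using namespace golang;

    chan<int> ch = makechan<int>(1);   // buffered channel of int with capacity 1
    go([ch]() {                        // spawn a task via the selected runtime
        ch.send(42);
    });
    int v = ch.recv();                 // v == 42
    // chan<T> is a refcounted handle; the channel is freed when the last copy goes away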
-
Kirill Smelkov authored
Copy linux/list.h from wendelin.core, which copied it from util-linux.git, which took it from linux.git in its LGPL-licensed state. Here are the corresponding links for wendelin.core and util-linux:

- nexedi/wendelin.core@a0f940ad
- https://git.kernel.org/cgit/utils/util-linux/util-linux.git/tree/include/list.h?id=v2.25-165-g9138d6f

Linked lists will be used in the channels implementation in the next patch.
-
Kirill Smelkov authored
- Add semaphore alloc/free/acquire/release functionality to the libgolang runtime;
- Implement semaphores for thread and gevent runtimes.

  * The thread runtime uses PyThread_allocate_lock/PyThread_free_lock + PyThread_acquire_lock/PyThread_release_lock, which, if used carefully, do not depend on GIL and on e.g. POSIX are tiny wrappers around sem_init(process-private) + sem_post/sem_wait(*).
  * The gevent runtime uses gevent's Semaphore in Pyx mode.

- Add Sema and Mutex classes that use semaphores provided by a runtime in a RAII style.
- Add with_lock(mu) that mimics `with mu` in Python.

Sema and Mutex will be used in the channels implementation in the followup patch.

(*) during late testing a bug was found in CPython2 and PyPy semaphore implementations on Darwin (technically speaking on POSIX with _POSIX_SEMAPHORES undefined). Quoting the patch:

    FIXME On Darwin, even though this is considered as POSIX, Python uses
    mutex+condition variable to implement its lock, and, as of 20190828, the Py2.7
    implementation, even though a similar issue was fixed for Py3 in 2012, contains
    a synchronization bug: the condition is signalled after mutex unlock while the
    correct protocol is to signal the condition from under the mutex:

      https://github.com/python/cpython/blob/v2.7.16-127-g0229b56d8c0/Python/thread_pthread.h#L486-L506
      https://github.com/python/cpython/commit/187aa545165d (py3 fix)

    PyPy has the same bug for both pypy2 and pypy3:

      https://bitbucket.org/pypy/pypy/src/578667b3fef9/rpython/translator/c/src/thread_pthread.c#lines-443:465
      https://bitbucket.org/pypy/pypy/src/5b42890d48c3/rpython/translator/c/src/thread_pthread.c#lines-443:465

    This way when Pygolang is used with buggy Python/darwin, the bug leads to
    frequently appearing deadlocks, while e.g. CPython3/darwin works ok.

    -> TODO maintain our own semaphore code.

So eventually we'll have to push down and maintain our own semaphores, at least for the platforms we care about, not to be beaten by CPython runtime bugs.
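
A small hedged usage sketch of the added primitives (the user-side names are illustrative):

    golang::Mutex mu;
    int counter = 0;

    void inc() {
        with_lock(mu) {     // roughly `with mu:` from Python - acquire, run the block, release
            counter++;
        }
    }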
-
Kirill Smelkov authored
Move `ch = py{send,recv}.__self__` to right after the case __class__ check. For now both versions - old and new - work, but when we move the channel implementation to C, the new version will be required in order to access the pychan C-level attribute earlier than where ch is currently initialized.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
For clarity, to denote that things work at Python level:

    casev -> pycasev
    case  -> pycase
    recv  -> pyrecv
    send  -> pysend

The channel object is still denoted as `ch` to reduce noise for when the chan IO code will be moved into libgolang. `ch` will be renamed to `pych` after that.
-
Kirill Smelkov authored
- put the logic to test-tweak what happens inside _blockforever into the context manager pypanicWhenBlocked;
- place this manager in pyx code, where it can later be changed to tweak _blockforever at C level.
-
Kirill Smelkov authored
For clarity, to distinguish whether an object is a Python-level channel or (later) a C-level channel.
-
Kirill Smelkov authored
Change `from golang import chan` to `from golang cimport pychan`; add type annotations where pychan is used. Using pychan at C level will be needed when the test code needs to access C-level pychan attributes.
-
Kirill Smelkov authored
We will need to add C-level attributes to pychan and this requires it to become cdef class. The class is exported because at least _golang_test.pyx will also need to have access to those attributes. If we just do `class pychan` -> `cdef class pychan` e.g. the following starts to break: 1.venv/local/lib/python2.7/site-packages/py/_path/local.py:701: in pyimport __import__(modname) golang/__init__.py:174: in <module> from ._golang import \ golang/_golang.pyx:455: in init golang._golang _pychan_send = _pychan_send.__func__ E AttributeError: 'method_descriptor' object has no attribute '__func__' and golang/_golang.pyx:513: in golang._golang.pyselect if im_class(recv) is not pychan: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ f = <built-in method recv of golang._golang.pychan object at 0x7f7055e57cc8> def im_class(f): > return f.im_class E AttributeError: 'builtin_function_or_method' object has no attribute 'im_class' golang/_pycompat.py:28: AttributeError This is probably because for `cdef class` methods Cython does not emulate full method bindings the same way as Python does. Anyway we can check which method is passed to pyselect by chanop.__name__ or by inspecting PyCFunction directly. And not having method binding wrapper should only remove a bit of overhead. So we are ok with reworking send/recv chanop detection, and since this way im_class provided by golang._pycompat becomes unused, it is also removed. The timings are probably within noise: (on i7@2.6GHz) thread runtime: name old time/op new time/op delta go 21.7µs ± 1% 20.0µs ± 1% -7.60% (p=0.000 n=10+10) chan 9.91µs ± 4% 9.37µs ± 4% -5.39% (p=0.000 n=10+10) select 19.2µs ± 4% 20.2µs ± 4% +5.62% (p=0.001 n=9+8) def 58.0ns ± 0% 58.0ns ± 0% ~ (all equal) func_def 44.4µs ± 0% 43.8µs ± 1% -1.22% (p=0.000 n=10+10) call 63.0ns ± 0% 62.4ns ± 1% -0.95% (p=0.011 n=10+10) func_call 1.05µs ± 1% 1.06µs ± 1% ~ (p=0.059 n=10+10) try_finally 135ns ± 0% 136ns ± 0% +0.74% (p=0.000 n=10+9) defer 2.36µs ± 1% 2.28µs ± 1% -3.59% (p=0.000 n=10+10) workgroup_empty 49.0µs ± 1% 48.2µs ± 1% -1.63% (p=0.000 n=10+9) workgroup_raise 62.6µs ± 1% 58.9µs ± 1% -5.96% (p=0.000 n=10+10) gevent runtime: name old time/op new time/op delta go 21.7µs ± 1% 20.5µs ± 1% -5.33% (p=0.000 n=10+9) chan 9.91µs ± 4% 9.72µs ± 5% ~ (p=0.190 n=10+10) select 19.2µs ± 4% 19.5µs ±14% ~ (p=0.968 n=9+10) def 58.0ns ± 0% 58.0ns ± 0% ~ (all equal) func_def 44.4µs ± 0% 45.4µs ± 1% +2.23% (p=0.000 n=10+10) call 63.0ns ± 0% 64.0ns ± 0% +1.59% (p=0.000 n=10+10) func_call 1.05µs ± 1% 1.06µs ± 0% +0.65% (p=0.002 n=10+10) try_finally 135ns ± 0% 137ns ± 0% +1.48% (p=0.000 n=10+10) defer 2.36µs ± 1% 2.38µs ± 1% +0.72% (p=0.006 n=10+10) workgroup_empty 49.0µs ± 1% 48.2µs ± 1% -1.65% (p=0.000 n=10+10) workgroup_raise 62.6µs ± 1% 60.3µs ± 1% -3.69% (p=0.000 n=10+10)
-
Kirill Smelkov authored
For consistency, to denote that these functions work at Python level:

    len_sendq   -> pylen_sendq
    len_recvq   -> pylen_recvq
    waitBlocked -> pywaitBlocked
-
Kirill Smelkov authored
Plain test code movement, with s/panic/pypanic/ since in golang.pyx panic is already there and means a semantically different thing. We need this code to live in the pyx world, because when the channels implementation is moved to C, these utilities will need to be adjusted in a way that is not possible from Python.
-
Kirill Smelkov authored
Denote what a method returns via `# -> ...` suffix.
-
Kirill Smelkov authored
Use `ch` instead of `self` for pychan methods. This aligns with how (py)chan objects are currently denoted in (py)select and will reduce the difference when the chan code is moved into libgolang. We are sticking to the Go convention here. After the channels implementation is moved into libgolang, the remaining bits in golang.pyx will be changed to refer to pychan objects as pych for clarity.
-
Kirill Smelkov authored
To denote that these functions/classes work at Python level:

    chan    -> pychan
    select  -> pyselect
    default -> pydefault
    nilchan -> pynilchan
-
Kirill Smelkov authored
Plain code movement with just s/panic/pypanic/, as in golang.pyx panic is already there and means a semantically different thing. The moved code, even though it lives in golang.pyx, is still Python code and requires the Python runtime and GIL. We'll be splitting the channels implementation into the nogil world in the following patches.

Just the plain movement to Cython brings the following speedup:

(on i7@2.6GHz)

thread runtime:

name             old time/op  new time/op  delta
go               26.6µs ± 1%  21.7µs ± 1%  -18.54%  (p=0.000 n=10+10)
chan             13.7µs ± 1%  9.9µs ± 4%   -27.80%  (p=0.000 n=10+10)
select           29.3µs ± 2%  19.2µs ± 4%  -34.65%  (p=0.000 n=9+9)
def              55.0ns ± 0%  58.0ns ± 0%  +5.45%   (p=0.000 n=10+10)
func_def         44.0µs ± 1%  44.4µs ± 0%  +0.72%   (p=0.002 n=10+10)
call             64.0ns ± 0%  63.0ns ± 0%  -1.56%   (p=0.002 n=8+10)
func_call        1.09µs ± 1%  1.05µs ± 1%  -2.96%   (p=0.000 n=10+10)
try_finally      139ns ± 2%   135ns ± 0%   -2.60%   (p=0.000 n=10+10)
defer            2.36µs ± 1%  2.36µs ± 1%  ~        (p=0.617 n=10+10)
workgroup_empty  58.1µs ± 1%  49.0µs ± 1%  -15.61%  (p=0.000 n=10+10)
workgroup_raise  72.7µs ± 1%  62.6µs ± 1%  -13.88%  (p=0.000 n=10+10)

gevent runtime:

name             old time/op  new time/op  delta
go               28.6µs ± 0%  25.4µs ± 0%  -11.20%  (p=0.000 n=8+9)
chan             15.8µs ± 1%  12.2µs ± 1%  -22.62%  (p=0.000 n=10+10)
select           33.1µs ± 1%  23.3µs ± 2%  -29.60%  (p=0.000 n=10+10)
def              55.0ns ± 0%  56.0ns ± 0%  +1.82%   (p=0.000 n=10+10)
func_def         44.4µs ± 2%  43.0µs ± 1%  -3.06%   (p=0.000 n=10+9)
call             64.0ns ± 2%  69.0ns ± 0%  +7.81%   (p=0.000 n=10+10)
func_call        1.06µs ± 0%  1.06µs ± 1%  ~        (p=0.913 n=8+9)
try_finally      136ns ± 0%   139ns ± 0%   +2.21%   (p=0.000 n=9+10)
defer            2.29µs ± 1%  2.38µs ± 2%  +3.58%   (p=0.000 n=10+10)
workgroup_empty  73.8µs ± 1%  70.5µs ± 1%  -4.48%   (p=0.000 n=10+10)
workgroup_raise  94.1µs ± 0%  90.6µs ± 0%  -3.69%   (p=0.000 n=10+10)
-
Kirill Smelkov authored
- Add go functionality to the libgolang runtime;
- Implement go for thread and gevent runtimes.

  * The thread runtime uses PyThread_start_new_thread which, if used carefully, does not depend on the Python GIL and on e.g. POSIX reduces to a tiny wrapper around pthread_create.
  * The gevent runtime uses gevent's Greenlet in Pyx mode. This turns gevent into a build-time dependency.

- Provide low-level _taskgo in the C client API;
- Provide a type-safe C++-level go wrapper over _taskgo;
- Switch golang.go from a py implementation into a Pyx wrapper over the Pyx/nogil API.

This is the first patch that adds Pyx/C++/C-level unit tests and hooks them into golang_test.py.

This patch brings the following speedup to the Python-level interface:

(on i7@2.6GHz)

thread runtime:

name             old time/op  new time/op  delta
go               93.0µs ± 1%  26.6µs ± 1%  -71.41%  (p=0.000 n=10+10)
chan             13.6µs ± 2%  13.7µs ± 1%  ~        (p=0.280 n=10+10)
select           29.9µs ± 4%  29.3µs ± 2%  -1.89%   (p=0.017 n=10+9)
def              61.0ns ± 0%  55.0ns ± 0%  -9.84%   (p=0.000 n=10+10)
func_def         43.8µs ± 1%  44.0µs ± 1%  +0.66%   (p=0.006 n=10+10)
call             62.5ns ± 1%  64.0ns ± 0%  +2.40%   (p=0.000 n=10+8)
func_call        1.06µs ± 1%  1.09µs ± 1%  +2.72%   (p=0.000 n=10+10)
try_finally      137ns ± 0%   139ns ± 2%   +1.17%   (p=0.033 n=10+10)
defer            2.34µs ± 1%  2.36µs ± 1%  +0.84%   (p=0.015 n=10+10)
workgroup_empty  96.1µs ± 1%  58.1µs ± 1%  -39.55%  (p=0.000 n=10+10)
workgroup_raise  135µs ± 1%   73µs ± 1%    -45.97%  (p=0.000 n=10+10)

gevent runtime:

name             old time/op  new time/op  delta
go               68.8µs ± 1%  28.6µs ± 0%  -58.47%  (p=0.000 n=10+8)
chan             14.8µs ± 1%  15.8µs ± 1%  +6.19%   (p=0.000 n=10+10)
select           32.0µs ± 0%  33.1µs ± 1%  +3.25%   (p=0.000 n=10+10)
def              58.0ns ± 0%  55.0ns ± 0%  -5.17%   (p=0.000 n=10+10)
func_def         43.9µs ± 1%  44.4µs ± 2%  +1.20%   (p=0.007 n=10+10)
call             63.5ns ± 1%  64.0ns ± 2%  ~        (p=0.307 n=10+10)
func_call        1.08µs ± 1%  1.06µs ± 0%  -2.55%   (p=0.000 n=10+8)
try_finally      142ns ± 0%   136ns ± 0%   -4.23%   (p=0.000 n=10+9)
defer            2.32µs ± 1%  2.29µs ± 1%  -0.96%   (p=0.000 n=10+10)
workgroup_empty  90.3µs ± 0%  73.8µs ± 1%  -18.29%  (p=0.000 n=10+10)
workgroup_raise  108µs ± 1%   94µs ± 0%    -13.29%  (p=0.000 n=10+10)

(small changes are probably within noise; "go" and "workgroup_*" should be representative)
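
For illustration, the low-level C entry point can be used roughly like this (a hedged sketch; the exact _taskgo signature should be checked against libgolang.h):

    // function run in the spawned task; _taskgo passes arg through unchanged
    static void _worker(void *arg) {
        int *x = (int *)arg;
        *x = 42;
    }

    static int result;
    void spawn_example(void) {
        _taskgo(_worker, &result);   // spawned via whichever runtime (thread or gevent) is active
    }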
-