- 08 Oct, 2020 1 commit
-
-
Kirill Smelkov authored
Here is, e.g., a corresponding summary: ===== 12 failed, 14 passed, 4 xfailed, 12 error in 19.58 seconds =====
-
- 07 Oct, 2020 1 commit
-
-
Kirill Smelkov authored
When checking logs on a testnode, it is very useful to know which kernel version is running inside (wendelin.core uses FUSE and depends on having a patched FUSE module for kernels < 5.2), which CPU is there, etc. The format of the printed output follows the Go benchmarking format and coincides with the start of `neotest info-local` (see footers on figures in http://navytux.spb.ru/~kirr/neo.html and here: https://lab.nexedi.com/kirr/neo/blob/1f26678b/go/neo/t/neotest#L585-874)
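An illustrative sketch of printing such information as `key: value` lines in the spirit of the Go benchmarking format (this is not the actual nxdtest code; the field names are assumptions):

    import platform, multiprocessing

    def print_node_info():
        # kernel version matters because wendelin.core needs a patched
        # FUSE module for kernels < 5.2
        print("kernel:", platform.release())
        print("cpu:", platform.processor() or "unknown")
        print("ncpu:", multiprocessing.cpu_count())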
-
- 05 Oct, 2020 5 commits
-
-
Kirill Smelkov authored
- To list which tests are in there,
- To execute only selected tests.
These apply only to local mode. Very handy for debugging; see the illustrative sketch below.
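An illustrative sketch of the selection idea (the actual command-line interface is not shown in this log, so the details below are assumptions):

    def select(testcases, names):
        # testcases: list of (name, argv) pairs loaded from .nxdtest
        if not names:
            return testcases                    # default: run everything
        byname = dict(testcases)
        # only the requested testcases, in the requested order
        # (an unknown name raises KeyError)
        return [(n, byname[n]) for n in names]

    # e.g. select(loaded, ['thread']) would run only the `thread` testcase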
-
Kirill Smelkov authored
We have printed a test result summary line for each testcase run since 0153635b (Teach nxdtest to run tests locally). However, it is not present in the log when nxdtest runs under a master. -> Include summary lines everywhere for uniformity, for a reason similar to bd1333bb (log += title and argv for ran testcase).
-
Kirill Smelkov authored
This reverts commit 34e96b1d. The reason for the revert is that since bd1333bb (log += title and argv for ran testcase) we always emit details of the to-be-run command to the log at the beginning of a testcase run.
-
Kirill Smelkov authored
The previous patch started displaying the command and envadj in the log. However, only the command - without envadj - was reported to the master with the test result. -> Make it uniform: include envadj in the details everywhere.
-
Kirill Smelkov authored
When there are several or many testcases, it is hard to understand - by seeing just the log or console output - which part of the test suite was running. It also helps to see the exact command that was spawned. Example output for pygolang.

Before:

(neo) (z-dev) (g.env) kirr@deco:~/src/tools/go/pygolang$ nxdtest
============================= test session starts ==============================
platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
rootdir: /home/kirr/src/tools/go/pygolang
collected 112 items
golang/_gopath_test.py .. [ 1%]
golang/context_test.py .. [ 3%]
golang/cxx_test.py .. [ 5%]
golang/errors_test.py ........ [ 12%]
golang/fmt_test.py ... [ 15%]
golang/golang_test.py ................................................ [ 58%]
golang/io_test.py . [ 58%]
golang/strconv_test.py .. [ 60%]
golang/strings_test.py ..... [ 65%]
golang/sync_test.py ............. [ 76%]
golang/time_test.py ........ [ 83%]
golang/pyx/build_test.py ... [ 86%]
golang/pyx/runtime_test.py . [ 87%]
gpython/gpython_test.py ssssss.sssssss [100%]
==================== 99 passed, 13 skipped in 5.42 seconds =====================
ok thread 5.656s # 112t 0e 0f 13s
============================= test session starts ==============================
platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
rootdir: /home/kirr/src/tools/go/pygolang
collected 112 items
golang/_gopath_test.py .. [ 1%]
golang/context_test.py .. [ 3%]
golang/cxx_test.py .. [ 5%]
golang/errors_test.py ........ [ 12%]
golang/fmt_test.py ... [ 15%]
golang/golang_test.py ................................................ [ 58%]
golang/io_test.py . [ 58%]
golang/strconv_test.py .. [ 60%]
golang/strings_test.py ..... [ 65%]
golang/sync_test.py ............. [ 76%]
golang/time_test.py ........ [ 83%]
golang/pyx/build_test.py ... [ 86%]
golang/pyx/runtime_test.py . [ 87%]
gpython/gpython_test.py .............. [100%]
========================= 112 passed in 17.35 seconds ==========================
ok gevent 17.768s # 112t 0e 0f 0s

After:

(neo) (z-dev) (g.env) kirr@deco:~/src/tools/go/pygolang$ nxdtest
>>> thread
$ python -m pytest
============================= test session starts ==============================
platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
rootdir: /home/kirr/src/tools/go/pygolang
collected 112 items
golang/_gopath_test.py .. [ 1%]
golang/context_test.py .. [ 3%]
golang/cxx_test.py .. [ 5%]
golang/errors_test.py ........ [ 12%]
golang/fmt_test.py ... [ 15%]
golang/golang_test.py ................................................ [ 58%]
golang/io_test.py . [ 58%]
golang/strconv_test.py .. [ 60%]
golang/strings_test.py ..... [ 65%]
golang/sync_test.py ............. [ 76%]
golang/time_test.py ........ [ 83%]
golang/pyx/build_test.py ... [ 86%]
golang/pyx/runtime_test.py . [ 87%]
gpython/gpython_test.py ssssss.sssssss [100%]
==================== 99 passed, 13 skipped in 5.27 seconds =====================
ok thread 5.508s # 112t 0e 0f 13s
>>> gevent
$ gpython -m pytest
============================= test session starts ==============================
platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
rootdir: /home/kirr/src/tools/go/pygolang
collected 112 items
golang/_gopath_test.py .. [ 1%]
golang/context_test.py .. [ 3%]
golang/cxx_test.py .. [ 5%]
golang/errors_test.py ........ [ 12%]
golang/fmt_test.py ... [ 15%]
golang/golang_test.py ................................................ [ 58%]
golang/io_test.py . [ 58%]
golang/strconv_test.py .. [ 60%]
golang/strings_test.py ..... [ 65%]
golang/sync_test.py ............. [ 76%]
golang/time_test.py ........ [ 83%]
golang/pyx/build_test.py ... [ 86%]
golang/pyx/runtime_test.py . [ 87%]
gpython/gpython_test.py .............. [100%]
========================= 112 passed in 17.32 seconds ==========================
ok gevent 17.729s # 112t 0e 0f 0s
-
- 01 Oct, 2020 1 commit
-
-
Kirill Smelkov authored
A failure is an assertion failure; an error is any other error. Source: https://docs.python.org/3/library/unittest.html#organizing-test-code -> "If the test fails, an exception will be raised ... identify the test case as a failure. Any other exceptions will be treated as errors". While searching the ERP5 code for failure_count to clarify its meaning, I've also discovered expected_failures and unexpected_successes (*), so they are also added to the table. (*) see e.g. https://lab.nexedi.com/nexedi/erp5/blob/1e31c091/product/ERP5Type/tests/ERP5TypeTestSuite.py#L6-8
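For illustration (not part of the change itself), here is how the outcomes differ with Python's unittest:

    import unittest

    class Demo(unittest.TestCase):
        def test_failure(self):
            self.assertEqual(1, 2)      # AssertionError -> counted as a failure

        def test_error(self):
            raise ValueError("boom")    # any other exception -> counted as an error

        @unittest.expectedFailure
        def test_expected_failure(self):
            self.assertTrue(False)      # counted as an expected failure
                                        # (an unexpected success if it passes)

    if __name__ == '__main__':
        unittest.main()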
-
- 29 Sep, 2020 9 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Instead of just a traceback saying e.g. "No such command" without indicating which command was being spawned, stderr will now include what was tried to be run. This makes diagnostics easier.
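A minimal sketch of the idea (illustrative only, not the actual nxdtest code): report the command together with the spawn error before re-raising.

    import subprocess, sys

    def spawn(argv):
        try:
            return subprocess.Popen(argv)
        except OSError as e:
            # include what we tried to run, not only the bare exception
            sys.stderr.write("error: failed to spawn %r: %s\n" % (argv, e))
            raise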
-
Kirill Smelkov authored
Many projects that use pytest for testing won't need to redefine it over and over again in their .nxdtest file.
-
Kirill Smelkov authored
It is very handy, for debugging both a .nxdtest file and the nxdtest tool itself, to be able to run tests locally during development instead of (or in addition to) waiting for test results from test nodes. -> Teach nxdtest to run and report tests locally if no --master_url was provided. The idea is taken from runTestSuite.in from NEO, Cython+, Buildout and Rina:
https://lab.nexedi.com/nexedi/slapos/blob/3b14e028/software/neoppod/runTestSuite.in#L95-105
https://lab.nexedi.com/nexedi/slapos/blob/3b14e028/software/neoppod/runTestSuite.in#L55-75
https://lab.nexedi.com/nexedi/slapos/blob/3b14e028/software/neoppod/stress-testing/runTestSuite.in#L17-37
https://lab.nexedi.com/nexedi/slapos/blob/3b14e028/software/cython-test/runTestSuite.in#L38-58
https://lab.nexedi.com/nexedi/slapos/blob/3b14e028/software/buildout-testing/runTestSuite.in#L21-39
https://lab.nexedi.com/nexedi/slapos/blob/3b14e028/software/build-rina/runTestSuite.in#L29-49
Instead of copying the code over and over again, let's stick to the UNIX approach and keep it in one tool only that (hopefully) does its job well.
-
Kirill Smelkov authored
Previously, if --master_url was not provided, the program would crash with an UnboundLocalError for status, since in that case the list of tests was empty. Fix it by moving the status initialization up one level. See the next patch, where we add meaning to an nxdtest run with no --master_url set.
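A minimal illustration of this class of bug (not the actual nxdtest code): a variable assigned only inside a loop becomes unbound when the loop body never runs.

    def run(t):
        return True     # stand-in for actually running a test

    def report(tests):
        status = "UNKNOWN"          # the fix: initialize before the loop
        for t in tests:
            status = "OK" if run(t) else "FAIL"
        # without the initialization above, report([]) would raise
        # UnboundLocalError on the next line
        print("final status:", status)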
-
Kirill Smelkov authored
It is frequently necessary to set some environment variables without completely resetting everything else in the environment, because that everything else can carry settings such as PATH, PYTHONPATH, RPATH, etc. that the program needs in order to run at all. See pygolang@b938af8b for a recent example.
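A minimal sketch of the idea, with `envadj` named after this commit's wording (the helper itself is illustrative, not the actual nxdtest API): merge the adjustments on top of the inherited environment instead of replacing it.

    import os, subprocess

    def run_with_envadj(argv, envadj=None):
        env = os.environ.copy()     # keep PATH, PYTHONPATH, RPATH, ...
        env.update(envadj or {})    # ... and only adjust what was requested
        return subprocess.call(argv, env=env)

    # e.g. force unbuffered output for one testcase only
    run_with_envadj(['python', '-m', 'pytest'], envadj={'PYTHONUNBUFFERED': 'y'})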
-
Kirill Smelkov authored
After the previous patch that summary function is no longer used here; it is specific to NEO/py and so should be defined in that project's .nxdtest file. lab.nexedi.com/kirr/neo will be adjusted correspondingly.
-
Kirill Smelkov authored
Make the tool generic: load the list of testcases to run from a .nxdtest file and process them correspondingly. See the added top-level documentation for details.
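For illustration, a .nxdtest file could look roughly like this (a sketch for a pygolang-style project with thread and gevent runs; the exact declaration helper is an assumption based on names used in these commit messages):

    # .nxdtest (illustrative sketch)
    TestCase('thread', ['python',  '-m', 'pytest'])
    TestCase('gevent', ['gpython', '-m', 'pytest'])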
-
Kirill Smelkov authored
Amends e9cc7e07 (Add minimal packaging setup)
-
- 28 Sep, 2020 6 commits
-
-
Kirill Smelkov authored
So that nxdtest can be installed as an egg via pip or buildout.
-
Kirill Smelkov authored
We will use it as the base for nxdtest. Original history: neo@51b18490 neo@f67c147d neo@0e28fbb8
-
Kirill Smelkov authored
-
Kirill Smelkov authored
We send output from the tested process to the master. We also print it to stdout/stderr so that it appears in the testnode logs. However, until now the whole output was first read, and only then emitted to the log as a whole, which made it impossible to follow current test progress by watching the tail of the testnode log. Fix it by implementing the teeing manually. Some draft history related to this patch:
lab.nexedi.com/kirr/neo/commit/aa370ca3 fixup! X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
lab.nexedi.com/kirr/neo/commit/096550b1 fixup! X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
lab.nexedi.com/kirr/neo/commit/63956f43 fixup! X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
lab.nexedi.com/kirr/neo/commit/b9819d0e X neotest/runTestSuite: Tee tested process stdout,stderr to testnode logs incrementally
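A minimal sketch of manual teeing (illustrative, Python 3, not the actual nxdtest implementation): read the child's output incrementally, forward each piece to our own stdout right away, and keep a copy to report to the master at the end.

    import subprocess, sys

    def tee_run(argv):
        # merge the child's stdout and stderr into one pipe
        p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        captured = []
        while True:
            line = p.stdout.readline()      # incremental: one line at a time
            if not line:
                break
            sys.stdout.buffer.write(line)   # appears in the testnode log immediately
            sys.stdout.flush()
            captured.append(line)           # keep a copy for reporting to the master
        p.wait()
        return p.returncode, b''.join(captured)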
-
Kirill Smelkov authored
The wrapper runs `neotest bench-local` in the neotest SlapOS instance: https://lab.nexedi.com/nexedi/slapos/blob/ab8705f4/software/neotest/instance.cfg.in Extracted from slapos!282
-
Kirill Smelkov authored
Nxdtest is conceived as a tox-like tool to run tests for a project under the Nexedi testing infrastructure.
-