1. 18 Jun, 2021 2 commits
    • Kirill Smelkov
      tests: Teach test driver to pass testWendelinCore when run with wendelin.core 2 · 89e6c2b4
      Kirill Smelkov authored
      This is a follow-up to 5796a17a (core_test: Add test to make sure that
      wendelin.core basically works; !1429).
      
      In that commit it was said that testWendelinCore
      
          "currently passes with wendelin.core 1, which is the default.
           It also passes as live test with wendelin.core 2.
           However with wendelin.core 2 it currently fails when run on testnodes
           ...
           because we need to amend ERP5 test driver
      
           1. to run tests on a real storage instead of in-RAM Mapping Storage(*), and
           2. to spawn WCFS server for each such storage."
      
      This patch addresses the latter problem, so that testWendelinCore can
      run under the testnode infrastructure.
      
      @rafael and @jerome suggested that we can force a test to be run on a
      real storage via `runUnitTest --load --save` or via `--activity_node=n`.
      
      @rafael also suggested not to change the testing driver in general, but
      to make step-by-step progress and first tag each test that uses
      wendelin.core with an option. Let's go that way for now:
      runUnitTest/custom_zodb are taught to launch a WCFS server if
      wendelin.core usage is requested and the software is built with
      wendelin.core 2.
      
      With both changes combined, testWendelinCore should now pass when run
      on a testnode with either wendelin.core 1 or wendelin.core 2.
      
      This patch is based on a draft patch by @rafael: rafael/erp5@14e3a777.
      
      This patch also relies on the recent wendelin.core 2 wcfs.py rework,
      which exposed functionality to start a WCFS server and to further
      control it: kirr/wendelin.core@5bfa8cf8.
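      The gating described above - start a WCFS server only when a test opts
      in to wendelin.core and the software is built with wendelin.core 2 -
      can be sketched as follows. This is an illustrative sketch, not the
      actual runUnitTest code; names like `need_wcfs`, `with_wendelin_core`
      and `wendelin_core_version` are made up for the example.

      ```python
      # Sketch of the decision runUnitTest/custom_zodb makes before a run.
      # All names here are illustrative, not the real option/variable names.

      def need_wcfs(with_wendelin_core, wendelin_core_version):
          """Return True if a WCFS server should be started for this run.

          with_wendelin_core:    the test run was tagged as using
                                 wendelin.core (opt-in option).
          wendelin_core_version: major version the software was built with.
          """
          # wendelin.core 1 works purely in-process; only version 2 needs
          # the external WCFS filesystem server to be spawned.
          return with_wendelin_core and wendelin_core_version >= 2
      ```

      With such a predicate the driver would start WCFS before the tests and
      stop it after them only when both conditions hold, leaving runs with
      wendelin.core 1 (the default) completely unaffected.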
      
      /cc @tomo, @romain, @jerome, @seb
    • Kirill Smelkov
      runUnitTest: Close ZODB DB before shutdown · 0f65560f
      Kirill Smelkov authored
      We already close the Storage on shutdown (see the nearby Storage.close
      call in the patch), but the DB handle was not closed. With classic ZODB
      this does not really matter in practice: not closing the DB only leaks
      RAM, and the program terminates soon anyway.
      
      However with wendelin.core 2 things are different: in addition to the
      ZODB storage server, there is also the synthetic WCFS filesystem, from
      which files are opened and memory-mapped. In runUnitTest we start both
      the ZODB and WCFS servers, and in the end we also shut them both down.
      The filesystem server can be cleanly unmounted and shut down only when
      no opened files remain on it.
      
      The wendelin.core 2 client works by complementing each ZODB connection
      (zconn) with a WCFS-level connection (wconn) to the WCFS server. The
      wendelin.core 2 client logic keeps zconn and wconn in sync: whenever
      zconn adjusts its view of the database, so does wconn; and whenever
      zconn is garbage-collected, the corresponding wconn is closed to
      release resources and close the corresponding files opened on WCFS. In
      addition to garbage collection, wconn is also closed when zconn.db
      - the ZODB DB handle via which zconn was created - is closed. This is
      needed to reliably trigger freeing of WCFS resources, because even
      after the DB is closed, zconn can stay alive forever while referenced
      from some Python object - e.g. a frame, a traceback, or something else.
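      The lifecycle coupling described above can be illustrated with a small
      stand-alone sketch. All names here are stand-ins, not the actual
      wendelin.core client code: a wconn is closed either when its zconn is
      garbage-collected, or when the DB via which the zconn was created is
      closed.

      ```python
      # Illustrative sketch of the zconn/wconn lifecycle coupling; all
      # classes are stand-ins, not the actual wendelin.core client code.
      import gc
      import weakref

      class WConn:
          """Stand-in for a WCFS-level connection."""
          def __init__(self):
              self.closed = False
          def close(self):
              self.closed = True

      class ZConn:
          """Stand-in for a ZODB Connection."""

      class DB:
          """Stand-in for a ZODB DB handle."""
          def __init__(self):
              self._wconns = []

          def open(self):
              zconn, wconn = ZConn(), WConn()
              # close wconn when zconn is garbage-collected ...
              weakref.finalize(zconn, wconn.close)
              # ... and remember it, so that DB.close can close it too.
              self._wconns.append(wconn)
              return zconn, wconn

          def close(self):
              # even if some zconn stays alive (e.g. referenced from a
              # traceback), closing the DB reliably frees WCFS resources.
              for wconn in self._wconns:
                  wconn.close()

      # path 1: wconn is closed when its zconn is garbage-collected
      zconn1, wconn1 = DB().open()
      del zconn1
      gc.collect()

      # path 2: wconn is closed when the DB is closed, even though zconn2
      # is still referenced from somewhere
      db = DB()
      zconn2, wconn2 = db.open()
      db.close()
      ```

      Path 2 is exactly what makes an explicit DB.close a reliable way to
      release all files opened on WCFS before unmounting it.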
      
      The latter scenario actually happens during a runUnitTest run. As a
      result, the WCFS server cannot be unmounted and stopped cleanly:
      
          $ ./runUnitTest --load --save --with_wendelin_core -v erp5_core_test:testWendelinCore
          ...
          test (erp5.component.test.erp5_version.testWendelinCore.TestWendelinCoreBasic) ... ok
      
          ----------------------------------------------------------------------
          Ran 1 test in 0.105s
      
          OK
          F0618 19:05:46.359140   35468 wcfs/client/wcfs.cpp:486] CRITICAL: pinner: wcfs /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2: wlink25: recvReq: link is down
          F0618 19:05:46.359173   35468 wcfs/client/wcfs.cpp:487] CRITICAL: wcfs server will likely kill us soon.
          CRITICAL: pinner: wcfs /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2: wlink25: recvReq: link is down
          CRITICAL: wcfs server will likely kill us soon.
          Traceback (most recent call last):
            File ".../bin/runUnitTest", line 312, in <module>
              sys.exit(runUnitTest.main())
            File ".../parts/erp5/Products/ERP5Type/tests/runUnitTest.py", line 926, in main
              run_only=run_only,
            File ".../parts/erp5/Products/ERP5Type/tests/runUnitTest.py", line 709, in runUnitTestList
              wcfs_server.stop()
            ...
            File ".../parts/wendelin.core/wcfs/__init__.py", line 543, in _fuse_unmount
              raise RuntimeError("%s\n(more details logged)" % emsg)
          RuntimeError: fuse_unmount /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2: failed: fusermount: failed to unmount /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2: Device or resource busy
          (more details logged)
      
          # logs
          2021-06-18 19:05:45.978 INFO root wcfs: unmount/stop wcfs pid32981 @ /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2
          2021-06-18 19:05:46.068 WARNING root fuse_unmount /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2: failed: fusermount: failed to unmount /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2: Device or resource busy
          2021-06-18 19:05:46.068 WARNING root # lsof /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2
          2021-06-18 19:05:46.357 WARNING root COMMAND     PID       USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
          runUnitTe 32175 slapuser34   24r   REG   0,48      111    4 /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2/.wcfs/zurl
          runUnitTe 32175 slapuser34   25u   REG   0,48        0    7 /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2/head/watch
          runUnitTe 32175 slapuser34   26r   REG   0,48  2097152    9 /dev/shm/wcfs/b53b61099c740b452b383db6df6dce4ad6d23ba2/head/bigfile/00000000000078b4
      
          2021-06-18 19:05:46.358 WARNING root -> kill -TERM wcfs.go ...
          2021-06-18 19:05:46.358 WARNING root -> abort FUSE connection ...
      
      I've debugged things a bit: even with
      kirr/ZODB@bbd03b3a the ZODB connection stays alive,
      being referenced from some frame objects.
      
      -> Fix this problem by explicitly closing the ZODB DB near tests shutdown, before the call to wcfs_server.stop.
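      The resulting shutdown ordering can be sketched as follows. This is a
      simplified illustration with stand-in classes, not the actual
      runUnitTest code: the DB must be closed first, so that no files remain
      open on the filesystem when WCFS is asked to stop.

      ```python
      # Simplified illustration of the shutdown ordering; WCFSServer and DB
      # are stand-ins, not the real wendelin.core / ZODB classes.

      class WCFSServer:
          """Stand-in for the WCFS filesystem server."""
          def __init__(self):
              self.open_files = 0
              self.stopped = False
          def stop(self):
              # a real fuse unmount fails with "Device or resource busy"
              # while any file on the filesystem is still open
              if self.open_files != 0:
                  raise RuntimeError("fuse_unmount: Device or resource busy")
              self.stopped = True

      class DB:
          """Stand-in for a ZODB DB whose wconns keep files open on WCFS."""
          def __init__(self, wcfs_server):
              self._wcfs = wcfs_server
              self._wcfs.open_files += 1
          def close(self):
              # closing the DB closes wconns, which closes their WCFS files
              self._wcfs.open_files -= 1

      wcfs_server = WCFSServer()
      db = DB(wcfs_server)

      # shutdown sequence after the fix: close the DB first, then stop WCFS
      db.close()
      wcfs_server.stop()
      ```

      With the order reversed - stopping WCFS while the DB is still open -
      the stand-in `stop` raises, mirroring the fuse_unmount "Device or
      resource busy" failure shown in the traceback above.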
  2. 03 Jun, 2021 1 commit
    • Kirill Smelkov
      core_test: Add test to make sure that wendelin.core basically works · 49fc0de2
      Kirill Smelkov authored
      Wendelin.core is now an integral part of ERP5 (see [1,2]), but nothing
      inside ERP5 currently uses it. And even though wendelin.core has its
      own test suite, integration problems are always possible.
      
      -> Add a test to erp5_core_test that minimally makes sure basic
      wendelin.core operations work.
      
      This test currently passes with wendelin.core 1, which is the default.
      It also passes as live test with wendelin.core 2.
      However with wendelin.core 2 it currently fails on testnodes, e.g.
      
          ValueError: ZODB.MappingStorage.MappingStorage is in-RAM storage
      	in-RAM storages are not supported:
      	a zurl pointing to in-RAM storage in one process would lead to
      	another in-RAM storage in WCFS process.
      
      and
      
          RuntimeError: wcfs: join file:///srv/slapgrid/slappart8/srv/testnode/djk/test_suite/unit_test.2/var/Data.fs: server not started
          (https://nexedijs.erp5.net/#/test_result_module/20210530-92EF3124/102)
      
      because we need to amend ERP5 test driver
      
      1) to run tests on a real storage instead of in-RAM Mapping Storage(*), and
      2) to spawn WCFS server for each such storage.
      
      I will try to address those points in a later patch.
      
      In the meantime there should be no reason not to merge this, because we
      do not use wendelin.core 2 yet, and solving "1" and "2" first is a
      precondition to begin such usage.
      
      /cc @rafael, @tomo, @seb, @jerome, @romain, @vpelletier, @Tyagov, @klaus, @jp
      
      (*) Combining Zope and WCFS working together requires data to be on a real
          storage, not on in-RAM MappingStorage inside Zope's Python process.
      
      [1] slapos@7f877621
      [2] slapos!874 (comment 122339)
  3. 01 Mar, 2021 1 commit
    • Jérome Perrin
      accounting: fix grouping option of GL when running in deferred mode · bcbf71e8
      Jérome Perrin authored
      The omit_grouping_reference key was not set in the selection
      parameters, so Node_getAccountingTransactionList could not find it in
      the selection when running in deferred mode.
      
      In non-deferred mode it worked, because the selection is populated with
      values from the request and it was the same request; the deferred style
      uses activities, so the requests are different.
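      The request-vs-selection difference can be illustrated with a minimal
      sketch. All names besides omit_grouping_reference and
      Node_getAccountingTransactionList are made up for the example; the real
      ERP5 selection machinery is more involved.

      ```python
      # Illustrative sketch: parameters kept only on the request vanish when
      # the work runs later from an activity with a fresh request, while
      # parameters stored in the persistent selection survive.

      selection = {}   # persists across requests (per user and selection name)

      def render_report(request, deferred):
          # the fix: record the option in the selection, not only on the
          # request, so deferred activities can see it too
          selection['omit_grouping_reference'] = request['omit_grouping_reference']
          if deferred:
              # deferred mode: an activity runs later with a *different*,
              # empty request
              return build_listing(request={})
          return build_listing(request=request)

      def build_listing(request):
          # Node_getAccountingTransactionList-like code reads the option from
          # the selection, which works in both modes; reading it from the
          # request would fail in deferred mode, where the key is absent
          return selection.get('omit_grouping_reference')
      ```

      Reading from the selection makes the option reach the listing code in
      both the deferred and the non-deferred case.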
  4. 05 Feb, 2021 5 commits
  5. 29 Jan, 2021 31 commits