1. 21 May, 2024 1 commit
  2. 17 May, 2024 1 commit
  3. 09 May, 2024 4 commits
  4. 16 Apr, 2024 1 commit
  5. 22 Mar, 2024 6 commits
  6. 22 Feb, 2024 8 commits
  7. 18 Dec, 2023 4 commits
  8. 08 Nov, 2023 1 commit
    • master: fix crash when aborting early e.g. when failing to open listening socket · 9a3898e4
      Julien Muchembled authored
      Pre-mortem data:
      Traceback (most recent call last):
      File "neo/master/app.py", line 172, in run
      File "neo/master/app.py", line 180, in _run
      self.listening_conn = ListeningConnection(self, None, self.server)
      File "neo/lib/connection.py", line 298, in __init__
      File "neo/lib/connector.py", line 133, in makeListeningConnection
      self._error('listen', e)
      File "neo/lib/connector.py", line 93, in _error
      raise ConnectorException
      Traceback (most recent call last):
        File "neomaster", line 50, in <module>
        File "neo/scripts/neomaster.py", line 31, in main
        File "neo/master/app.py", line 175, in run
        File "neo/master/app.py", line 167, in log
          if self.pt is not None:
      AttributeError: 'Application' object has no attribute 'pt'
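      The failure mode is generic: the error handler (log) reads an
      attribute (pt) that the aborted startup never got to assign. A
      minimal sketch of the kind of guard that avoids it, reusing the
      names from the traceback (the committed fix may differ):

        class Application(object):
            def __init__(self, config):
                # Assign every attribute that error paths may read
                # *before* doing anything that can raise, such as
                # opening the listening socket in _run().
                self.pt = None

            def log(self):
                # Safe even when startup aborts early: pt is None
                # rather than unset.
                if self.pt is not None:
                    self.pt.log()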
  9. 16 Oct, 2023 5 commits
    • Bump protocol version · 0fc95175
      Julien Muchembled authored
    • Reimplement pack in a scalable way, partial pack & approval/reject of pack orders · 4c3b6c4d
      Julien Muchembled authored
      This is still pack without garbage collection, and without deleting
      any transaction metadata ('trans' table).
      Partial pack means that the client can pass a list of oids: only
      these oids will be packed. No API is defined yet at IStorage level.
      Storage nodes pack in background, independently from other storage
      nodes, partition by partition, and calling IStorage.pack() returns
      immediately (though internally, NEO does have a mechanism to wait
      until it's done, which can be required for some ZODB unit tests).
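      As a usage illustration of that contract (assuming an already-open
      NEO storage; referencesf is the standard second argument of
      IStorage.pack, presumably unused here since NEO does no garbage
      collection yet):

        import time
        from ZODB.serialize import referencesf

        # 'storage' is an already-open NEO storage (hypothetical setup).
        # With NEO this only records a pack order and returns at once;
        # storage nodes then pack partition by partition in background.
        storage.pack(time.time() - 86400, referencesf)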
      This new implementation also introduces the concept of signing pack
      orders. The idea is that calling IStorage.pack() only records a pack
      order in the database, which can be reviewed/approved/rejected
      using a UI that is yet to be written. For the moment, pack orders
      are
      automatically approved (by the master).
      Internally, pack orders are stored as extra metadata of a transaction.
      IOW, IStorage.pack() implies the commit of an (empty) transaction.
      IStorage.pack() can be called without waiting for the previous one
      to be completed. Pack orders are processed in the same order as
      they are requested (see the sketch after this list):
      - an unsigned pack order blocks the processing of any newer pack order;
      - rejected pack orders are ignored.
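      A sketch of that ordering rule (hypothetical names; in NEO the
      queue lives in the storage nodes and approval comes from the
      master):

        def process_pack_orders(orders, pack):
            # 'orders' is oldest-first; 'status' is one of 'unsigned',
            # 'approved', 'rejected'; 'pack' does the actual packing.
            for order in orders:
                if order.status == 'unsigned':
                    break      # an unsigned order blocks anything newer
                if order.status == 'rejected':
                    continue   # rejected orders are ignored
                pack(order)    # approved: actually pack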
      Approving a pack order also triggers pack on backup clusters.
      That's the simplest way to have everything consistent.
      Maybe later we could identify scenarios where it would be ok
      to unsign pack orders during asynchronous replication.
      The feature to check replicas is marked as experimental because it is
      not aware of differences that can happen during pack operations.
      About concurrency within the storage node, a first implementation
      extended what was done to delete partitions in background (see
      previous commit). But here, the job can't easily be split into
      slices that are never too big:
      - it's simpler to never split the processing of an oid but this can
        freeze the application for a long time when packing an oid that was
        modified many times (e.g. 30 min for an oid with 20 million
        historical records);
      - a later attempt to let an oid be processed in several passes
        turned out inefficient, maybe due to a limit in RocksDB (packing
        the oid in the above example would take days, during which NEO
        is significantly slowed down).
      So background database jobs were moved to a separate thread, using a
      separate connection to the underlying database. This is obviously
      only useful for the MySQL backend. In order to share as much code as
      possible between backends, SQLite also does the work in a separate
      thread, but shares the main connection instead of opening a second
      one (so such a backend would not be suited to the above example).
      But deleting raw data with a secondary connection is not possible
      without fsyncing too often (or running into transaction isolation
      issues): these
      deletions are deferred by recording them in a new table, which is
      processed later with the main connection. This is not so bad because
      the actual deletion of raw data is usually more efficient this way
      (more sequential IO).
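      A condensed sketch of that deferral, with hypothetical table and
      column names and a DB-API-style connection (SQLite-style
      placeholders for brevity):

        def defer_deletion(db2, data_id):
            # Background thread, secondary connection: only record what
            # should go away; no raw-data DELETE here, so no extra fsync
            # and no isolation conflict with the main connection.
            db2.execute("INSERT INTO todel VALUES (?)", (data_id,))

        def flush_deferred(db, limit=1000):
            # Main thread, main connection: delete raw data in batches;
            # grouped deletions tend to be more sequential IO.
            ids = [x for x, in db.execute(
                "SELECT data_id FROM todel LIMIT ?", (limit,))]
            if ids:
                args = ",".join("?" * len(ids))
                db.execute("DELETE FROM data WHERE id IN (%s)" % args, ids)
                db.execute("DELETE FROM todel WHERE data_id IN (%s)" % args, ids)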
      Here are a few numbers:
      - without load: 10h45 (12h for the first reimplementation)
      - with a load that normally takes 6h58:
        - load: 7h33 (so 8.4% slower)
        - pack: 15h36 (+4h51)
      As explained above, the pack of a partition is split into 2 steps:
      - the longest one (here 78% without load) should have negligible
        performance impact on the application because the work is done
        in a separate thread with a secondary connection, and with a
        mechanism to minimize GIL impact by prioritizing the main thread;
      - the shortest one (22%) processes the deferred deletions, with
        even lower priority than replication: it tries to split the
        work into tasks that take ~10ms (sketched below).
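      The ~10ms splitting can be pictured as a time-budgeted loop (a
      minimal sketch, not the actual NEO scheduler; 'flush_one' is a
      hypothetical callable that processes one deferred deletion and
      returns false when nothing is left):

        import time

        def process_some(flush_one, budget=.01):
            # Run deferred deletions until ~10ms have elapsed, then give
            # the hand back so that replication and client requests,
            # which have higher priority, are not delayed.
            deadline = time.time() + budget
            while time.time() < deadline:
                if not flush_one():
                    return False   # nothing left to do
            return True            # more work: reschedule at low priority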
  10. 11 Oct, 2023 1 commit
  11. 04 Apr, 2023 8 commits