1. 12 Dec, 2021 1 commit
    • MDEV-27014 InnoDB fails to restore page 0 from the doublewrite buffer · be5990d0
      Thirunarayanan Balathandayuthapani authored
        This patch reverts the commit cab8f4b5.
      InnoDB fails to restore page 0 from the doublewrite buffer when the
      tablespace is deferred. In that case, InnoDB does not find an
      INIT_PAGE redo log record for page 0, which leads to a failure.
      InnoDB should recover page 0 of a deferred tablespace from the
      doublewrite buffer before applying the redo log records.
      
      Added deferred_dblwr() to restore page 0 of a deferred tablespace
      from the doublewrite buffer.
  2. 10 Dec, 2021 1 commit
  3. 07 Dec, 2021 8 commits
  4. 06 Dec, 2021 4 commits
  5. 04 Dec, 2021 2 commits
  6. 03 Dec, 2021 6 commits
  7. 02 Dec, 2021 1 commit
  8. 01 Dec, 2021 3 commits
    • MDEV-12353 fixup: Correct a comment · f9726ced
      Marko Mäkelä authored
      Several EXTENDED type records have already been implemented.
    • MDEV-27014 InnoDB fails to restore page 0 from the doublewrite buffer · e0e24b18
      Thirunarayanan Balathandayuthapani authored
      - Replaced the pointer parameter of validate_for_recovery() with uint32_t
    • Fix bad galera tests · d7b37de9
      Jan Lindström authored
      * galera_kill_applier : we should make sure that the node has the
        correct number of wsrep appliers
      * galera_bad_wsrep_new_cluster : this test restarts both nodes,
        so it works poorly under mtr. Make sure it is run alone
      * galera_update_limit : make sure we have a PK when needed
      * galera_as_slave_replay : bf abort was not consistent
      * galera_unicode_pk : add wait_conditions so that all nodes
        are part of the cluster and the DDL and INSERT have replicated
        before any further operations are done
      * galera_bf_abort_at_after_statement : add wait_conditions
        to make sure all nodes are part of the cluster and that the DDL
        and INSERT have replicated. Make sure we reset DEBUG_SYNC
  9. 30 Nov, 2021 3 commits
    • MDEV-27088: Server crash on ARM (WMM architecture) due to missing barriers in lf-hash (10.5) · 4ebac0fc
      Martin Beck authored
      MariaDB server crashes on ARM (a weak memory model architecture)
      while concurrently executing l_find to load node->key and
      add_to_purgatory to store node->key = NULL. l_find then passes key
      (which is NULL) to a comparison function.
      
      The specific problem is the out-of-order execution that happens on a
      weak memory model architecture. Two essential reorderings are possible,
      which need to be prevented.
      
      a) As l_find has no barriers in place between the optimistic read of
      the key field lf_hash.cc#L117 and the verification of link lf_hash.cc#L124,
      the processor can reorder the load to happen after the while-loop.
      
      In that case, a concurrent thread executing add_to_purgatory on the same
      node can be scheduled to store NULL at the key field lf_alloc-pin.c#L253
      before key is loaded in l_find.
      
      b) A node is marked as deleted by a CAS in l_delete lf_hash.cc#L247 and
      taken off the list with a subsequent CAS lf_hash.cc#L252. Only if both
      CASes succeed is the key field written by add_to_purgatory. However,
      due to a missing barrier, the relaxed store of key lf_alloc-pin.c#L253
      can be moved ahead of the two CAS operations, making the NULL key
      stored by add_to_purgatory visible to all threads still operating on
      the list. As the node is not marked as deleted yet, the same error
      occurs in l_find.
      
      This change makes three accesses atomic:
      
      * optimistic read of key in l_find lf_hash.cc#L117
      * read of link for verification lf_hash.cc#L124
      * write of key in add_to_purgatory lf_alloc-pin.c#L253
      
      Reviewers: Sergei Vojtovich, Sergei Golubchik
      
      Fixes: MDEV-23510 / d30c1331a18d875e553f3fcf544997e4f33fb943
    • MDEV-27088: lf unit tests - cycles insufficient · 17802165
      Martin Beck authored
      Per the bug report, the cycle count was woefully insufficient to
      detect any implementation error.
    • MDEV-27088: Server crash on ARM (WMM architecture) due to missing barriers in lf-hash · 4e0dcf10
      Martin Beck authored
      MariaDB server crashes on ARM (a weak memory model architecture)
      while concurrently executing l_find to load node->key and
      add_to_purgatory to store node->key = NULL. l_find then passes key
      (which is NULL) to a comparison function.
      
      The specific problem is the out-of-order execution that happens on a
      weak memory model architecture. Two essential reorderings are possible,
      which need to be prevented.
      
      a) As l_find has no barriers in place between the optimistic read of
      the key field lf_hash.cc#L117 and the verification of link lf_hash.cc#L124,
      the processor can reorder the load to happen after the while-loop.
      
      In that case, a concurrent thread executing add_to_purgatory on the same
      node can be scheduled to store NULL at the key field lf_alloc-pin.c#L253
      before key is loaded in l_find.
      
      b) A node is marked as deleted by a CAS in l_delete lf_hash.cc#L247 and
      taken off the list with a subsequent CAS lf_hash.cc#L252. Only if both
      CASes succeed is the key field written by add_to_purgatory. However,
      due to a missing barrier, the relaxed store of key lf_alloc-pin.c#L253
      can be moved ahead of the two CAS operations, making the NULL key
      stored by add_to_purgatory visible to all threads still operating on
      the list. As the node is not marked as deleted yet, the same error
      occurs in l_find.
      
      This change makes three accesses atomic:
      
      * optimistic read of key in l_find lf_hash.cc#L117
      * read of link for verification lf_hash.cc#L124
      * write of key in add_to_purgatory lf_alloc-pin.c#L253
      
      Reviewers: Sergei Vojtovich, Sergei Golubchik
      
      Fixes: MDEV-23510 / d30c1331a18d875e553f3fcf544997e4f33fb943
  10. 29 Nov, 2021 8 commits
  11. 28 Nov, 2021 2 commits
  12. 26 Nov, 2021 1 commit
    • MDEV-26553 NOT IN subquery construct crashing 10.1 and up · ac963142
      Igor Babaev authored
      This bug was introduced by commit be00e279, which was applied for
      the task MDEV-6480 and allowed the removal of top-level disjuncts
      from WHERE conditions when the range optimizer evaluated them as
      always equal to FALSE/NULL.
      If such disjuncts are removed, the WHERE condition may become an AND
      formula, and if this formula contains multiple equalities, the field
      JOIN::item_equal must be updated to refer to these equalities. The
      above-mentioned commit forgot to do this, which could cause crashes
      for some queries.
      
      Approved by Oleksandr Byelkin <sanja@mariadb.com>