1. 16 Apr, 2024 1 commit
    • MDEV-33861 main.query_cache fails with embedded after enabling WITH_PROTECT_STATEMENT_MEMROOT · 50998a6c
      Oleksandr Byelkin authored
      Synopsis: if a SELECT returns its answer from the Query Cache, it is not really executed.
      
      The assertion
        DBUG_ASSERT((mem_root->flags & ROOT_FLAG_READ_ONLY) == 0);
      fires because, when the query cache is on and the same query is run by
      different stored routines, the following use case can take place:
      First, let's say that the bodies of the routines used by the test case
      are the same and contain only the query 'SELECT * FROM t1';
        call p1() -- a result set is stored in the query cache for further use.
        call p2() -- the same query is run against the table t1, which
                     results in not running the actual query but using its
                     cached result. On finishing execution of this routine,
                     its memory root is marked read only since every SP
                     instruction that this routine contains has been executed.
        INSERT INTO t1 VALUES (1); -- forces subsequent invalidation of the
                     query cache
        call p2() -- querying the table t1 results in an assertion failure,
                     since its execution would require allocation on a memory
                     root that has already been marked read only
      
      The root cause of the assertion firing is that the memory root of the
      stored routine 'p2' was marked read only although the actual execution
      of the query it contains had not been performed.
      
      To fix the issue, mark an SP instruction as not yet run when its
      execution doesn't result in real query processing and the result set
      is taken from the query cache instead.
      
      Note that this issue affects only a server built in debug mode AND
      with the protect-statement-memory-root feature turned on. It doesn't
      affect a server built in release mode.
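      As a toy illustration of the fix, the behaviour can be modelled in Python (the class and method names below are invented for illustration and are not the server's actual C++ code):

```python
class MemRoot:
    """Toy model of a statement memory root protected by
    ROOT_FLAG_READ_ONLY."""
    def __init__(self):
        self.read_only = False

    def alloc(self):
        # Models DBUG_ASSERT((mem_root->flags & ROOT_FLAG_READ_ONLY) == 0)
        assert not self.read_only, "allocation on read-only memroot"


class Instruction:
    """A single SP instruction, e.g. 'SELECT * FROM t1'."""
    def __init__(self):
        self.mem_root = MemRoot()
        self.executed = False

    def run(self, query_cache_hit):
        if query_cache_hit:
            # The fix: a query-cache hit does NOT count as execution,
            # so the instruction stays "not yet run".
            return "cached result"
        self.mem_root.alloc()
        self.executed = True
        return "real result"


class Routine:
    """A stored routine containing one instruction (like p2)."""
    def __init__(self):
        self.instr = Instruction()

    def call(self, query_cache_hit):
        result = self.instr.run(query_cache_hit)
        # Freeze the memroot only once the instruction has really run.
        if self.instr.executed:
            self.instr.mem_root.read_only = True
        return result
```

      With this behaviour, a call served from the query cache leaves the memory root writable, so a later real execution does not trip the assertion.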
  2. 14 Apr, 2024 3 commits
  3. 12 Apr, 2024 1 commit
    • MDEV-33802 Weird read view after ROLLBACK of other transactions. · d7fc975c
      Vlad Lesin authored
      If some unique key fields are nullable, a unique index can contain
      several records with the same key fields, provided at least one key
      field is NULL, since NULL != NULL.
      
      When a transaction is resumed after waiting on a record with at least
      one key field equal to NULL, and the record stored in the persistent
      cursor has been deleted, the persistent cursor can be restored to a
      record whose key fields are all equal to the stored ones but with at
      least one field equal to NULL. Such a record was wrongly treated as
      having the same unique key as the record stored in the persistent
      cursor, which is incorrect since NULL != NULL.
      
      The fix is to check whether at least one unique field is NULL in the
      restored persistent cursor position and, if so, not to treat the
      record as having the same unique key as the stored record.
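      The check can be sketched in Python (a toy model only, not the InnoDB code; records are tuples of unique-key field values, with None standing for SQL NULL):

```python
def same_unique_key(stored_fields, restored_fields):
    """Toy model of the fixed comparison after restoring a persistent
    cursor position: a record with any NULL unique-key field never
    matches the stored key, since NULL != NULL in SQL."""
    if any(field is None for field in restored_fields):
        return False
    return stored_fields == restored_fields
```

      For example, two records that both read (1, NULL) do not match under this rule, while (1, 2) matches (1, 2).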
      
      dict_index_t::nulls_equal was removed, as it was initially developed
      for "intrinsic tables", which never existed in MariaDB, and there is
      no code that would set it to "true".
      
      Reviewed by Marko Mäkelä.
  4. 10 Apr, 2024 2 commits
    • MDEV-33512 Corrupted table after IMPORT TABLESPACE and restart · d8249775
      Marko Mäkelä authored
      In commit d74d9596 (MDEV-18543)
      there was an error that would cause the hidden metadata record
      to be deleted, and therefore cause the table to appear corrupted
      when it is reloaded into the data dictionary cache.
      
      PageConverter::update_records(): Do not delete the metadata record,
      but do validate it.
      
      RecIterator::open(): Make the API more similar to 10.6, to simplify
      merges.
    • MDEV-33661 MENT-1591 Keep spider in memory until exit in ASAN builds · 662bb176
      Yuchen Pei authored
      Same as MDEV-29579. For some reason, libodbc does not clean up
      properly if unloaded too early by the dlclose() of spider. So we add
      UNIQUE symbols to spider so that spider is not unloaded by dlclose().
      
      This change, however, uncovers some hidden problems in the spider
      codebase, for which we move the initialisation of some spider global
      variables into the initialisation of spider itself.
      
      Spider has some global variables. Their initialisation should be done
      during the initialisation of spider itself; otherwise, if spider were
      re-initialised without these symbols being unloaded, the values could
      be inconsistent and cause issues.
      
      One such issue is caused by the variables
      spider_mon_table_cache_version and spider_mon_table_cache_version_req.
      They are used for resetting the spider monitoring table cache and have
      initial values of 0 and 1 respectively. The invariant
      spider_mon_table_cache_version_req >= spider_mon_table_cache_version
      always holds. When the inequality is strict, the cache is reset,
      spider_mon_table_cache_version is brought up to
      spider_mon_table_cache_version_req, and the cache is searched for
      matching table_name, db_name and link_idx. When the two are equal,
      no reset happens and the cache is searched directly.
      
      When spider is re-inited without resetting
      spider_mon_table_cache_version and spider_mon_table_cache_version_req,
      the two values remain equal from the previous cache reset, yet the
      cache itself was emptied during the previous spider deinit. No reset
      is triggered, which results in an unexpected HA_ERR_KEY_NOT_FOUND.
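      The versioning logic and the failure mode can be sketched in Python (a toy model; the attribute names mirror the spider variables described above but this is not the actual code):

```python
class MonTableCache:
    """Toy model of the spider monitoring-table cache versioning."""
    def __init__(self):
        # Proper (re)initialisation: version_req > version forces a
        # rebuild on the first lookup.
        self.version = 0       # spider_mon_table_cache_version
        self.version_req = 1   # spider_mon_table_cache_version_req
        self.entries = {}

    def lookup(self, table_name, db_name, link_idx):
        if self.version < self.version_req:
            # Strict inequality: reset (rebuild) the cache first.
            self.entries = self._rebuild()
            self.version = self.version_req
        # Equal versions: search the cache directly, no reset.
        return self.entries.get((table_name, db_name, link_idx))

    def _rebuild(self):
        # Stand-in for scanning the monitoring tables.
        return {("t1", "test", 0): "mon entry"}
```

      If a deinit empties the entries while the counters keep their old, equal values, the next lookup skips the rebuild and misses, which models the unexpected HA_ERR_KEY_NOT_FOUND; re-initialising the counters together with spider restores the strict inequality and the rebuild.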
      
      An alternative way to fix this issue would be to call the spider UDF
      spider_flush_mon_cache_table(), which increments
      spider_mon_table_cache_version_req, thus making the inequality strict
      again. However, there's no reason for spider to initialise these
      global variables on dlopen() rather than on spider init, which is
      cleaner and "purer".
      
      To reproduce this issue, simply revert the changes involving the two
      variables and then run:
      
      mtr --no-reorder spider.ha{,_part}
  5. 08 Apr, 2024 8 commits
  6. 05 Apr, 2024 1 commit
  7. 04 Apr, 2024 1 commit
    • MDEV-21102: Server crashes in JOIN_CACHE::write_record_data upon EXPLAIN with subqueries · 8cc36fb7
      Sergei Petrunia authored
      JOIN_CACHE has a light-weight initialization mode that's targeted at
      EXPLAINs. In that mode, JOIN_CACHE objects are not able to execute.
      
      Light-weight mode was used whenever the statement was an EXPLAIN.
      However, an EXPLAIN can execute subqueries, provided they enumerate
      fewer than @@expensive_subquery_limit rows.
      
      Make sure we use the light-weight initialization mode only when the
      select is more expensive than @@expensive_subquery_limit.
      
      Also add an assert into JOIN_CACHE::put_record() which prevents its use
      if it was initialized for EXPLAIN only.
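      The corrected decision can be modelled in Python (an illustrative sketch, not the server's code; the function name and parameters are invented):

```python
def use_explain_only_init(is_explain, estimated_rows,
                          expensive_subquery_limit):
    """Toy model of the fix: the light-weight (EXPLAIN-only) JOIN_CACHE
    initialization is safe only when the subquery is too expensive to be
    executed during EXPLAIN; a cheap subquery may actually run and
    therefore needs a fully initialized JOIN_CACHE."""
    return is_explain and estimated_rows > expensive_subquery_limit
```

      A cheap subquery under EXPLAIN thus gets the full initialization it needs to execute, avoiding the crash in JOIN_CACHE::write_record_data.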
  8. 28 Mar, 2024 2 commits
  9. 27 Mar, 2024 6 commits
  10. 26 Mar, 2024 6 commits
  11. 25 Mar, 2024 1 commit
  12. 21 Mar, 2024 2 commits
  13. 19 Mar, 2024 4 commits
  14. 18 Mar, 2024 2 commits