  1. 20 Jan, 2023 12 commits
    • MDEV-29199 Unique hash key is ignored upon INSERT ... SELECT into non-empty MyISAM table · fc292f42
      Sergei Golubchik authored
      disable the bulk insert optimization if long uniques are used, because
      they need to read the table (index_read) after every inserted row, and
      the bulk insert optimization might disable indexes.

      bulk insert is already disabled in other cases when there is a chance
      that the table will be read during the bulk insert.
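
      A hedged repro sketch of the scenario in the title (table and column
      names are made up; a UNIQUE key on a TEXT column is a long unique,
      i.e. backed by a hash):

        CREATE TABLE t1 (a TEXT, UNIQUE KEY (a)) ENGINE=MyISAM;
        INSERT INTO t1 VALUES ('x');      -- table is now non-empty
        CREATE TABLE t2 (a TEXT);
        INSERT INTO t2 VALUES ('x');
        INSERT INTO t1 SELECT a FROM t2;  -- must fail with a duplicate key
                                          -- error, not silently insert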
    • MDEV-27631 Assertion `global_status_var.global_memory_used == 0' failed in mysqld_exit · db50919f
      Sergei Golubchik authored
      plugin_vars_free_values() was walking only the plugin sysvars and thus
      did not free the memory of PLUGIN_VAR_NOSYSVAR plugin vars.
      
      * change it to walk all plugin vars
      * add the pluginname_ prefix to NOSYSVAR var names too,
        so that plugin_vars_free_values() would be able to find their
        bookmarks
    • Jan Lindström
    • MDEV-29294 Assertion `functype() == ((Item_cond *) new_item)->functype()'... · b2b9d916
      Oleg Smirnov authored
      MDEV-29294 Assertion `functype() == ((Item_cond *) new_item)->functype()' failed in Item_cond::remove_eq_conds on SELECT
      
      Item_singlerow_subselect may be converted to an Item_cond during
      optimization, so there is a possibility of constructing a nested
      Item_cond_and or Item_cond_or, which is not allowed (such
      conditions must be flattened).
      This commit checks whether this kind of optimization has been applied
      and flattens the condition if needed; one query shape that may reach
      this path is sketched below.
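
      A hedged sketch of such a shape (hypothetical tables; the exact query
      from the bug report may differ): a row-valued single-row subselect
      inside a conjunction can be rewritten by the optimizer into an AND of
      scalar comparisons, i.e. an Item_cond nested inside another Item_cond:

        SELECT * FROM t1
        WHERE (t1.a, t1.b) = (SELECT x, y FROM t2 LIMIT 1)
          AND t1.c > 0;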
    • MDEV-27653 long uniques don't work with unicode collations · eea9f2a1
      Alexander Barkov authored
      There are no source code changes in this commit!
      This is an empty follow-up commit for
        284ac6f2
      to comment what was done, as the patch itself did not have
      change comments.
      
      Problems solved in this patch:
      
      1. The function calc_hash_for_unique() erroneously took the string
      length into account, so strings that are equal in terms of the
      collation but have different lengths got different hash values.
      
      For example:
      - LATIN LETTER A             - 1 byte
      - LATIN LETTER A WITH ACUTE  - 2 bytes
      
      are equal in utf8_general_ci, but as their lengths
      are different, calc_hash_for_unique() returned
      different hash values.
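
      For illustration, the collation-level equality can be checked
      directly, and a hedged repro sketch of the resulting bug on an
      unpatched server follows (table name is made up; assumes a utf8
      connection character set; the UNIQUE key on a TEXT column is a long,
      hash-based unique):

        SELECT 'A' = 'Á' COLLATE utf8_general_ci;  -- returns 1: equal

        CREATE TABLE t1 (a TEXT COLLATE utf8_general_ci, UNIQUE KEY (a));
        INSERT INTO t1 VALUES ('A');
        INSERT INTO t1 VALUES ('Á');  -- equal under the collation, but the
                                      -- old hash differed, so the erroneous
                                      -- duplicate was accepted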
      
      2. calc_hash_for_unique() also erroneously used val_str()
      result to calculate hashes. This may not be correct for
      some data types, e.g. TIMESTAMP, as its string
      value depends on the session environment (e.g. @@time_zone).
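
      For example, the same stored TIMESTAMP value renders as different
      strings depending on @@time_zone (hypothetical table name):

        CREATE TABLE t (ts TIMESTAMP);
        SET time_zone = '+00:00';
        INSERT INTO t VALUES ('2023-01-20 12:00:00');
        SELECT ts FROM t;   -- 2023-01-20 12:00:00
        SET time_zone = '+03:00';
        SELECT ts FROM t;   -- 2023-01-20 15:00:00: same value, different
                            -- string, so val_str() is not a stable hash
                            -- source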
      
      Change summary:
      
      Instead of doing Item::val_str(), we should always call
      Field::hash() of the underlying Field. It properly
      handles both cases (equal strings with different
      lengths, as well as tricky data types like TIMESTAMP).
      
      Detailed change description:
      
      Non-functional changes (make the code cleaner):
      
      - Adding a helper class Hasher, to pass the hash parts
        nr1 and nr2 through function arguments more easily.
      - Splitting virtual Field::hash() into non-virtual
        wrapper Field::hash() and virtual Field::hash_not_null().
        This helps to get rid of duplicate code handling SQL NULL,
        as it was equal in all Field_xxx implementations.
      - Adding a new method THD::my_ok_with_recreate_info().
      
      Actual fix changes (make new tables work properly):
      
      - Adding a virtual method Item::hash_not_null()
        This helps to handle hashes on full fields (Item_field)
        and hashes on prefix fields (Item_func_left(Item_field))
        in a polymorphic way.
        Implementing overrides for Item_field and Item_func_left.
      
      - Rewriting Item_func_hash::val_int() to use Item::hash_not_null(),
        instead of the combination of val_str() and calc_hash_for_unique().
      
      Backward compatibility changes (make old tables work in the new server):
      
      - Adding a new class Item_func_hash_mariadb_100403.
        Moving the old version of Item_func_hash::val_int()
        into Item_func_hash_mariadb_100403::val_int().
        The old class Item_func_hash_mariadb_100403 is still needed,
        to open old tables before upgrade is done.
      
      - Adding TABLE_SHARE::old_long_hash_function() and
        handler::check_long_hash_compatibility() to test
        if a table is using an old hash function.
      
      - Adding a helper method TABLE_SHARE::make_long_hash_func()
        to instantiate either Item_func_hash_mariadb_100403 (for old
        not upgraded tables) or Item_func_hash (for new tables).
      
      Upgrade changes (make old tables upgrade in the new server properly):
      
      Upgrading an old table to a new hash can be done using either
      of these two statements:
      
        ALTER IGNORE TABLE t1 FORCE;
        REPAIR TABLE t1;
      
      !!! These statements find and filter out erroneous duplicates !!!
      The table after these statements will have fewer records
      if there were erroneous duplicates (such as A and A WITH ACUTE).
      
      The information about filtered out records is reported in both statements.
      
      - Adding a new class Recreate_info to return information
        about copied and duplicate rows from these functions:
        - mysql_alter_table()
        - mysql_recreate_table()
        - admin_recreate_table()
        This helps to print a warning during REPAIR:
      
      MariaDB [test]> repair table mdev27653_100422_text;
      +----------------------------+--------+----------+------------------------------------+
      | Table                      | Op     | Msg_type | Msg_text                           |
      +----------------------------+--------+----------+------------------------------------+
      | test.mdev27653_100422_text | repair | Warning  | Number of rows changed from 2 to 1 |
      | test.mdev27653_100422_text | repair | status   | OK                                 |
      +----------------------------+--------+----------+------------------------------------+
      2 rows in set (0.018 sec)
    • Daniele Sciascia · ae96e21c
    • MDEV-26541 Make UBSAN builds work with spider again. · 0253a2f4
      Yuchen Pei authored
      When built with UBSAN and trying to load the spider plugin, the
      hidden-visibility compile flag of mysqld causes ha_spider.so to be
      missing the symbol ha_partition. This commit fixes that, as well as
      some memcpy null-pointer issues when built with UBSAN.
      Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
    • Alexander Barkov
    • MDEV-29294 Assertion `functype() == ((Item_cond *) new_item)->functype()'... · afb5deb9
      Oleg Smirnov authored
      MDEV-29294 Assertion `functype() == ((Item_cond *) new_item)->functype()' failed in Item_cond::remove_eq_conds on SELECT
      
      Item_singlerow_subselect may be converted to an Item_cond during
      optimization, so there is a possibility of constructing a nested
      Item_cond_and or Item_cond_or, which is not allowed (such
      conditions must be flattened).
      This commit checks whether this kind of optimization has been applied
      and flattens the condition if needed.
    • MDEV-27653 long uniques don't work with unicode collations · c256998b
      Alexander Barkov authored
      There are no source code changes in this commit!
      This is an empty follow-up commit for
        284ac6f2
      to comment what was done, as the patch itself did not have
      change comments.
      
      Problems solved in this patch:
      
      1. The function calc_hash_for_unique() erroneously took the string
      length into account, so strings that are equal in terms of the
      collation but have different lengths got different hash values.
      
      For example:
      - LATIN LETTER A             - 1 byte
      - LATIN LETTER A WITH ACUTE  - 2 bytes
      
      are equal in utf8_general_ci, but as their lengths
      are different, calc_hash_for_unique() returned
      different hash values.
      
      2. calc_hash_for_unique() also erroneously used val_str()
      result to calculate hashes. This may not be correct for
      some data types, e.g. TIMESTAMP, as its string
      value depends on the session environment (e.g. @@time_zone).
      
      Change summary:
      
      Instead of doing Item::val_str(), we should always call
      Field::hash() of the underlying Field. It properly
      handles both cases (equal strings with different
      lengths, as well as tricky data types like TIMESTAMP).
      
      Detailed change description:
      
      Non-functional changes (make the code cleaner):
      
      - Adding a helper class Hasher, to pass the hash parts
        nr1 and nr2 through function arguments more easily.
      - Splitting virtual Field::hash() into non-virtual
        wrapper Field::hash() and virtual Field::hash_not_null().
        This helps to get rid of duplicate code handling SQL NULL,
        as it was equal in all Field_xxx implementations.
      - Adding a new method THD::my_ok_with_recreate_info().
      
      Actual fix changes (make new tables work properly):
      
      - Adding a virtual method Item::hash_not_null()
        This helps to handle hashes on full fields (Item_field)
        and hashes on prefix fields (Item_func_left(Item_field))
        in a polymorphic way.
        Implementing overrides for Item_field and Item_func_left.
      
      - Rewriting Item_func_hash::val_int() to use Item::hash_not_null(),
        instead of the combination of val_str() and calc_hash_for_unique().
      
      Backward compatibility changes (make old tables work in the new server):
      
      - Adding a new class Item_func_hash_mariadb_100403.
        Moving the old version of Item_func_hash::val_int()
        into Item_func_hash_mariadb_100403::val_int().
        The old class Item_func_hash_mariadb_100403 is still needed,
        to open old tables before upgrade is done.
      
      - Adding TABLE_SHARE::old_long_hash_function() and
        handler::check_long_hash_compatibility() to test
        if a table is using an old hash function.
      
      - Adding a helper method TABLE_SHARE::make_long_hash_func()
        to instantiate either Item_func_hash_mariadb_100403 (for old
        not upgraded tables) or Item_func_hash (for new tables).
      
      Upgrade changes (make old tables upgrade in the new server properly):
      
      Upgrading an old table to a new hash can be done using either
      of these two statements:
      
        ALTER IGNORE TABLE t1 FORCE;
        REPAIR TABLE t1;
      
      !!! These statements find and filter out erroneous duplicates !!!
      The table after these statements will have fewer records
      if there were erroneous duplicates (such as A and A WITH ACUTE).
      
      The information about filtered out records is reported in both statements.
      
      - Adding a new class Recreate_info to return information
        about copied and duplicate rows from these functions:
        - mysql_alter_table()
        - mysql_recreate_table()
        - admin_recreate_table()
        This helps to print a warning during REPAIR:
      
      MariaDB [test]> repair table mdev27653_100422_text;
      +----------------------------+--------+----------+------------------------------------+
      | Table                      | Op     | Msg_type | Msg_text                           |
      +----------------------------+--------+----------+------------------------------------+
      | test.mdev27653_100422_text | repair | Warning  | Number of rows changed from 2 to 1 |
      | test.mdev27653_100422_text | repair | status   | OK                                 |
      +----------------------------+--------+----------+------------------------------------+
      2 rows in set (0.018 sec)
    • Daniele Sciascia · c4f5128d
    • MDEV-26541 Make UBSAN builds work with spider again. · a68b9dd9
      Yuchen Pei authored
      When built with UBSAN and trying to load the spider plugin, the
      hidden-visibility compile flag of mysqld causes ha_spider.so to be
      missing the symbol ha_partition. This commit fixes that, as well as
      some memcpy null-pointer issues when built with UBSAN.
      Signed-off-by: Yuchen Pei <yuchen.pei@mariadb.com>
  2. 19 Jan, 2023 2 commits
  3. 17 Jan, 2023 3 commits
    • v5.5.4-stable · 9924466b
      Oleksandr Byelkin authored
    • Merge 10.3 into 10.4 · 2b3423c4
      Marko Mäkelä authored
    • MDEV-30422 Merge new release of InnoDB 5.7.41 to 10.3 · 489b5569
      Marko Mäkelä authored
      MySQL 5.7.41 includes one InnoDB change
      mysql/mysql-server@d2d6b2dd00f709bc528386009150d4bc726e25a0
      that seems to be applicable to MariaDB Server 10.3 and 10.4.
      Even though commit 5b9ee8d8
      seems to have fixed sporadic failures on our CI systems, it is
      theoretically possible that another race condition remained.
      
      buf_flush_page_cleaner_coordinator(): In the final loop,
      wait also for buf_get_n_pending_read_ios() to reach 0.
      In this way, if a secondary index leaf page was read into the
      buffer pool and ibuf_merge_or_delete_for_page() modified that
      page or some change buffer pages, the flush loop would execute
      until the buffer pool really is in a clean state.
      
      This potential data corruption bug does not affect MariaDB Server 10.5
      or later, thanks to commit b42294bc
      which removed change buffer merges that are not explicitly requested.
  4. 16 Jan, 2023 1 commit
  5. 15 Jan, 2023 1 commit
  6. 14 Jan, 2023 1 commit
  7. 13 Jan, 2023 5 commits
    • 10.4-MDEV-29684 Fixes for cluster wide write conflict resolving · 0ff7f33c
      sjaakola authored
      The rather recent thd_need_ordering_with() function did not take
      high-priority transactions' order into consideration. Changed this
      function to also compare transaction seqnos and favor the earlier
      transaction.
      Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
    • MDEV-29512 deadlock between commit monitor and THD::LOCK_thd_data mutex · 68cfcf9c
      sjaakola authored
      This commit contains only an mtr test for reproducing the issue in MDEV-29512.
      The actual fix will be pushed in the wsrep-lib repository.

      The hang in MDEV-29512 happens when binlog purging is attempted while there is
      one local BF-aborted transaction waiting for the commit monitor.

      The test launches a two-node cluster and enables binlogging with expire log days
      set, to force binlog purging to happen.
      A local transaction is executed so that it becomes the BF abort victim and has
      advanced to the replication stage, waiting for the commit monitor for final
      cleanup (to mark the position in innodb). After that, the applier is released
      to complete the BF abort and, due to the binlog configuration, to start binlog
      purging. This is where the hang would occur if the code were buggy; a sketch of
      a binlog configuration that forces purging follows.
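
      A hedged sketch of such a configuration (the option and statements are
      real; the exact values used by the test are assumptions):

        SET GLOBAL expire_logs_days = 1;
        FLUSH BINARY LOGS;  -- log rotation gives expired binlogs a chance
                            -- to be purged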
      Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
    • MDEV-30317 Transaction savepoint may cause failure in galera replaying · cd97523d
      sjaakola authored
      Created an mtr test reproducing the crash.

      Developed the actual fix for the issue:
      setting THD::system_thread_info.rpl_sql_info for the replayer thread,
      the same way as it is handled for appliers.

      Recorded the test result with the fix.
      Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
    • MDEV-29684 Fixes for cluster wide write conflict resolving · 66c05326
      sjaakola authored
      A cluster conflict victim's THD is marked with wsrep_aborter.
      THD::wsrep_aborter holds the thread ID of the high-priority thread
      which is currently carrying out BF aborting for this victim.

      However, the BF abort operation is not always successful,
      and in such a case the wsrep_aborter mark should be removed.
      In the old code, this wsrep_aborter resetting did not happen,
      and this could lead to a situation where the sticky wsrep_aborter
      mark prevents any further attempts to BF abort this transaction.

      This commit fixes this issue, and resets wsrep_aborter after an
      unsuccessful BF abort attempt.
      Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
    • Merge 10.3 into 10.4 · 71e8e493
      Marko Mäkelä authored
  8. 12 Jan, 2023 5 commits
  9. 11 Jan, 2023 7 commits
    • MDEV-30345 DML does not find rows it is supposed to · f3d8a546
      Monty authored
      This only happens with 'timestamp_column IN (SELECT ...)'.

      The reason was a missing assignment in Item_cache_timestamp::cache_value().
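
      A hedged sketch of the affected pattern (hypothetical tables):

        CREATE TABLE t1 (ts TIMESTAMP);
        CREATE TABLE t2 (ts TIMESTAMP);
        SELECT * FROM t1
        WHERE t1.ts IN (SELECT ts FROM t2);  -- before the fix, matching
                                             -- rows could be missed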
    • MDEV-25277: mysqlbinlog --verbose cannot read row events with compressed... · b194c83b
      Brandon Nesterenko authored
      MDEV-25277: mysqlbinlog --verbose cannot read row events with compressed columns: Don't know how to handle column type: 140
      
      Problem:
      =======
      mysqlbinlog cannot show the type of a compressed
      column when two levels of verbosity are provided.

      Solution:
      ========
      Extend the log event printing logic to handle and
      tag compressed types.

      Behavioral Changes:
      ==================
        Old: When mysqlbinlog is called in verbose mode and
      the database uses compressed columns, an error is
      returned to the user.

        New: The output will append " COMPRESSED" to the
      type of compressed columns.
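
      For illustration, a column using MariaDB column compression whose row
      events the fix lets mysqlbinlog decode (table name is hypothetical;
      two -v flags give the two levels of verbosity mentioned above):

        CREATE TABLE t1 (b BLOB COMPRESSED) ENGINE=InnoDB;
        INSERT INTO t1 VALUES ('payload');
        -- then: mysqlbinlog -vv <binlog-file> annotates the column type
        -- with " COMPRESSED" instead of erroring out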
      
      Reviewed By
      ===========
      Andrei Elkin <andrei.elkin@mariadb.com>
    • MDEV-30220: rsync SST completely ignores aria-log-dir-path · 53c4be7b
      Julius Goryavsky authored
      This commit adds support for the --aria-log-dir-path
      option on the command line and for the aria-log-dir-path
      option in the configuration file to the SST scripts. Before
      this change these parameters were completely ignored
      during SST - the SST scripts assumed that Aria log files are
      always located in the same directory as the InnoDB logs.
      
      Tests for this change will be added as a separate commit,
      along with tests for MDEV-30157 and MDEV-28669.
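
      A hedged configuration sketch (the option name is the one this commit
      teaches the SST scripts to honor; the path value is made up):

        [mysqld]
        aria-log-dir-path=/var/lib/mysql/aria-logs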
    • MDEV-30157: Galera SST doesn't properly handle undo* files from innodb · b84f3fa7
      Julius Goryavsky authored
      This fix adds separate handling for "undo*" files that contain undo
      logs as part of the InnoDB files, and adds a filter for undo* to the
      main filter used when initially transferring files with rsync.
    • pre-MDEV-30157 & pre-MDEV-28669: fixes before the main corrections · e4a4aad7
      Julius Goryavsky authored
      This commit adds more correct handling of parameters
      with paths when they contain leading or trailing spaces and/or
      slashes. It also fixes problems that occur when the user specified
      explicit paths to additional directories but these paths match
      the specified path of the data directory. In this case the additional
      subdirectories should be treated (in relation to the data directory)
      in the same way as if these paths were not specified, or as if they
      were implicitly specified as "." or "./". Prior to this fix, the
      existing code treated any values as completely separate
      directories, whether or not they actually pointed to the
      same location as datadir - and this sometimes
      resulted in incorrect file transfers.
      
      This fix does not contain separate tests, as tests will be
      part of the main commit(s). This fix has been made as a separate
      commit to facilitate review for major substantive fixes related
      to MDEV-30157 and MDEV-28669.
    • MDEV-28602 Wrong result with outer join, merged derived table and view · b928c849
      Sergei Petrunia authored
      (Variant 3, initial variant was by Rex Jonston)
      
      A LEFT JOIN with a constant as a column of the inner table produced a wrong
      query result if the optimizer had to write the inner table column into a
      temp table. Query pattern:
      
        SELECT ...
        FROM (SELECT /*non-mergeable select*/
              FROM t1 LEFT JOIN (SELECT 'Y' as Val) t2 ON ...) as tbl
      
      Fixed this by adding Item_direct_view_ref::save_in_field() which follows
      the pattern of Item_direct_view_ref's save_org_in_field(),
      save_in_result_field() and val_XXX() functions:
      * call check_null_ref() and handle NULL value
      * if we didn't get a NULL-complemented row, call Item_direct_ref's function.
    • Remove an unused parameter · b218dfea
      Marko Mäkelä authored
      lock_rec_has_to_wait(): Remove the unused parameter for_locking
      that had been originally added
      in commit df4dd593
  10. 10 Jan, 2023 2 commits
    • Merge branch '10.3' into 10.4 · fdcfc251
      Sergei Golubchik authored
    • clang15 warnings - unused vars and old prototypes · 56948ee5
      Daniel Black authored
      clang15 finally errors on old prototype definitions.

      It is also a lot fussier about variables that aren't used,
      as is the case a number of times with loop counters that
      aren't examined.

      RocksDB was complaining that its get_range function was
      declared without the array length in ha_rocksdb.h. A
      constant is used rather than trying to import the
      Rdb_key_def::INDEX_NUMBER_SIZE header (importing it was
      causing a lot of other definition errors). If the constant
      does change, we can be assured that the same compile warnings
      will tell us of the error.

      The ha_rocksdb::index_read_map_impl DBUG_EXECUTE_IF was similar
      to the existing endless functions used in replication tests.
      It is rather a moot point, as the rocksdb.force_shutdown test that
      uses myrocks_busy_loop_on_row_read is currently disabled.
  11. 09 Jan, 2023 1 commit
    • MDEV-17869 AddressSanitizer: use-after-poison in Item_change_list::rollback_item_tree_changes · 6cb84346
      Sergei Golubchik authored
      it's incorrect to use change_item_tree() to replace arguments
      of a top-level AND/OR, because the arguments are stored in a List,
      so a pointer to an argument is in the list_node, and individual
      list_nodes of a top-level AND/OR can be deleted in Item_cond::build_equal_items().
      In that case rollback_item_tree_changes() will modify the deleted object.

      Luckily, change_item_tree() is not needed for top-level
      AND/OR, because the whole top-level item is copied and preserved
      in prep_where and prep_on, and restored from there.

      So, just don't.

      In addition to the test case in the commit, this fixes
      * an ASAN failure of main.opt_tvc --ps
      * an ASAN failure of main.having_cond_pushdown --ps