1. 23 Jan, 2024 16 commits
    • Fixed compiler error/warning in backup_copy.cc · 6085fb19
      Monty authored
    • Fixed main.strict test to work with icc compiler · 286d6f23
      Monty authored
    • Disable main.gis from embedded · d2c431bc
      Monty authored
      Fails with:
      query 'select ST_AsWKT(GeometryCollection(Point(44, 6), @g))' failed:
      ER_ILLEGAL_VALUE_FOR_TYPE (1367): Illegal non geometric '@`g`' value
      found during parsing
    • MDEV-33279 Disable transparent huge pages after page buffers have been allocated · c777429c
      Monty authored
      MDEV-33279 Disable transparent huge pages after page buffers have
      been allocated
      
      The reason for disabling transparent huge pages (THP) is that they
      do not work well with MariaDB (or other databases; see links in
      MDEV-33279). The effect of using THP is that MariaDB will use much
      (10x) more memory and will not be able to release memory back to the
      system.
      
      Disabling THP is done after all storage engines are started, to allow
      buffer pools and keybuffers (big allocations) to be allocated as huge
      pages.
    • Trivial fixes: · 740d3e7a
      Monty authored
      - Removed unused variable 'file' from MYSQL_BIN_LOG::open()
      - Assigned a value to an uninitialized variable in connect/tabext.cpp
    • Temporary table name used by multi-update has 'null' as part of the name · 3c63e2f8
      Monty authored
      The problem was that TMP_TABLE_PARAM was not properly initialized.
    • MDEV-33118 optimizer_adjust_secondary_key_costs variable · 6f65e082
      Monty authored
      optimizer_adjust_secondary_key_costs is added to provide 2 small
      adjustments to the 10.x optimizer cost model. This can be used in
      cases where the optimizer wrongly uses a secondary key instead of a
      clustered primary key.
      
      The reason behind this change is that MariaDB 10.x does not take into
      account that for engines like InnoDB, scanning a primary key can be
      up to 7x faster than scanning a secondary key plus reading the row
      data through the primary key.
      
      The different values for optimizer_adjust_secondary_key_costs are:
      
      optimizer_adjust_secondary_key_costs=0
      - No changes to current model
      
      optimizer_adjust_secondary_key_costs=1
      - Ensure that the cost of secondary indexes is at least 5x the
        cost of a clustered primary key (if one exists).
        This disables part of the worst_seek optimization described below.
      
      optimizer_adjust_secondary_key_costs=2
      - Disable "worst_seek optimization" and adjust filter cost slightly
        (add cost of 1 if filter is used).
      
      The idea behind the 'worst_seek optimization' is that we limit the
      cost for all non-clustered ref accesses to the least of:
      - best-rows-by-range (or all rows if no range is found) / 10
      - scan-time-table (roughly the number of file blocks to scan the
        table) * 3
      
      In addition, we also do not try to use a rowid_filter if the number
      of rows estimated for 'ref' access is less than the worst_seek
      limitation.
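The worst_seek clamp described above can be sketched as simple arithmetic (a minimal illustration under the stated formula; the function names are made up and do not match the server's identifiers):

```python
def worst_seek_limit(best_rows_by_range, scan_time_table):
    """Illustrative clamp: the cost of a non-clustered ref access is
    capped at the least of range-rows / 10 and table-scan time * 3."""
    return min(best_rows_by_range / 10.0, scan_time_table * 3.0)

def ref_access_cost(raw_ref_cost, best_rows_by_range, scan_time_table):
    # the worst_seek optimization caps the raw ref cost at the limit
    return min(raw_ref_cost,
               worst_seek_limit(best_rows_by_range, scan_time_table))

# a ref access estimated at 500 seeks, a range estimate of 1000 rows and
# a table scan time of 100 gives a limit of min(100, 300) = 100
print(ref_access_cost(500.0, 1000.0, 100.0))  # -> 100.0
```

This also illustrates why the clamp can make 'ref' look much cheaper than range or a clustered key: the raw estimate of 500 is silently reduced to 100.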
      
      The idea is that worst_seek tries to take into account that if
      we do a lot of accesses through a key, the data is likely to be
      cached. However, it only does this for secondary keys, not for
      clustered keys or index-only reads.
      
      The effects of worst_seek are:
      - In some cases 'ref' will have a much lower cost than range or using
        a clustered key.
      - Some possible rowid filters for secondary keys will be ignored.
      
      When implementing optimizer_adjust_secondary_key_costs=2, I noticed
      that the costs for ref+filter and range+filter are calculated
      slightly differently.  This caused a lot of range and range+filter
      plans to change to ref+filter, which is not good, as range+filter
      provides the optimizer a better estimate of how many accepted rows
      there will be in the result set.
      Adding an extra small cost (1 seek) when using a filter mitigated
      the above problems in almost all cases.
      
      This patch should not be applied to MariaDB 11.0 as worst_seeks is
      removed in 11.0 and the cost calculation for clustered keys, secondary
      keys, index scan and filter is more exact.
      
      Test case changes for --optimizer-adjust_secondary_key_costs=1
      (Fix secondary key costs to be 5x of primary key):
      
      - stat_tables_innodb:
        - Complex change (probably ok as number of rows are really small)
          - ref over 1 row changed to range over 10 rows with join buffer
          - ref over 5 rows changed to eq_ref
          - secondary ref over 1 row changed to ref of primary key over 4 rows
          - Change of key to use longer key with index pushdown (a little
            bit worse but not significant).
        - Change to use secondary (1 row) -> primary (4 rows)
      - rowid_filter_innodb:
        - index_merge (2 rows) & ref (1) -> all (23 rows) -> primary eq_ref.
      
      Test case changes for --optimizer-adjust_secondary_key_costs=2
      (remove of worst_seeks & adjust filter cost):
      
      - stat_tables_innodb:
        - Join order change (probably ok as number of rows are really small)
        - ref (5 rows) & ref(1 row) changed to range (10 rows & join buffer)
          & eq_ref.
      - selectivity_innodb:
        - ref -> ref|filter  (ok)
      - rowid_filter_innodb:
        - ref -> ref|filter (ok)
        - range|filter (64 rows) changed to ref|filter (128 rows).
          ok as ref|filter outputs wrong number of rows in explain.
      - range, range_mrr_icp:
        - ref (500 rows) -> ALL (1000 rows) (ok)
      - select_pkeycache, select, select_jcl6:
        - ref|filter (2 rows) -> ref (2 rows) (ok)
      - selectivity:
        - ref -> ref_filter (ok)
      - range:
        - Change of 'filtered' but no stat or plan change (ok)
      - selectivity:
       - ref -> ref+filter (ok)
       - Change of filtered but no plan change (ok)
      - join_nested_jcl6:
        - range -> ref|filter (ok as only 2 rows)
      - subselect3, subselect3_jcl6:
        - ref_or_null (4 rows) -> ALL (10 rows) (ok)
        - Index_subquery (4 rows) -> ALL (10 rows)  (ok)
      - partition_mrr_myisam, partition_mrr_aria and partition_mrr_innodb:
        - Uses ALL instead of REF for a key value that is the same for > 50%
          of rows.  (good)
      - order_by_innodb:
        - range (200 rows) -> ref (20 rows)+filesort (ok)
      - subselect_sj2_mat:
        - One test changed. One ALL removed and replaced with eq_ref. Likely
          to be better.
      - join_cache:
        - Changed ref over 60% of the rows to use hash join (ok)
      - opt_tvc:
        - Changed to use eq_ref instead of ref with plan change (probably ok)
      - opt_trace:
        - No worst/max seeks clipping (good).
        - Almost double range_scan_time and index_scan_time (ok).
      - rowid_filter:
        - ref -> ref|filtered (ok)
        - range|filter (77 rows) changed to ref|filter (151 rows).  Probably
          ok as ref|filter outputs wrong number of rows in explain.
      
      Reviewer: Sergei Petrunia <sergey@mariadb.com>
    • Ensure keyread_tmp variable is assigned. · c49fd902
      Monty authored
      In the case of calculating the cost of a ref access for which there
      exists a usable range, the variable keyread_tmp would always be 0.
      If there is also another index that could be used as a filter, the
      cost of that filter would be wrong.
      In many cases the 'worst_seeks optimization' would prevent the
      filter from getting used, which is why this bug has not been noticed
      before.
      
      The next commit, which allows one to disable worst_seeks, will have a
      test case for this bug.
    • Fixed possible mutex-wrong-order with KILL USER · 2fcb5d65
      Monty authored
      The old code collected a list of THDs, protected the THDs from being
      deleted by locking two mutexes, and then later, in a separate loop,
      sent a kill signal to each THD.

      The problem with this approach is that, as THDs can be reused, the
      second time a THD is killed the mutexes can be taken in a different
      order, which causes failures in safe_mutex.
      
      Fixed by sending the kill signal directly instead of collecting the
      THDs in a list to be signaled later.  This is the same approach we
      are using in kill_zombie_dump_threads().
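The fixed approach can be sketched like this (an illustrative model only; `Session` and `kill_user_sessions` are made-up simplifications, not the server's THD code): each session's lock is taken exactly once, during the iteration, instead of being re-acquired later in a second loop where the order may differ.

```python
import threading

class Session:
    """Stand-in for a THD: a user name, a per-session lock, a kill flag."""
    def __init__(self, user):
        self.user = user
        self.lock = threading.Lock()
        self.killed = False

def kill_user_sessions(sessions, user):
    """Signal each matching session directly while iterating, so locks
    are always taken one at a time and in list order."""
    for s in sessions:
        with s.lock:            # lock, signal, unlock - no deferred list
            if s.user == user:
                s.killed = True

sessions = [Session("alice"), Session("bob"), Session("alice")]
kill_user_sessions(sessions, "alice")
print([s.killed for s in sessions])  # -> [True, False, True]
```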
      
      Other things:
      - Reset safe_mutex_t->locked_mutex when freed (Safety fix)
    • Added socketpair.c as a replacement for the 'pipe()' call on Windows. · 1ca813bf
      Monty authored
      This was needed to get semisync to work on Windows.
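The general idea — a connected socket pair standing in for a pipe, so the descriptors can be waited on with select() on Windows too — can be sketched with Python's portable wrapper (a sketch of the technique, not the actual socketpair.c code):

```python
import socket

def pipe_like():
    """Return (reader, writer) endpoints that behave like a pipe but are
    built from a connected socket pair, so they are select()-able even on
    Windows, where anonymous pipes are not."""
    return socket.socketpair()

r, w = pipe_like()
w.sendall(b"wake")   # writer side: signal the waiter
print(r.recv(4))     # reader side: consume the signal
r.close()
w.close()
```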
    • MDEV-32551: "Read semi-sync reply magic number error" warnings on master · 7af50e4d
      Michael Widenius authored
      rpl_semi_sync_slave_enabled_consistent.test and the first part of
      the commit message come from Brandon Nesterenko.
      
      A test to show how to induce the "Read semi-sync reply magic number
      error" message on a primary. In short, if semi-sync is turned on
      during the handshake process between a primary and replica, but a
      user later disables the rpl_semi_sync_slave_enabled variable while
      the replica's IO thread is running, then when the IO thread exits,
      the replica can skip a necessary call to kill_connection() in
      repl_semisync_slave.slave_stop() due to its reliance on a global
      variable. The replica will then send a COM_QUIT packet to the
      primary on an active semi-sync connection, causing the magic number
      error.
      
      The test in this patch exits the IO thread by forcing an error;
      note that a call to STOP SLAVE could also do this, but it ends up
      needing more synchronization. That is, the STOP SLAVE command also
      tries to kill the VIO of the replica, which creates a race with the
      IO thread trying to send the COM_QUIT before this happens (working
      around that would need more debug_sync). See THD::awake_no_mutex for
      details on the killing of the replica's vio.
      
      Notes:
      - The MariaDB documentation does not make it clear that when one
        enables semi-sync replication it does not matter whether one
        enables it first on the master or the slave. Any order works.
      
      Changes done:
      - The rpl_semi_sync_slave_enabled variable is now a default value for
        when semisync is started. The variable no longer affects semisync
        if it is already running. This fixes the originally reported bug.
        Internally we now use repl_semisync_slave.get_slave_enabled()
        instead of rpl_semi_sync_slave_enabled. To check if semisync is
        active one should check the @@rpl_semi_sync_slave_status variable
        (as before).
      - The semisync protocol conflicts with the way the original
        MySQL/MariaDB client-server protocol was designed (client-server
        send and reply packets are strictly ordered and include a packet
        number to allow one to check if a packet is lost). When using
        semi-sync, the master and slave can send packets at 'any time', so
        packet numbering does not work. The 'solution' has been that each
        communication starts with packet number 1, but in some cases there
        is still a chance that the packet number check can fail.  Fixed by
        adding a flag (pkt_nr_can_be_reset) in the NET struct that one can
        use to signal that packet number checking should not be done. This
        flag is set when semi-sync is used.
      - Added Master_info::semi_sync_reply_enabled to allow one to configure
        some slaves with semisync and other slaves without semisync.
        Removed the global variable semi_sync_need_reply that would not
        work with multi-master.
      - Repl_semi_sync_master::report_reply_packet() can now recognize
        the COM_QUIT packet from semisync slave and not give a
        "Read semi-sync reply magic number error" error for this case.
        The slave will be removed from the Ack listener.
      - On Windows, don't stop semisync Ack listener just because one
        slave connection is using socket_id > FD_SETSIZE.
      - Removed busy loop in Ack_receiver::run() by using
       "Self-pipe trick" to signal new slave and stop Ack_receiver.
      - Changed some Repl_semi_sync_slave functions that always return 0
        from int to void.
      - Added Repl_semi_sync_slave::slave_reconnect().
      - Removed dummy_function Repl_semi_sync_slave::reset_slave().
      - Removed some duplicate semisync notes from the error log.
      - Add test of "if (get_slave_enabled() && semi_sync_need_reply)"
        before calling Repl_semi_sync_slave::slave_reply().
        (Speeds up the code as we can skip all initializations).
      - If repl_semisync_slave.slave_reply() fails, we disable semisync
        for that connection.
      - We do not call semisync.switch_off() if there are no active slaves.
        Instead we check in Repl_semi_sync_master::commit_trx() if there are
        no active threads. This simplifies the code.
      - Changed assert() to DBUG_ASSERT() to ensure that the DBUG log is
        flushed in case of asserts.
      - Removed the internal rpl_semi_sync_slave_status as it is not needed
        anymore. The @@rpl_semi_sync_slave_status status variable is now
        mapped to rpl_semi_sync_enabled.
      - Removed rpl_semi_sync_slave_enabled as it is not needed anymore.
        Repl_semi_sync_slave::get_slave_enabled() contains the active status.
      - Added checking that we do not add a slave twice with
        Ack_receiver::add_slave(). This could happen with old code.
      - Removed Repl_semi_sync_master::check_and_switch() as it is not
        needed anymore.
      - Ensure that when we call Ack_receiver::remove_slave() that the slave
        is removed from the listener before function returns.
      - Call listener.listen_on_sockets() outside of mutex for better
        performance and less contested mutex.
      - Ensure that listening is ignoring newly added slaves when checking for
        responses.
      - Fixed so that the master ack_receiver listener is not killed if
        there are no connected slaves (which would stop semisync handling
        of future connections). This could happen if all slave sockets
        were marked as unreliable.
      - Added unlink() to base_ilist_iterator and remove() to
        I_List_iterator. This enables us to remove 'dead' slaves in
        Ack_receiver::run().
      - kill_zombie_dump_threads() now does killing of dump threads properly.
        - It can now kill several threads (should be impossible but could
          happen if IO slaves reconnect very fast).
        - We now wait until the dump thread is done before starting the
          dump.
      - Added an error if kill_zombie_dump_threads() fails.
      - Set thd->variables.server_id before calling
        kill_zombie_dump_threads(). This simplifies the code.
      - Added a lot of comments both in code and tests.
      - Removed DBUG_EVALUATE_IF "failed_slave_start" as it is not used.
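The "Self-pipe trick" mentioned in the Ack_receiver change above can be sketched as follows (an illustration of the general technique only; `run_once` and `wake` are made-up names, and on Windows the pipe would itself be a socketpair, per the earlier commit):

```python
import os
import select
import threading

def wake(wfd):
    """Anyone adding a slave or stopping the receiver writes one byte to
    the self-pipe; this wakes the select() immediately."""
    os.write(wfd, b"x")

def run_once(rfd, slave_fds, timeout=5.0):
    """One iteration of the receiver loop: block until a slave socket is
    readable or someone pokes the self-pipe - no busy polling."""
    ready, _, _ = select.select([rfd] + slave_fds, [], [], timeout)
    if rfd in ready:
        os.read(rfd, 64)   # drain the wake-up byte(s)
        return "woken"
    return "slave-data" if ready else "timeout"

rfd, wfd = os.pipe()
threading.Thread(target=wake, args=(wfd,)).start()
print(run_once(rfd, []))  # -> woken
```

The design point is that the listener never needs to poll a shared flag in a loop: state changes are delivered through a file descriptor it is already waiting on.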
      
      Test changes:
      - rpl.rpl_session_var2 added which runs rpl.rpl_session_var test with
        semisync enabled.
      - Some timings changed slightly with the startup of the slave, which
        caused rpl_binlog_dump_slave_gtid_state_info.test to fail as it
        checked the error log file before the slave had started properly.
        Fixed by adding wait_for_pattern_in_file.inc, which allows waiting
        for a pattern to appear in the log file.
      - Tests have been updated so that we first set
        rpl_semi_sync_master_enabled on the master and then set
        rpl_semi_sync_slave_enabled on the slaves (this follows how the
        MariaDB documentation describes setting up semi-sync).
      - Error text "Master server does not have semi-sync enabled" has been
        replaced with "Master server does not support semi-sync" for the
        case when the master supports semi-sync but semi-sync is not
        enabled.
      
      Other things:
      - Some trivial cleanups in Repl_semi_sync_master::update_sync_header().
      - We should in 11.3 change the default value for
        rpl-semi-sync-master-wait-no-slave from TRUE to FALSE, as TRUE
        does not make much sense as a default. The main difference with
        using FALSE is that we do not wait for a semisync ACK if there are
        no slave threads. In the case of TRUE we wait once, which did not
        bring any notable benefits, only slower startup of a master
        configured for using semisync.
      
      Co-author: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
      
      This solves the problem reported in MDEV-32960 where a new
      slave may not be registered in time and the master disables
      semi sync because of that.
    • MDEV-32906: The SQL error plugin prints (null) as database if the
      mariadb client is not using any database to execute the SQL · ee7cc0a4
      Rucha Deodhar authored
      
      Analysis:
      When there is no database, the database string is NULL, so (null)
      gets printed.
      Fix:
      Print NULL instead of (null), because when there is no database,
      SELECT DATABASE() returns NULL, so NULL is the more appropriate
      choice.
    • MDEV-27087: Add thread ID and database / table, where the error
      occurred, to SQL error plugin · 90cd712b
      Rucha Deodhar authored
      
      A new plugin variable "with_db_and_thread_info" is added, which makes
      the plugin print the thread id and database name to the logfile. The
      value is stored in the variable "with_db_and_thread_info".

      log_sql_errors() is responsible for printing to the log. If the
      variable is enabled, print both the thread id and database name;
      otherwise skip them.
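A sketch of the described conditional log formatting, also covering the previous commit's NULL-instead-of-(null) fix (illustrative only; `format_error_line` and its parameters are made-up names, not the plugin's actual code):

```python
def format_error_line(msg, thread_id=None, db=None,
                      with_db_and_thread_info=False):
    """Prefix the logged error with the thread id and database name only
    when the plugin variable is enabled; print NULL when no database is
    set, matching SELECT DATABASE()."""
    if not with_db_and_thread_info:
        return msg
    db_name = db if db is not None else "NULL"
    return "%d %s: %s" % (thread_id, db_name, msg)

print(format_error_line("syntax error", 42, None, True))  # -> 42 NULL: syntax error
print(format_error_line("syntax error"))                  # -> syntax error
```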
    • Merge remote-tracking branch 10.5 into 10.6 · 4ef9c9bb
      Daniel Black authored
      Notably MDEV-33290, the ColumnStore disable, stays in 10.5.
    • MDEV-33290: Disable ColumnStore based on boost version · 5ce6a352
      Daniel Black authored
      Pending MCOL-5611 adding support for Boost-1.80: the "next_prime"
      function disappears after
      https://github.com/boostorg/unordered/blob/boost-1.79.0/include/boost/unordered/detail/implementation.hpp
      which makes that the currently highest supported version.

      Let's check for this version.

      While CMake-3.19+ supports version ranges in package determination,
      this isn't supported for Boost in CMake-3.28. So we check for 1.80
      and don't compile ColumnStore.
    • sql_test.cc compile fix · 13e49b78
      Dmitry Shulga authored
  2. 22 Jan, 2024 7 commits
  3. 19 Jan, 2024 8 commits
  4. 18 Jan, 2024 1 commit
  5. 17 Jan, 2024 4 commits
    • Merge 10.5 into 10.6 · 3a96eba2
      Marko Mäkelä authored
    • MDEV-30940: Try to fix the test · 6a514ef6
      Marko Mäkelä authored
    • MDEV-33213 History list is not shrunk unless there is a pause in the workload · f8c88d90
      Marko Mäkelä authored
      The parameter innodb_undo_log_truncate=ON enables a multi-phased logic:
      1. Any "producers" (new starting transactions) are prohibited
      from using the rollback segments that reside in the undo tablespace.
      2. Any transactions that use any of the rollback segments must be
      committed or aborted.
      3. The purge of committed transaction history must process all the
      rollback segments.
      4. The undo tablespace is truncated and rebuilt.
      5. The rollback segments are re-enabled for new transactions.
      
      There was one flaw in this logic: The first step was not being invoked
      as often as it could be, and therefore innodb_undo_log_truncate=ON
      would have no chance to work during a heavy write workload.
      
      Independent of innodb_undo_log_truncate, even after
      commit 86767bcc
      we are missing some chances to free processed undo log pages.
      If we prohibited the creation of new transactions in one busy
      rollback segment at a time, we would eventually be guaranteed
      to be able to free such pages.
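The round-robin blocking of one rollback segment at a time can be sketched as follows (a simplified illustration; only the field name `skipped_rseg` comes from the commit, the functions are made up):

```python
def advance_skipped_rseg(skipped, n_rsegs):
    """Move the 'skipped' candidate to the next rollback segment, so
    every busy segment eventually gets a window with no new transactions
    and its processed undo log pages can be freed."""
    return (skipped + 1) % n_rsegs

def pick_rseg_for_trx(preferred, skipped, n_rsegs):
    """A new transaction avoids the currently skipped segment and falls
    through to the next one."""
    if preferred == skipped:
        return (preferred + 1) % n_rsegs
    return preferred

print(pick_rseg_for_trx(3, 3, 128))  # -> 4
```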
      
      purge_sys_t::skipped_rseg: The current candidate rollback segment
      for shrinking the history independent of innodb_undo_log_truncate.
      
      purge_sys_t::iterator::free_history_rseg(): Renamed from
      trx_purge_truncate_rseg_history(). Implement the logic
      around purge_sys.m_skipped_rseg.
      
      purge_sys_t::truncate_undo_space: Renamed from truncate.
      
      purge_sys.truncate_undo_space.last: Changed the type to integer
      to get rid of some pointer dereferencing and conditional branches.
      
      purge_sys_t::truncating_tablespace(), purge_sys_t::undo_truncate_try():
      Refactored from trx_purge_truncate_history().
      Set purge_sys.truncate_undo_space.current if applicable,
      or return an already set purge_sys.truncate_undo_space.current.
      
      purge_coordinator_state::do_purge(): Invoke
      purge_sys_t::truncating_tablespace() as part of the normal work loop,
      to implement innodb_undo_log_truncate=ON as often as possible.
      
      trx_purge_truncate_rseg_history(): Remove a redundant parameter.
      
      trx_undo_truncate_start(): Replace dead code with a debug assertion.
      
      Correctness tested by: Matthias Leich
      Performance tested by: Axel Schwenke
      Reviewed by: Debarun Banerjee
  6. 16 Jan, 2024 2 commits
  7. 15 Jan, 2024 1 commit
    • MDEV-32968 InnoDB fails to restore tablespace first page from
      doublewrite buffer when page is empty · caad34df
      Thirunarayanan Balathandayuthapani authored
      
      - InnoDB fails to find the space id from page 0 of
      the tablespace. In that case, InnoDB can use the
      doublewrite buffer to recover page 0 and write it
      into the file.
      
      - buf_dblwr_t::init_or_load_pages(): Loads only the pages
      which are valid (page LSN >= checkpoint LSN). To do that,
      InnoDB has to open the redo log before the system
      tablespace and read the latest checkpoint information.
      
      recv_dblwr_t::find_first_page():
      1) Iterate the doublewrite buffer pages and find the 0th page
      2) Read the tablespace flags, space id from the 0th page.
      3) Read the 1st, 2nd and 3rd page from tablespace file and
      compare the space id with the space id which is stored
      in doublewrite buffer.
      4) If it matches then we can write into the file.
      5) Return space which matches the pages from the file.
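The steps above can be sketched as a simplified model (dict-based pages and made-up helper names, not InnoDB's actual page format or API):

```python
def find_first_page(dblwr_pages, read_tablespace_page):
    """Sketch of the recovery steps: locate page 0 in the doublewrite
    buffer, take its space id, then confirm that pages 1-3 of the data
    file carry the same space id before trusting the saved copy."""
    for page in dblwr_pages:
        if page["page_no"] != 0:              # step 1: find the 0th page
            continue
        space_id = page["space_id"]           # step 2: read space id/flags
        for page_no in (1, 2, 3):             # step 3: compare with file
            file_page = read_tablespace_page(page_no)
            if file_page["space_id"] != space_id:
                break
        else:
            return page                       # steps 4-5: safe to restore
    return None

dblwr = [{"page_no": 0, "space_id": 7}]
print(find_first_page(dblwr, lambda n: {"space_id": 7}) is not None)  # -> True
```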
      
      SysTablespace::read_lsn_and_check_flags(): Remove the
      retry logic for validating the first page. After
      restoring the first page from doublewrite buffer,
      assign tablespace flags by reading the first page.
      
      recv_recovery_read_max_checkpoint(): Reads the maximum
      checkpoint information from the log file.

      recv_recovery_from_checkpoint_start(): Avoid reading
      the checkpoint header information from the log file.
      
      Datafile::validate_first_page(): Throw an error if
      first page validation fails.
  8. 14 Jan, 2024 1 commit