1. 12 Feb, 2024 3 commits
    • MDEV-29369: rpl.rpl_semi_sync_shutdown_await_ack fails regularly with Result content mismatch · 03d1346e
      Brandon Nesterenko authored
      This test was prone to failures for a few reasons, summarized below:
      
       1) MDEV-32168 introduced “only_running_threads=1” to
      slave_stop.inc, which allowed the stop logic to bypass an
      attempting-to-reconnect IO thread. That is, the IO thread could
      realize the master shutdown in `read_event()`, and thereby call into
      `try_to_reconnect()`. This would leave the IO thread up when the
      test expected it to be stopped. Fixed by explicitly stopping the
      IO thread and allowing an error state, as the above case would
      lead to errno 2003.
      
       2) On slow systems (or those running profiling tools, e.g. MSAN),
      the waiting-for-ack transaction can complete before the system
      processes the `SHUTDOWN WAIT FOR ALL SLAVES`. There was shutdown
      preparation logic in-between the transaction and the shutdown itself,
      which contributed to this problem. This patch also moves that
      preparation logic before the transaction, so there is less to do
      in-between the calls (see the sketch after this list).
      
       3) Changed the work-around for MDEV-28141 to use debug_sync instead
      of a sleep delay, as it was still possible to hit the bug on very
      slow systems.
      
       4) Masked the MTR variable reset with disable/enable query log.
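      
      A minimal SQL sketch of the reordering described in 2), assuming a
      single blocking transaction; the connection layout, table name and
      semi-sync variables shown here are illustrative and not taken from
      the actual .test file:
      
        # connection master: do all shutdown preparation up front
        SET GLOBAL rpl_semi_sync_master_enabled= ON;
        SET GLOBAL rpl_semi_sync_master_timeout= 1000000;
      
        # connection con1: this statement blocks, waiting for the semi-sync ACK
        INSERT INTO t1 VALUES (1);
      
        # connection master: nothing left to prepare, shut down immediately
        SHUTDOWN WAIT FOR ALL SLAVES;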
      
      Reviewed By:
      ============
      Kristian Nielsen <knielsen@knielsen-hq.org>
    • MDEV-14357: rpl.rpl_domain_id_filter_io_crash failed in buildbot with wrong result · ee895583
      Brandon Nesterenko authored
      There was a race condition with the SQL thread: depending on whether it
      was killed before or after it had executed the fake/generated IGN_GTIDS
      Gtid_list_log_event, it may or may not have updated gtid_slave_pos with
      the position of the ignored events. Then, the slave would be restarted
      while resetting IGNORE_DOMAIN_IDS to be empty, which would result in
      the slave requesting different starting locations, depending on
      whether or not gtid_slave_pos was updated. And, because previously
      ignored events could now be requested and executed (no longer
      ignored), their presence would fail the test.
      
      This patch fixes this in two ways: first, by using GTID positions for
      synchronization rather than binlog file positions; and second, by
      synchronizing the SQL thread's gtid_slave_pos with the ignored events
      before killing the SQL thread.
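      
      A rough illustration of the GTID-based synchronization (the GTID value
      and timeout below are made up; the actual test uses its own helper
      includes):
      
        # on the slave: wait until the ignored events' GTIDs are reflected
        # in gtid_slave_pos before the SQL thread is killed
        SELECT MASTER_GTID_WAIT('0-1-100', 30);
        SELECT @@global.gtid_slave_pos;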
      
      To consistently reproduce the test failure, the following patch can
      be applied:
      
      diff --git a/sql/log_event_server.cc b/sql/log_event_server.cc
      index f51f5b7deec..de62233acff 100644
      --- a/sql/log_event_server.cc
      +++ b/sql/log_event_server.cc
      @@ -3686,6 +3686,12 @@ Gtid_list_log_event::do_apply_event(rpl_group_info *rgi)
           void *hton= NULL;
           uint32 i;
      
      +    sleep(1);
      +    if (rli->sql_driver_thd->killed || rli->abort_slave)
      +    {
      +      return 0;
      +    }
      +
      
      Reviewed By:
      ============
      Kristian Nielsen <knielsen@knielsen-hq.org>
    • Merge 10.4 into 10.5 · 8ec12e0d
      Marko Mäkelä authored
  2. 11 Feb, 2024 1 commit
  3. 10 Feb, 2024 2 commits
    • Oleg Smirnov's avatar
      c32e59ac
    • MDEV-30660 Aggregation functions fail to leverage uniqueness property · 15623c7f
      Oleg Smirnov authored
      When executing a statement of the form
        SELECT AGGR_FN(DISTINCT c1, c2,..,cn) FROM t1,
      where AGGR_FN is an aggregate function such as COUNT(), AVG() or SUM(),
      and a unique index exists on table t1 covering some or all of the
      columns (c1, c2,..,cn), the retrieved values are inherently unique.
      Consequently, the need for de-duplication imposed by the DISTINCT
      clause can be eliminated, leading to optimization of aggregation
      operations.
      This optimization applies under the following conditions:
        - only one table involved in the join (not counting const tables)
        - some arguments of the aggregate function are fields
              (not functions/subqueries)
      
      This optimization extends to queries of the form
        SELECT AGGR_FN(DISTINCT c1, c2,..,cn) FROM t1 GROUP BY cx,..cy
      when a unique index covers some or all of the columns
      (c1, c2,..cn, cx,..cy).
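      
      A hedged illustration of query shapes that should qualify under the
      conditions above (the table and columns are made up):
      
        CREATE TABLE t1 (c1 INT, c2 INT, UNIQUE KEY (c1, c2));
      
        -- (c1, c2) is covered by a unique index, so the combinations are
        -- already distinct and the DISTINCT de-duplication can be skipped:
        SELECT COUNT(DISTINCT c1, c2) FROM t1;
      
        -- GROUP BY form: the unique index covers the aggregated column plus
        -- the grouping column, so DISTINCT is again redundant:
        SELECT SUM(DISTINCT c1) FROM t1 GROUP BY c2;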
  4. 09 Feb, 2024 1 commit
  5. 08 Feb, 2024 10 commits
    • MDEV-33277 In-place upgrade causes invalid AUTO_INCREMENT values · 0381921e
      Marko Mäkelä authored
      MDEV-33308 CHECK TABLE is modifying .frm file even if --read-only
      
      As noted in commit d0ef1aaf,
      MySQL as well as older versions of MariaDB server would, during
      ALTER TABLE ... IMPORT TABLESPACE, write bogus values to the
      PAGE_MAX_TRX_ID field in pages of the clustered index, instead of
      letting that field remain 0.
      In commit 8777458a this field
      was repurposed for PAGE_ROOT_AUTO_INC in the clustered index root page.
      
      To avoid trouble when upgrading from MySQL or older versions of MariaDB,
      we will try to detect and correct bogus values of PAGE_ROOT_AUTO_INC
      when opening a table for the first time from the SQL layer.
      
      btr_read_autoinc_with_fallback(): Add the parameters mysql_version, max
      to indicate the TABLE_SHARE::mysql_version of the .frm file and the
      maximum value allowed for the type of the AUTO_INCREMENT column.
      In case the table was originally created in MySQL or an older version of
      MariaDB, read also the maximum value of the AUTO_INCREMENT column from
      the table and reset the PAGE_ROOT_AUTO_INC if it is above the limit.
      
      dict_table_t::get_index(const dict_col_t &) const: Find an index that
      starts with the specified column.
      
      ha_innobase::check_for_upgrade(): Return HA_ADMIN_FAILED if InnoDB
      needs upgrading but is in read-only mode. In this way, the call to
      update_frm_version() will be skipped.
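      
      A hedged example of the user-visible path (the table name is made up;
      whether a given table actually needs the fix depends on how and where
      it was originally created):
      
        -- after an in-place upgrade from MySQL or an older MariaDB:
        CHECK TABLE t1 FOR UPGRADE;
        -- under --read-only, InnoDB now reports failure here instead of
        -- letting CHECK TABLE rewrite the .frm file
        -- (the call to update_frm_version() is skipped)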
      
      row_import_autoinc(): Adjust the AUTO_INCREMENT column at the end of
      ALTER TABLE...IMPORT TABLESPACE. This refinement was suggested by
      Debarun Banerjee.
      
      The changes outside InnoDB were developed by Michael 'Monty' Widenius:
      
      Added print_check_msg() service for easy reporting of check/repair messages
      in ENGINE=Aria and ENGINE=InnoDB.
      Fixed CHECK TABLE to not update the .frm file under --read-only.
      Added 'handler_flags' to HA_CHECK_OPT as a way for storage engines to
      store state from handler::check_for_upgrade().
      
      Reviewed by: Debarun Banerjee
    • MDEV-15703: Crash in EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT · e48bd474
      Dmitry Shulga authored
      This patch fixes the issue with passing the DEFAULT or IGNORE values to
      positional parameters for some kinds of SQL statements to be executed
      as prepared statements.
      
      The main idea of the patch is to associate the actual value passed
      by the USING clause with the positional parameter represented by
      the Item_param class. Such an association must be performed on execution
      of an UPDATE statement in PS/SP mode. Another corner case that resulted
      in a server crash is the handling of CREATE TABLE with a positional
      parameter placed after the DEFAULT clause, or of a CALL statement with
      either DEFAULT or IGNORE passed as the actual value for the positional
      parameter. This case is fixed by checking whether an error is set in the
      diagnostics area at the function pack_vcols() on return from the
      function pack_expression().
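      
      The statement from the bug title, plus a hedged sketch of the UPDATE
      case mentioned above (t1 and its column are made up):
      
        EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT;
      
        -- positional parameter bound to IGNORE in an UPDATE executed as a
        -- prepared statement:
        EXECUTE IMMEDIATE 'UPDATE t1 SET a = ?' USING IGNORE;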
    • MDEV-15703: Crash in EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT... · 6b2cd786
      Dmitry Shulga authored
      MDEV-15703: Crash in EXECUTE IMMEDIATE 'CREATE OR REPLACE TABLE t1 (a INT DEFAULT ?)' USING DEFAULT, UBSAN runtime error: member call on null pointer of type 'struct TABLE_LIST' in Item_param::save_in_field
      
      This is the prerequisite patch to refactor the method
        Item_default_value::fix_fields.
      The former implementation of this method was extracted and placed
      into the standalone function make_default_field() and the method
      Item_default_value::tie_field(). The motivation for this modification
      is the upcoming changes for the core implementation of task MDEV-15703,
      since these functions will be used from several places within
      the source code.
    • MDEV-33400 Adaptive hash index corruption after DISCARD TABLESPACE · 85db5347
      Marko Mäkelä authored
      row_discard_tablespace(): Do not invoke dict_index_t::clear_instant_alter()
      because that would corrupt any adaptive hash index entries in the table.
      
      row_import_for_mysql(): Invoke dict_index_t::clear_instant_alter()
      after detaching any adaptive hash index entries.
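      
      For context, the SQL-level sequence involved (illustrative table name):
      
        ALTER TABLE t1 DISCARD TABLESPACE;  -- must no longer clear instant-ALTER
                                            -- metadata here, as that would corrupt
                                            -- adaptive hash index entries
        ALTER TABLE t1 IMPORT TABLESPACE;   -- the metadata is cleared here instead,
                                            -- after the adaptive hash index entries
                                            -- have been detached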
    • bump the VERSION · 23101304
      Daniel Bartholomew authored
    • MDEV-4827 mysqldump --dump-slave=2 --master-data=2 doesn't record both · 915d9514
      Daniel Black authored
      Recording both is useful on a replication relay when the backup
      can be used to replace the server, or to act as a new replica of the
      server.
      
      If either option is given as 2 (commented output), allow the other
      option to be present as well.
      
      This still disables --dump-slave=1 --master-data=1, as having a
      CHANGE MASTER TO and a START SLAVE for different positions would be
      confusing and dangerous if one tried to execute the output. The
      previous behaviour of silently disabling --master-data occurs in
      this case.
      
      The commented code related to --dump-slave/--master-data is greatly
      expanded for human consumption.
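      
      A rough sketch of the kind of commented statements involved when both
      --dump-slave=2 and --master-data=2 are given; the positions and GTID
      below are made up, and the exact comment text in the dump is not
      reproduced here:
      
        -- from --master-data=2: the dumped server's own binlog position
        -- CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=4;
      
        -- from --dump-slave=2: the upstream master's position, preferring
        -- GTID-based replication
        -- SET GLOBAL gtid_slave_pos='0-1-42';
        -- CHANGE MASTER TO MASTER_USE_GTID=slave_pos;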
      
      A redundant opt_slave_data= 0 was removed from get_opts. If
      --dump-slave=1 or 2, then the only possible value of --master-data
      is a valid one.
      
      Re-ordered to give preference to GTID-based replication.
      
      Based on code from Elena Stepanova.
      
      Review by: Brandon Nesterenko and Anel Husakovic
    • MDEV-4827: prelude - additional gtid/no-gtid tests for mysqldump · f7adf129
      Daniel Black authored
      This will make it easier to show changes.
    • MDEV-33274 The test encryption.innodb-redo-nokeys often fails · 5e704706
      mariadb-DebarunBanerjee authored
      If we fail to open a tablespace while looking for FILE_CHECKPOINT, we
      set the corruption flag. Specifically, if the encryption key is missing,
      we would not be able to open an encrypted tablespace and the flag could
      be set. We missed checking for this flag and reported "Missing
      FILE_CHECKPOINT".
      
      Addressed a review comment to improve the test: flush pages before
      starting the no-checkpoint block. This should reduce the number of cases
      where the test is skipped because some intermediate checkpoint is
      triggered.
    • Fix inconsistent definition of... · 6e406bb6
      Daniel Lenski authored
      Fix inconsistent definition of PERFORMANCE_SCHEMA.REPLICATION_APPLIER_STATUS.COUNT_TRANSACTIONS_RETRIES column
      
      This column (`COUNT_TRANSACTIONS_RETRIES`) is defined as `BIGINT UNSIGNED`
      (64-bit unsigned integer) in the user-visible SQL definition:
      https://github.com/MariaDB/server/blob/182ff21ace34ea4f00fb5b66689b172323d91f99/storage/perfschema/table_replication_applier_status.cc#L66
      
          "COUNT_TRANSACTIONS_RETRIES BIGINT unsigned not null comment 'The number of retries that were made because the replication SQL thread failed to apply a transaction.',"
      
      And its value is internally set/updated using the `set_field_ulonglong`
      function:
      https://github.com/MariaDB/server/blob/182ff21ace34ea4f00fb5b66689b172323d91f99/storage/perfschema/table_replication_applier_status.cc#L231-L233
      
          case 3: /* total number of times transactions were retried */
            set_field_ulonglong(f, m_row.count_transactions_retries);
            break;
      
      … but the structure where it is stored allocates only `ulong` for it:
      https://github.com/MariaDB/server/blob/182ff21ace34ea4f00fb5b66689b172323d91f99/storage/perfschema/table_replication_applier_status.h#L62
      
          ulong count_transactions_retries;
      
      As a result of this inconsistency:
      
      1. On any platform where `ulong` is `uint32_t` and `ulonglong` is `uint64_t`,
         setting this value would corrupt the 4 bytes of memory *following* the 4
         bytes actually allocated for it.
      
         Likely this problem was never noticed because this is the final element in
         the structure, and the structure is padded by the compiler to prevent
         memory corruption errors.
      
      2. On any BIG-ENDIAN platform where `ulong` is `uint32_t` and `ulonglong`
         is `uint64_t`, reading back the value of this column will result in
         total garbage.
      
         Likely this problem was never noticed because MariaDB has not been
         tested on 32-bit big-endian platforms.
      
      In order not to affect the user-visible/SQL definition of this column, the
      correct way to fix this issue is to change it to `ulonglong` in the
      structure definition.  See
      https://github.com/MariaDB/server/pull/2763/files#r1329110832 for the
      original identification and discussion of this issue.
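      
      The user-visible column itself is unchanged and can still be queried
      as before:
      
        SELECT COUNT_TRANSACTIONS_RETRIES
        FROM performance_schema.replication_applier_status;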
      
      All new code of the whole pull request, including one or several files
      that are either new files or modified ones, are contributed under the BSD-new
      license. I am contributing on behalf of my employer Amazon Web Services
    • Fix ninja build for cracklib_password_check · 68c0f6d8
      Robin Newhouse authored
      As was done in dc771113 for `support-files/CMakeLists.txt`,
      do not rely on the existence of the `CMakeFiles/${target}.dir`
      directory. It is not there for custom targets in a Ninja build.
      
      This regression was introduced in #1131, which likely copied the pattern
      from e79e8406 before that regression was addressed in dc771113.
      
      All new code of the whole pull request, including one or several files
      that are either new files or modified ones, are contributed under the
      BSD-new license. I am contributing on behalf of my employer Amazon Web
      Services.
  6. 07 Feb, 2024 5 commits
    • MDEV-33023 Crash in mariadb-backup --prepare --export after --prepare · fb9da7f7
      mariadb-DebarunBanerjee authored
      mariadb-backup with the --prepare option could result in an empty redo
      log file. When --prepare is followed by --prepare --export, we exit
      early in the srv_start function without opening the ibdata1 tablespace.
      Later, while trying to read the rollback segment header page, we hit the
      debug assert which claims that the system space should already have been
      opened.
      
      There are two assert cases here.
      
      Issue-1: System tablespace object is not there in fil space hash i.e.
      srv_sys_space.open_or_create() is not called.
      
      Issue-2: The system tablespace data file ibdata1 is not opened i.e.
      fil_system.sys_space->open() is not called.
      
      Fix: For empty redo log and restore operation, open system tablespace
      before returning.
    • MDEV-33004 innodb.cursor-restore-locking test fails · f5373db8
      Vlad Lesin authored
      THE FIX MUST NOT BE MERGED TO 10.6+, BECAUSE 10.6+ IS NOT AFFECTED!
      
      The test is waiting for delete-marked record purging. But this does not
      happen under the following conditions:
      
      1. "START TRANSACTION WITH CONSISTENT SNAPSHOT" - is active, has not
      been rolled back yet
      2. "DELETE FROM t WHERE b = 20 # trx_1" - is committed
      3. "INSERT INTO t VALUES(10, 20) # trx_2" - hanging on
      "ib_after_row_insert" sync point, waiting for "first_ins_cont" signal
      4. "DELETE FROM t WHERE b = 20 # trx_3" - blocked on delete-marked by
      trx_1 record, waiting for trx_2
      5. connection "default" is waiting on
      'now WAIT_FOR row_purge_del_mark_finished'
      
      purge_coordinator_callback_low() sets
      
      purge_state.m_history_length= srv_do_purge(&n_total_purged);
      
      even if nothing was purged, like in our case. Nothing was purged because
      transaction with consistent snapshot was still alive during purging
      procedure.
      
      Then purge_coordinator_timer_callback() does not wake purge thread if
      the following condition is true:
      
      purge_state.m_history_length == trx_sys.rseg_history_len
      
      The above condition is true for our case, because we are waiting for
      delete-marked record purging, and trx_sys.rseg_history_len does not
      grow.
      
      Only 10.5 is affected, because there is no such condition in 10.6, i.e.
      the purge thread is woken up even if the history size did not change
      while the purge coordinator thread was suspended.
      
      The easiest way to fix it is just to remove the test from 10.5.
    • MDEV-33341 innodb.undo_space_dblwr test case fails with Unknown Storage Engine InnoDB · c31b1ee2
      Thirunarayanan Balathandayuthapani authored
      - Failed to reset the innodb_fil_make_page_dirty_debug variable in
      innodb_saved_page_number_debug_basic test case.
    • Yuchen Pei's avatar
    • MDEV-33397: Innodb include OS error information when failing to write to iblogfileX · e06b159f
      Daniel Black authored
      Without the OS error it could be any one of the many errors from write.
  7. 06 Feb, 2024 4 commits
    • Oleksandr Byelkin's avatar
      8e731499
    • Oleksandr Byelkin's avatar
      8adc7599
    • bump the VERSION · ec7a80db
      Daniel Bartholomew authored
    • MDEV-18288 Transportable Tablespaces leave AUTO_INCREMENT in mismatched state,... · 66bb229e
      mariadb-DebarunBanerjee authored
      MDEV-18288 Transportable Tablespaces leave AUTO_INCREMENT in mismatched state, causing INSERT errors in newly imported tables when .cfg is not used.
      
      During import, if a cfg file is not specified, we don't update the
      autoinc field in the InnoDB dictionary object dict_table_t. The next
      insert tries to insert from the starting position of the auto increment
      and fails.
      
      It can be observed that the issue is resolved once the server is
      restarted, as the persistent value is read correctly from
      PAGE_ROOT_AUTO_INC in the index root page. The patch fixes the issue by
      reading the auto increment value directly from PAGE_ROOT_AUTO_INC during
      import if a cfg file is not specified.
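      
      Illustrative flow (the table name is made up; a single AUTO_INCREMENT
      column is assumed):
      
        ALTER TABLE t1 DISCARD TABLESPACE;
        -- copy t1.ibd into place, with no matching .cfg file
        ALTER TABLE t1 IMPORT TABLESPACE;
      
        -- before the fix, the cached AUTO_INCREMENT restarted from its initial
        -- value, so this insert could collide with rows already present in the
        -- imported tablespace:
        INSERT INTO t1 VALUES (NULL);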
      
      Test Fix:
      
      1. import_bugs.test: Embedded mode warning has absolute path. Regular
      expression replacement in test.
      
      2. full_crc32_import.test: Table level auto increment mismatch after
      import. It was using the auto increment data from the table prior to
      discard and import, which is not right. That cached auto increment value
      is higher than the actual inserted value and the value stored in
      PAGE_ROOT_AUTO_INC. Updated the result file and added validation for
      checking the maximum value of the auto increment column.
  8. 05 Feb, 2024 1 commit
    • Fix commit 179424db: No test file or result files should be executable · 3812e1c9
      Otto Kekäläinen authored
      In commit 179424db the file lowercase_table2.result was made executable
      for no known reason, most likely just a mistake. Test result files
      definitely should not be executable.
      
      All new code of the whole pull request, including one or several files
      that are either new files or modified ones, are contributed under the
      BSD-new license. I am contributing on behalf of my employer Amazon Web
      Services, Inc.
  9. 04 Feb, 2024 1 commit
    • MDEV-31361 Wrong result on 2nd execution of PS for query with derived table · 6fadbf8e
      Igor Babaev authored
      This bug led to wrong result sets returned by the second execution of
      prepared statements from selects using mergeable derived tables pushed
      into an external engine. Such derived tables are always materialized.
      The decision that they have to be materialized is taken late in the
      function mysql_derived_optimized(). For regular derived tables this
      decision is usually taken at the prepare phase. However, in some cases
      for some derived tables this decision is made in
      mysql_derived_optimized() too. It can be seen in the code of
      mysql_derived_fill() that for such a derived table it's critical to
      change its translation table to tune it to the fields of the temporary
      table used for materialization of the derived table, and this must be
      done after each refill of the derived table. The same actions are
      needed for derived tables pushed into external engines.
      
      Approved by Oleksandr Byelkin <sanja@mariadb.com>
  10. 02 Feb, 2024 3 commits
    • Make innodb_ext_key test stable: use innodb_stable_estimates.inc · 9d0b79c5
      Sergei Petrunia authored
      @@ -314,7 +314,7 @@
       select straight_join * from t0, part ignore index (primary)
       where p_partkey=t0.a and p_size=1;
       id	select_type	table	type	possible_keys	key	key_len	ref	rows	Extra
      -1	SIMPLE	t0	ALL	NULL	NULL	NULL	NULL	5	Using where
      +1	SIMPLE	t0	ALL	NULL	NULL	NULL	NULL	6	Using where
       1	SIMPLE	part	eq_ref	i_p_size	i_p_size	9	const,dbt3_s001.t0.a	1
    • MDEV-33314: Crash in calculate_cond_selectivity_for_table() with many columns · 5972f5c2
      Sergei Petrunia authored
      Variant#3: moved the logic out of create_key_parts_for_pseudo_indexes
      
      Range Analyzer (get_mm_tree functions) can only process up to MAX_KEY=64
      indexes. The problem was that calculate_cond_selectivity_for_table used
      it to estimate selectivities for columns, and since a table can
      have > MAX_KEY columns, would invoke Range Analyzer with more than MAX_KEY
      "pseudo-indexes".
      
      Fixed by making calculate_cond_selectivity_for_table() run Range
      Analyzer with at most MAX_KEY pseudo-indexes. If there are more
      columns to process, Range Analyzer will be invoked multiple times.
      
      Also made this change:
      -    param.real_keynr[0]= 0;
      +    MEM_UNDEFINED(&param.real_keynr, sizeof(param.real_keynr));
      
      Range Analyzer should have no use for real_keynr when it is run with
      pseudo-indexes.
    • MDEV-32893 mariadb-backup is not considering O/S user when --user option is omitted · 78662dda
      Alexander Barkov authored
      mariadb-backup:
      
      Adding a function get_os_user() to detect the OS user name
      if the user name is not specified, to make mariadb-backup:
      - work like MariaDB client tools work
      - match its --help page, which says:
      
        -u, --user=name This option specifies the username used when
        connecting to the server, if that's not the current user.
  11. 01 Feb, 2024 3 commits
  12. 31 Jan, 2024 6 commits
    • Merge branch '10.4' into 10.5 · 01f6abd1
      Sergei Golubchik authored
    • funcs_1.innodb_views times out in --ps · 46e3a765
      Sergei Golubchik authored
    • MDEV-25370 Update for portion changes autoincrement key in bi-temp table · 68c1fbfc
      Nikita Malyavin authored
      According to the standard, the autoincrement column (i.e. *identity
      column*) should be advanced for each insert implicitly made by
      UPDATE/DELETE ... FOR PORTION.
      
      This is very inconvenient in several notable cases. Consider a
      WITHOUT OVERLAPS key with an autoinc column:
      id int auto_increment, unique(id, p without overlaps)
      
      An update or delete with FOR PORTION creates a sense that id will remain
      unchanged in such a case.
      
      The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, however
      the generation rules differ in many ways. For example, there is also the
      notion of an autoincrement index, which is bound to the autoincrement
      field.
      
      We will define our own generation rule for the PORTION OF operations
      involving AUTO_INCREMENT:
      * If the autoincrement index contains a WITHOUT OVERLAPS specification,
      then a new value should not be generated, otherwise it should be.
      
      Apart from WITHOUT OVERLAPS there is also another notable case, referred
      to by the reporter - a unique key that has an autoincrement column and a
      field from the period specification:
        id int auto_increment, unique(id, s), period for p(s, e)
      
      For this case, no exception is made, and the autoincrement rules will
      proceed according to the standard (i.e. the value will be advanced on
      implicit inserts).
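      
      A sketch of the WITHOUT OVERLAPS case, based on the key definition
      quoted earlier (the column v and the date values are made up):
      
        CREATE TABLE t1 (
          id INT AUTO_INCREMENT,
          v INT,
          s DATE NOT NULL, e DATE NOT NULL,
          PERIOD FOR p (s, e),
          UNIQUE (id, p WITHOUT OVERLAPS)
        );
      
        -- the autoincrement index has WITHOUT OVERLAPS, so the implicit inserts
        -- made by this statement keep the same id instead of generating new ones:
        UPDATE t1 FOR PORTION OF p FROM '2024-01-10' TO '2024-01-20' SET v = 1;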
    • regression introduced by MDEV-14448 · e5147c81
      Sergei Golubchik authored
    • MDEV-33343 spider.mdev_28739_simple fails in buildbot · d1744ee7
      Sergei Golubchik authored
      test disabled, until fixed
    • MDEV-33341 innodb.undo_space_dblwr test case fails with Unknown Storage Engine InnoDB · 21f18bd9
      Thirunarayanan Balathandayuthapani authored
      Reason:
      ======
      The undo_space_dblwr test case fails if the first page of the undo
      tablespace is not flushed before restarting the server. While
      restarting the server, InnoDB fails to detect the first
      page of the undo tablespace from the doublewrite buffer.
      
      Fix:
      ===
      Use the "ib_log_checkpoint_avoid_hard" debug sync point
      to avoid a checkpoint, and make sure to flush the
      dirtied page before killing the server.
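      
      A hedged sketch of the debug instrumentation involved (the exact
      statements used by the .test file may differ):
      
        SET GLOBAL debug_dbug= '+d,ib_log_checkpoint_avoid_hard'; -- avoid checkpoints
        SET GLOBAL innodb_buf_flush_list_now= 1;                  -- flush the dirtied page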
      
      innodb_make_page_dirty(): Fails to set
      srv_fil_make_page_dirty_debug variable.