1. 06 Jun, 2020 3 commits
  2. 05 Jun, 2020 37 commits
    • Marko Mäkelä's avatar
      Merge 10.4 into 10.5 · 6877ef9a
      Marko Mäkelä authored
      6877ef9a
    • Vladislav Vaintroub's avatar
      Merge branch '10.3' into 10.4 · 9d479e25
      Vladislav Vaintroub authored
      9d479e25
    • Vladislav Vaintroub's avatar
      Fix appveyor build. · 15cdcb2a
      Vladislav Vaintroub authored
      15cdcb2a
    • Marko Mäkelä's avatar
      Merge 10.3 into 10.4 · 68d9d512
      Marko Mäkelä authored
      68d9d512
    • Marko Mäkelä's avatar
      286e52e9
    • Vladislav Vaintroub's avatar
      Reduce CPU usage in srv_purge_shutdown. · d642c5b8
      Vladislav Vaintroub authored
      Polling for srv_purge_should_exit() once every millisecond is enough.
      d642c5b8
    • Marko Mäkelä's avatar
      Merge 10.2 into 10.3 · 680463a8
      Marko Mäkelä authored
      680463a8
    • Thirunarayanan Balathandayuthapani's avatar
      MDEV-21282 Assertion 'mariadb_table' failed in gcol.innodb_virtual_debug_purge · de1dbb71
      Thirunarayanan Balathandayuthapani authored
      Commit ea37b144 (MDEV-16678) caused a regression: when the purge
      thread tries to open the table for virtual column computation, there
      is no need to acquire MDL for the table, because the purge thread
      already holds MDL for the table.
      de1dbb71
    • Marko Mäkelä's avatar
      MDEV-22769 Shutdown hang or crash due to XA breaking locks · efc70da5
      Marko Mäkelä authored
      The background drop table queue in InnoDB is a work-around for
      cases where the SQL layer is requesting DDL on tables on which
      transactional locks exist.
      
      One such case are XA transactions. Our test case exploits the
      fact that the recovery of XA PREPARE transactions will
      only resurrect InnoDB table locks, but not MDL that should
      block any concurrent DDL.
      
      srv_shutdown_t: Introduce the srv_shutdown_state=SRV_SHUTDOWN_INITIATED
      for the initial part of shutdown, to wait for the background drop
      table queue to be emptied.
      
      srv_shutdown_bg_undo_sources(): Assign
      srv_shutdown_state=SRV_SHUTDOWN_INITIATED
      before waiting for the background drop table queue to be emptied.
      
      row_drop_tables_for_mysql_in_background(): On slow shutdown, if
      no active transactions exist (excluding ones that are in
      XA PREPARE state), skip any tables on which locks exist.
      
      row_drop_table_for_mysql(): Do not unnecessarily attempt to
      drop InnoDB persistent statistics for tables that have
      already been added to the background drop table queue.
      
      row_mysql_close(): Relax an assertion, and free all memory
      even if innodb_force_recovery=2 would prevent the background
      drop table queue from being emptied.
      efc70da5
    • Marko Mäkelä's avatar
      MDEV-22790 Race between btr_page_mtr_lock() dropping AHI on the same block · 138c11cc
      Marko Mäkelä authored
      This race condition was introduced by
      commit ad6171b9 (MDEV-22456).
      
      In the observed case, two threads were executing
      btr_search_drop_page_hash_index() on the same block,
      to free a stale entry that was attached to a dropped index.
      Both threads were holding an S latch on the block.
      
      We must prevent the double-free of block->index by holding
      block->lock in exclusive mode.
      
      btr_search_guess_on_hash(): Do not invoke
      btr_search_drop_page_hash_index(block) to get rid of
      stale entries, because we are not necessarily holding
      an exclusive block->lock here.
      
      buf_defer_drop_ahi(): New function, to safely drop stale
      entries in buf_page_mtr_lock(). We will skip the call to
      btr_search_drop_page_hash_index(block) when only requesting
      bufferfixing (no page latch), because in that case, we should
      not be accessing the adaptive hash index, and we might get
      a deadlock if we acquired the page latch.
      138c11cc
    • Marko Mäkelä's avatar
      MDEV-22646: Fix a memory leak · 3677dd5c
      Marko Mäkelä authored
      btr_search_sys_free(): Free btr_search_sys->hash_tables.
      
      The leak was introduced in commit ad2bf112.
      3677dd5c
    • Vladislav Vaintroub's avatar
      Windows, build tweak. · 1828196f
      Vladislav Vaintroub authored
      Add targets for building the "noinstall" zip and the debuginfo zip.
      1828196f
    • Kentoku SHIBA's avatar
      MENT-787 Server from bb-10.5-MENT-30 crashes upon Spider installation · b3250ab3
      Kentoku SHIBA authored
      It looks like a buffer overflow of spider_unique_id_buf. It needs analysis in a reproducing environment, but I am extending the buffer first.
      b3250ab3
    • Kentoku SHIBA's avatar
      Change Spider's plugin maturity to BETA · 0b7fe26e
      Kentoku SHIBA authored
      0b7fe26e
    • Kentoku SHIBA's avatar
      Fix issue caused by using spider_bgs_mode = 2 when Spider uses limit_mode = 1... · a756d547
      Kentoku SHIBA authored
      Fix an issue caused by using spider_bgs_mode = 2 when Spider internally uses limit_mode = 1 for data nodes.
      a756d547
    • Kentoku SHIBA's avatar
      bbb1140d
    • Kentoku SHIBA's avatar
      fix evaluating bitmap issue in spider · f16633c1
      Kentoku SHIBA authored
      f16633c1
    • Kentoku SHIBA's avatar
      add a table parameter "dsn" to Spider · 793b84b8
      Kentoku SHIBA authored
      793b84b8
    • Kentoku SHIBA's avatar
      fix build errors on windows environments · 932baa94
      Kentoku SHIBA authored
      932baa94
    • Kentoku SHIBA's avatar
      6c3180be
    • Kentoku SHIBA's avatar
      prepare for adding new connectors for Spider · 94861b83
      Kentoku SHIBA authored
      Conflicts:
      	storage/spider/spd_conn.cc
      94861b83
    • Kentoku SHIBA's avatar
      MDEV-6268 SPIDER table with no COMMENT clause causes queries to wait forever · 23c8adda
      Kentoku SHIBA authored
      Add looping check
      
      Conflicts:
      	sql/table.h
      23c8adda
    • Kentoku SHIBA's avatar
      272625d9
    • Kentoku SHIBA's avatar
      c34deb4c
    • Kentoku SHIBA's avatar
      fix divided lock table issue of Spider · d3a6ed05
      Kentoku SHIBA authored
      d3a6ed05
    • Kentoku SHIBA's avatar
      use ifdef for unused attributes for Spider · 418f1611
      Kentoku SHIBA authored
      418f1611
    • Kentoku SHIBA's avatar
      MDEV-19002 Spider performance optimization with partition · e954d9de
      Kentoku SHIBA authored
      Change the following functions to make one batched call instead of one call per partition:
      - store_lock
      - external_lock
      - start_stmt
      - extra
      - cond_push
      - info_push
      - top_table
      e954d9de
    • Nikita Malyavin's avatar
      MDEV-22753 Server crashes upon INSERT into versioned partitioned table with WITHOUT OVERLAPS · 8e6e5ace
      Nikita Malyavin authored
      Add `append_system_key_parts` call inside `fast_alter_partition_table` during new partition creation.
      8e6e5ace
    • Nikita Malyavin's avatar
      MDEV-22599 WITHOUT OVERLAPS does not work with prefix indexes · 35d327fd
      Nikita Malyavin authored
      cmp_max is used instead of cmp to compare key_parts
      35d327fd
    • Nikita Malyavin's avatar
      MDEV-22434 UPDATE on RocksDB table with WITHOUT OVERLAPS fails · 0c595bde
      Nikita Malyavin authored
      INSERT worked incorrectly as well. RocksDB used table->record[0]
      internally to store intermediate results for key conversion, during
      index searches among other operations. Thus table->record[0] was
      clobbered during ha_rnd_index_map in ha_check_overlaps, and the
      broken record data was then inserted.
      
      The fix is to store RocksDB's intermediate result in its own buffer
      instead of table->record[0].
      
      The `rocksdb` MTR suite was checked and runs fine.
      No additional tests are needed; the existing overlaps.test covers the
      case completely. However, nothing RocksDB-related is added to the
      suite, to keep it free of additional dependencies.
      
      To run tests with RocksDB engine, one can add following to engines.combinations:
      [rocksdb]
      plugin-load=$HA_ROCKSDB_SO
      default-storage-engine=rocksdb
      0c595bde
    • Marko Mäkelä's avatar
      MDEV-15053 Reduce buf_pool_t::mutex contention · b1ab211d
      Marko Mäkelä authored
      User-visible changes: The INFORMATION_SCHEMA views INNODB_BUFFER_PAGE
      and INNODB_BUFFER_PAGE_LRU will report a dummy value FLUSH_TYPE=0
      and will no longer report the PAGE_STATE value READY_FOR_USE.
      
      We will remove some fields from buf_page_t and move much code to
      member functions of buf_pool_t and buf_page_t, so that the access
      rules of data members can be enforced consistently.
      
      Evicting or adding pages in buf_pool.LRU will remain covered by
      buf_pool.mutex.
      
      Evicting or adding pages in buf_pool.page_hash will remain
      covered by both buf_pool.mutex and the buf_pool.page_hash X-latch.
      
      After this fix, buf_pool.page_hash lookups can entirely
      avoid acquiring buf_pool.mutex, only relying on
      buf_pool.hash_lock_get() S-latch.
      
      Similarly, buf_flush_check_neighbors() will rely solely on
      buf_pool.mutex, with no buf_pool.page_hash latch at all.
      
      The buf_pool.mutex is rather contended in I/O heavy benchmarks,
      especially when the workload does not fit in the buffer pool.
      
      The first attempt to alleviate the contention was the
      buf_pool_t::mutex split in
      commit 4ed7082e
      which introduced buf_block_t::mutex, which we are now removing.
      
      Later, multiple instances of buf_pool_t were introduced
      in commit c18084f7
      and recently removed by us in
      commit 1a6f708e (MDEV-15058).
      
      UNIV_BUF_DEBUG: Remove. This option to enable some buffer pool
      related debugging in otherwise non-debug builds has not been used
      for years. Instead, we have been using UNIV_DEBUG, which is enabled
      in CMAKE_BUILD_TYPE=Debug.
      
      buf_block_t::mutex, buf_pool_t::zip_mutex: Remove. We can mainly rely on
      std::atomic and the buf_pool.page_hash latches, and in some cases
      depend on buf_pool.mutex or buf_pool.flush_list_mutex just like before.
      We must always release buf_block_t::lock before invoking
      unfix() or io_unfix(), to prevent a glitch where a block that was
      added to the buf_pool.free list would appear X-latched. See
      commit c5883deb how this glitch
      was finally caught in a debug environment.
      
      We move some buf_pool_t::page_hash specific code from the
      ha and hash modules to buf_pool, for improved readability.
      
      buf_pool_t::close(): Assert that all blocks are clean, except
      on aborted startup or crash-like shutdown.
      
      buf_pool_t::validate(): No longer attempt to validate
      n_flush[] against the number of BUF_IO_WRITE fixed blocks,
      because buf_page_t::flush_type no longer exists.
      
      buf_pool_t::watch_set(): Replaces buf_pool_watch_set().
      Reduce mutex contention by separating the buf_pool.watch[]
      allocation and the insert into buf_pool.page_hash.
      
      buf_pool_t::page_hash_lock<bool exclusive>(): Acquire a
      buf_pool.page_hash latch.
      Replaces and extends buf_page_hash_lock_s_confirm()
      and buf_page_hash_lock_x_confirm().
      
      buf_pool_t::READ_AHEAD_PAGES: Renamed from BUF_READ_AHEAD_PAGES.
      
      buf_pool_t::curr_size, old_size, read_ahead_area, n_pend_reads:
      Use Atomic_counter.
      
      buf_pool_t::running_out(): Replaces buf_LRU_buf_pool_running_out().
      
      buf_pool_t::LRU_remove(): Remove a block from the LRU list
      and return its predecessor. Incorporates buf_LRU_adjust_hp(),
      which was removed.
      
      buf_page_get_gen(): Remove a redundant call of fsp_is_system_temporary(),
      for mode == BUF_GET_IF_IN_POOL_OR_WATCH, which is only used by
      BTR_DELETE_OP (purge), which is never invoked on temporary tables.
      
      buf_free_from_unzip_LRU_list_batch(): Avoid redundant assignments.
      
      buf_LRU_free_from_unzip_LRU_list(): Simplify the loop condition.
      
      buf_LRU_free_page(): Clarify the function comment.
      
      buf_flush_check_neighbor(), buf_flush_check_neighbors():
      Rewrite the construction of the page hash range. We will hold
      the buf_pool.mutex for up to buf_pool.read_ahead_area (at most 64)
      consecutive lookups of buf_pool.page_hash.
      
      buf_flush_page_and_try_neighbors(): Remove.
      Merge to its only callers, and remove redundant operations in
      buf_flush_LRU_list_batch().
      
      buf_read_ahead_random(), buf_read_ahead_linear(): Rewrite.
      Do not acquire buf_pool.mutex, and iterate directly with page_id_t.
      
      ut_2_power_up(): Remove. my_round_up_to_next_power() is inlined
      and avoids any loops.
      
      fil_page_get_prev(), fil_page_get_next(), fil_addr_is_null(): Remove.
      
      buf_flush_page(): Add a fil_space_t* parameter. Minimize the
      buf_pool.mutex hold time. buf_pool.n_flush[] is no longer updated
      atomically with the io_fix, and we will protect most buf_block_t
      fields with buf_block_t::lock. The function
      buf_flush_write_block_low() is removed and merged here.
      
      buf_page_init_for_read(): Use static linkage. Initialize the newly
      allocated block and acquire the exclusive buf_block_t::lock while not
      holding any mutex.
      
      IORequest::IORequest(): Remove the body. We only need to invoke
      set_punch_hole() in buf_flush_page() and nowhere else.
      
      buf_page_t::flush_type: Remove. Replaced by IORequest::flush_type.
      This field is only used during a fil_io() call.
      That function already takes IORequest as a parameter, so we had
      better carry the rarely changing field there.
      
      buf_block_t::init(): Replaces buf_page_init().
      
      buf_page_t::init(): Replaces buf_page_init_low().
      
      buf_block_t::initialise(): Initialise many fields, but
      keep the buf_page_t::state(). Both buf_pool_t::validate() and
      buf_page_optimistic_get() require that buf_page_t::in_file()
      be protected atomically with buf_page_t::in_page_hash
      and buf_page_t::in_LRU_list.
      
      buf_page_optimistic_get(): Now that buf_block_t::mutex
      no longer exists, we must check buf_page_t::io_fix()
      after acquiring the buf_pool.page_hash lock, to detect
      whether buf_page_init_for_read() has been initiated.
      We will also check the io_fix() before acquiring hash_lock
      in order to avoid unnecessary computation.
      The field buf_block_t::modify_clock (protected by buf_block_t::lock)
      allows buf_page_optimistic_get() to validate the block.
      
      buf_page_t::real_size: Remove. It was only used while flushing
      pages of page_compressed tables.
      
      buf_page_encrypt(): Add an output parameter that allows us to eliminate
      buf_page_t::real_size. Replace a condition with debug assertion.
      
      buf_page_should_punch_hole(): Remove.
      
      buf_dblwr_t::add_to_batch(): Replaces buf_dblwr_add_to_batch().
      Add the parameter size (to replace buf_page_t::real_size).
      
      buf_dblwr_t::write_single_page(): Replaces buf_dblwr_write_single_page().
      Add the parameter size (to replace buf_page_t::real_size).
      
      fil_system_t::detach(): Replaces fil_space_detach().
      Ensure that fil_validate() will not be violated even if
      fil_system.mutex is released and reacquired.
      
      fil_node_t::complete_io(): Renamed from fil_node_complete_io().
      
      fil_node_t::close_to_free(): Replaces fil_node_close_to_free().
      Avoid invoking fil_node_t::close() because fil_system.n_open
      has already been decremented in fil_space_t::detach().
      
      BUF_BLOCK_READY_FOR_USE: Remove. Directly use BUF_BLOCK_MEMORY.
      
      BUF_BLOCK_ZIP_DIRTY: Remove. Directly use BUF_BLOCK_ZIP_PAGE,
      and distinguish dirty pages by buf_page_t::oldest_modification().
      
      BUF_BLOCK_POOL_WATCH: Remove. Use BUF_BLOCK_NOT_USED instead.
      This state was only being used for buf_page_t that are in
      buf_pool.watch.
      
      buf_pool_t::watch[]: Remove pointer indirection.
      
      buf_page_t::in_flush_list: Remove. It was set if and only if
      buf_page_t::oldest_modification() is nonzero.
      
      buf_page_decrypt_after_read(), buf_corrupt_page_release(),
      buf_page_check_corrupt(): Change the const fil_space_t* parameter
      to const fil_node_t& so that we can report the correct file name.
      
      buf_page_monitor(): Declare as an ATTRIBUTE_COLD global function.
      
      buf_page_io_complete(): Split to buf_page_read_complete() and
      buf_page_write_complete().
      
      buf_dblwr_t::in_use: Remove.
      
      buf_dblwr_t::buf_block_array: Add IORequest::flush_t.
      
      buf_dblwr_sync_datafiles(): Remove. It was a useless wrapper of
      os_aio_wait_until_no_pending_writes().
      
      buf_flush_write_complete(): Declare static, not global.
      Add the parameter IORequest::flush_t.
      
      buf_flush_freed_page(): Simplify the code.
      
      recv_sys_t::flush_lru: Renamed from flush_type and changed to bool.
      
      fil_read(), fil_write(): Replaced with direct use of fil_io().
      
      fil_buffering_disabled(): Remove. Check srv_file_flush_method directly.
      
      fil_mutex_enter_and_prepare_for_io(): Return the resolved
      fil_space_t* to avoid a duplicated lookup in the caller.
      
      fil_report_invalid_page_access(): Clean up the parameters.
      
      fil_io(): Return fil_io_t, which comprises fil_node_t and error code.
      Always invoke fil_space_t::acquire_for_io() and let either the
      sync=true caller or fil_aio_callback() invoke
      fil_space_t::release_for_io().
      
      fil_aio_callback(): Rewrite to replace buf_page_io_complete().
      
      fil_check_pending_operations(): Remove a parameter, and remove some
      redundant lookups.
      
      fil_node_close_to_free(): Wait for n_pending==0. Because we no longer
      do an extra lookup of the tablespace between fil_io() and the
      completion of the operation, we must give fil_node_t::complete_io() a
      chance to decrement the counter.
      
      fil_close_tablespace(): Remove unused parameter trx, and document
      that this is only invoked during the error handling of IMPORT TABLESPACE.
      
      row_import_discard_changes(): Merged with the only caller,
      row_import_cleanup(). Do not lock up the data dictionary while
      invoking fil_close_tablespace().
      
      logs_empty_and_mark_files_at_shutdown(): Do not invoke
      fil_close_all_files(), to avoid a !needs_flush assertion failure
      on fil_node_t::close().
      
      innodb_shutdown(): Invoke os_aio_free() before fil_close_all_files().
      
      fil_close_all_files(): Invoke fil_flush_file_spaces()
      to ensure proper durability.
      
      thread_pool::unbind(): Fix a crash that would occur on Windows
      after srv_thread_pool->disable_aio() and os_file_close().
      This fix was submitted by Vladislav Vaintroub.
      
      Thanks to Matthias Leich and Axel Schwenke for extensive testing,
      Vladislav Vaintroub for helpful comments, and Eugene Kosov for a review.
      b1ab211d