1. 04 Jul, 2021 1 commit
  2. 03 Jul, 2021 4 commits
    • fixup 0a67b15a · 789a2a36
      Marko Mäkelä authored
      trx_t::free(): Declare xid as fully initialized in order to
      avoid tripping the subsequent MEM_CHECK_DEFINED
      (in WITH_MSAN and WITH_VALGRIND builds).
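
      As a rough standalone illustration of the annotation pattern
      (simplified stand-ins for xid_t and for the MEM_* wrappers in
      MariaDB's my_valgrind.h; not the actual code):

      ```cpp
      #ifdef HAVE_valgrind
      # include <valgrind/memcheck.h>
      # define MEM_MAKE_DEFINED(p, n)  VALGRIND_MAKE_MEM_DEFINED(p, n)
      # define MEM_CHECK_DEFINED(p, n) VALGRIND_CHECK_MEM_IS_DEFINED(p, n)
      #else
      # define MEM_MAKE_DEFINED(p, n)  ((void) 0)
      # define MEM_CHECK_DEFINED(p, n) ((void) 0)
      #endif

      struct xid_t { long formatID; char data[128]; };

      struct trx_t
      {
        xid_t xid;  // may hold uninitialized bytes if the trx was not XA

        void free()
        {
          // Mark the XID defined first, so that the whole-object check
          // below does not report its legitimately uninitialized bytes.
          MEM_MAKE_DEFINED(&xid, sizeof xid);
          MEM_CHECK_DEFINED(this, sizeof *this);
          // ... return the object to the transaction pool ...
        }
      };
      ```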
    • Merge 10.5 into 10.6 · b797f217
      Marko Mäkelä authored
    • MDEV-26017 fixup · f0f47cbc
      Marko Mäkelä authored
      buf_flush_relocate_on_flush_list(): Use dpage->physical_size()
      because bpage->zip.ssize may already have been zeroed in
      page_zip_set_size() invoked by buf_pool_t::realloc().
      
      This would cause occasional failures of the test
      innodb.innodb_buffer_pool_resize, which creates a
      ROW_FORMAT=COMPRESSED table.
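
      A hedged sketch of why the source descriptor is unreliable here;
      the types and size constants are simplified stand-ins, not the
      InnoDB definitions:

      ```cpp
      #include <cstdint>

      struct page_zip_des_t { uint32_t ssize; };  // 0 = not compressed

      struct buf_page_t
      {
        page_zip_des_t zip;
        uint32_t physical_size() const
        { return zip.ssize ? 512U << zip.ssize : 16384U; }
      };

      // buf_pool_t::realloc() copies bpage into dpage and then clears
      // the source: page_zip_set_size(&bpage->zip, 0).
      void relocate_on_flush_list(const buf_page_t *bpage, buf_page_t *dpage)
      {
        // Wrong: bpage->zip.ssize may already be zeroed here, so this
        // would return the uncompressed page size for a
        // ROW_FORMAT=COMPRESSED page:
        //   const uint32_t size = bpage->physical_size();

        // Right: the destination descriptor still carries the real size.
        const uint32_t size = dpage->physical_size();
        (void) size;   // ... use it for flush-list accounting ...
        (void) bpage;
      }
      ```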
    • MDEV-26033: Race condition between buf_pool.page_hash and resize() · bd5a6403
      Marko Mäkelä authored
      The replacement of buf_pool.page_hash with a different type of
      hash table in commit 5155a300 (MDEV-22871)
      introduced a race condition with buffer pool resizing.
      
      We have an execution trace where buf_pool.page_hash.array is changed
      to point to something else while page_hash_latch::read_lock() is
      executing. The same should also affect page_hash_latch::write_lock().
      
      We fix the race condition by never resizing (and reallocating) the
      buf_pool.page_hash. We assume that resizing the buffer pool is
      a rare operation. Yes, there might be a performance regression if a
      server is first started up with a tiny buffer pool that is later
      enlarged: the undersized buf_pool.page_hash.array would lead to
      longer hash bucket chains. That problem can be worked around by
      initially starting up the server with a larger buffer pool and then
      shrinking it; buf_pool.page_hash keeps the size that was computed
      at startup, even when the buffer pool is later enlarged again.
      
      buf_pool_t::resize_hash(): Remove.
      
      buf_pool_t::page_hash_table::lock(): Do not attempt to deal with
      hash table resizing. If we really wanted that in a safe manner,
      we would probably have to introduce a global rw-lock around the
      operation, or at the very least, poll buf_pool.resizing, both of
      which would be detrimental to performance.
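
      A standalone model of the race that was fixed, with all types
      reduced to stand-ins (this is only the shape of the hazard, not
      the actual code):

      ```cpp
      #include <atomic>
      #include <cstddef>
      #include <cstdint>

      struct hash_cell_t { /* latch and bucket list head */ };

      struct page_hash_table
      {
        std::atomic<hash_cell_t*> array;  // old code replaced this in resize()
        std::size_t n_cells;

        hash_cell_t *cell_get(uint64_t fold) const
        {
          hash_cell_t *cells = array.load(std::memory_order_acquire);
          // If a concurrent resize() frees `cells` and installs a new
          // array right here, the caller will latch and dereference
          // freed memory.
          return &cells[fold % n_cells];
        }
      };

      // The fix: allocate the array once at startup and never
      // reallocate it, so the pointer loaded above stays valid.
      ```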
  3. 02 Jul, 2021 29 commits
  4. 01 Jul, 2021 6 commits
    • MDEV-26052 Assertion prebuilt->trx_id < table->def_trx_id failed · 315380a4
      Marko Mäkelä authored
      ha_innobase::truncate(): If the operation fails, also preserve
      dict_table_t::def_trx_id.
      
      This fixes a regression that had been introduced in
      commit 1bd681c8 (MDEV-25506).
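
      A minimal sketch of the idea, assuming a hypothetical
      truncate_low() helper in place of the actual rebuild steps:

      ```cpp
      #include <cstdint>

      using trx_id_t = uint64_t;
      struct dict_table_t { trx_id_t def_trx_id; };

      // Hypothetical stand-in for the rebuild: pretend the attempt
      // bumped def_trx_id and then failed (nonzero = failure).
      static int truncate_low(dict_table_t *t) { ++t->def_trx_id; return 1; }

      int truncate(dict_table_t *table)
      {
        const trx_id_t def_trx_id = table->def_trx_id;  // save beforehand
        const int err = truncate_low(table);
        if (err)
          // A failed TRUNCATE must not advance the table definition;
          // restoring the saved value keeps later comparisons against
          // def_trx_id consistent.
          table->def_trx_id = def_trx_id;
        return err;
      }
      ```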
    • MDEV-25919 preparation: Remove trx_t::internal · ed6b2307
      Marko Mäkelä authored
      With commit 1bd681c8 (MDEV-25506)
      it is no longer necessary to run DDL and DML operations in
      separate transactions. Let us remove the flag trx_t::internal.
      Dictionary transactions will be distinguished by trx_t::dict_operation.
    • Cleanup: Remove pointer indirection for trx_t::xid · 0a67b15a
      Marko Mäkelä authored
      The trx_t::xid is always allocated, so we might as well allocate it
      directly in the trx_t object to improve the locality of reference.
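
      The layout change in a nutshell (xid_t reduced to its essentials;
      both struct names are illustrative):

      ```cpp
      // 128 matches the XA specification's maximum combined
      // gtrid/bqual length.
      struct xid_t { long formatID, gtrid_length, bqual_length; char data[128]; };

      struct trx_with_pointer { xid_t *xid; };  // before: separate allocation
      struct trx_with_member  { xid_t  xid; };  // after: inline, better locality
      ```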
    • MDEV-24671 fixup: Fix an off-by-one error · 83234719
      Marko Mäkelä authored
      In commit e71e6133 we
      accidentally made innodb_lock_wait_timeout=100000000
      a "literal" value, not the smallest special value that
      would mean "infinite" timeout.
    • MDEV-25902 Unexpected ER_LOCK_WAIT_TIMEOUT and result · 161e4bfa
      Marko Mäkelä authored
      trans_rollback_to_savepoint(): Only release metadata locks (MDL)
      if the storage engines agree, after the changes have been rolled back.
      
      Ever since commit 3792693f
      and mysql/mysql-server@55ceedbc3feb911505dcba6cee8080d55ce86dda
      we have been cheating here, always releasing MDL if the binlog is
      disabled.
      
      MDL are supposed to prevent race conditions between DML and DDL also
      when no replication is in use. MDL are supposed to be a superset of
      InnoDB table locks: an InnoDB table lock may only exist if the thread
      also holds MDL on the table name.
      
      In the included test case, ROLLBACK TO SAVEPOINT would wrongly release
      the MDL on both tables and let ALTER TABLE proceed, even though the DML
      transaction is actually holding locks on the table.
      
      Until commit 1bd681c8 (MDEV-25506)
      InnoDB worked around the locking violation in a blatantly non-ACID way:
      If locks exist on a table that is being dropped (in this case, actually
      a partition of a table that is being rebuilt by ALTER TABLE), InnoDB
      would move the table (or partition) into a queue, to be dropped after
      the locks and references had been released.
      
      The scenario of commit 3792693f
      is unaffected by this fix, because mariadb-dump (a.k.a. mysqldump)
      would use non-locking reads, and the transaction would not be holding
      any InnoDB locks during the execution of ROLLBACK TO SAVEPOINT.
      MVCC reads inside InnoDB are only covered by MDL and page latches,
      not by any table or record locks.
      
      FIXME: It would be nice if storage engines were specifically asked
      which MDL can be released, instead of only offering a choice
      between all or nothing. InnoDB should be able to release any
      locks for tables that are no longer in trx_t::mod_tables, except
      if another transaction had converted some implicit record locks
      to explicit ones, before the ROLLBACK TO SAVEPOINT had been completed.
      
      Reviewed by: Sergei Golubchik
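
      A standalone model of the all-or-nothing choice described in the
      FIXME above; every name here is hypothetical, and this is not the
      server's actual handlerton interface:

      ```cpp
      #include <vector>

      struct engine_state { bool holds_locks; };

      bool can_release_mdl(const std::vector<engine_state> &engines)
      {
        for (const engine_state &e : engines)
          if (e.holds_locks)  // e.g. InnoDB still holding record/table locks
            return false;     // one veto keeps all metadata locks
        return true;          // all engines agree: releasing MDL is safe
      }
      ```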
    • MDEV-26067 innodb_lock_wait_timeout values above 100,000,000 are useless · 8c5c3a45
      Marko Mäkelä authored
      The practical maximum value of the parameter innodb_lock_wait_timeout
      is 100,000,000. Any value larger than that specifies an infinite timeout.
      
      Therefore, we should make 100,000,000 the maximum value of the parameter.
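
      A standalone sketch of the effect of the cap (names illustrative):

      ```cpp
      #include <algorithm>
      #include <cstdint>

      constexpr uint32_t TIMEOUT_MAX = 100000000;  // maximum == infinite

      // With the cap in place, any requested value above the maximum
      // collapses to the maximum itself, instead of being stored as a
      // distinct number that behaves identically.
      uint32_t clamp_lock_wait_timeout(uint64_t requested)
      {
        return static_cast<uint32_t>(std::min<uint64_t>(requested, TIMEOUT_MAX));
      }
      ```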