- 05 Nov, 2021 1 commit
-
-
Marko Mäkelä authored
In commit c091a0bc we removed the use of the HASH_ macros for inserting into buf_pool.page_hash, or accessing buf_page_t::hash. However, the binary buddy allocator for block->page.zip.data would still use the HASH_ macros. HASH_INSERT(), but not HASH_DELETE(), would reset the next-block pointer to the null pointer. Our replacement of HASH_DELETE() will reset the next-block pointer, and the replacement of HASH_INSERT() assumes that the pointer is the null pointer.

buf_LRU_block_free_non_file_page(): Assert that the next-block pointer is the null pointer.

buf_buddy_block_free(): Reset the pointer before invoking buf_LRU_block_free_non_file_page(). Without this, the added assertion would fail in the test encryption.innochecksum.
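A minimal, self-contained sketch of the invariant described above (the types and names are simplified stand-ins, not the actual MariaDB declarations): the buddy allocator clears the next-block pointer before handing the block back, and the free routine asserts that the pointer is null.

```cpp
#include <cassert>

struct buf_page_t { buf_page_t *hash= nullptr; /* next-block pointer */ };
struct buf_block_t { buf_page_t page; };

static void buf_LRU_block_free_non_file_page(buf_block_t *block)
{
  assert(block->page.hash == nullptr);   // the assertion added by the fix
  // ... return the block to the free list ...
}

static void buf_buddy_block_free(buf_block_t *block)
{
  block->page.hash= nullptr;             // reset the pointer before freeing (the fix)
  buf_LRU_block_free_non_file_page(block);
}

int main()
{
  buf_block_t b;
  b.page.hash= &b.page;                  // simulate a stale next-block pointer
  buf_buddy_block_free(&b);              // passes the assertion only because of the reset
}
```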
-
- 04 Nov, 2021 3 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
The old ftp.pcre.org is apparently down; www.pcre.org says to use GitHub as the primary download location.
-
Sergei Golubchik authored
-
- 03 Nov, 2021 4 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
- 02 Nov, 2021 12 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
This is a follow-up to commit 1193a793. We will set innodb_use_native_aio=OFF by default also in mariadb-backup when running on a potentially affected kernel.
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Jan Lindström authored
* Fix error handling of a NULL-pointer reference
* Add an mtr suppression to the galera_ssl_upgrade test
-
Jan Lindström authored
* Fix error handling of a NULL-pointer reference
* Add an mtr suppression to the galera_ssl_upgrade test
-
Jan Lindström authored
* Fix error handling of a NULL-pointer reference
* Add an mtr suppression to the galera_ssl_upgrade test
-
Jan Lindström authored
Use a better error message when KILL fails, even in the case where TOI fails.
-
- 01 Nov, 2021 1 commit
-
-
Jan Lindström authored
* Fix error handling of a NULL-pointer reference
* Add an mtr suppression to the galera_ssl_upgrade test
-
- 30 Oct, 2021 2 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
- 29 Oct, 2021 17 commits
-
-
sjaakola authored
Mutex order violation when a wsrep BF thread kills a conflicting trx. The stack is

    wsrep_thd_LOCK()
    wsrep_kill_victim()
    lock_rec_other_has_conflicting()
    lock_clust_rec_read_check_and_lock()
    row_search_mvcc()
    ha_innobase::index_read()
    ha_innobase::rnd_pos()
    handler::ha_rnd_pos()
    handler::rnd_pos_by_record()
    handler::ha_rnd_pos_by_record()
    Rows_log_event::find_row()
    Update_rows_log_event::do_exec_row()
    Rows_log_event::do_apply_event()
    Log_event::apply_event()
    wsrep_apply_events()

and the mutexes are taken in the order

    lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

    innobase_kill_query()
    kill_handlerton()
    plugin_foreach_with_mask()
    ha_kill_query()
    THD::awake()
    kill_one_thread()

and the mutexes are taken in the order

    victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking order violation exercised by BF aborting and KILL command execution. In this approach, the KILL command is replicated as a TOI operation. This guarantees total isolation for the KILL command execution in the first node: there is no concurrent replication applying and no concurrent DDL executing. Therefore there is also no risk of BF aborting happening in parallel with KILL command execution. Potential mutex deadlocks between the different mutex access paths of KILL command execution and BF aborting therefore cannot happen.

TOI replication is used in this approach purely as a means to provide isolated KILL command execution on the first node. The KILL command should not (and must not) be applied on secondary nodes. In this patch we ensure this by skipping KILL execution on secondary nodes in the applying phase, where we bail out if an applier thread is trying to execute a KILL command. This is effective, but skipping the application of the KILL command could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort, which uses wsrep_abort_transaction; this is fixed by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
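A self-contained toy sketch of the bail-out described above (the thread classification and function names are illustrative, not the actual server API): a replicated KILL is executed only on the originating node, while applier threads on secondary nodes skip it.

```cpp
#include <iostream>

enum class thd_kind { local_client, wsrep_applier };

// Hypothetical stand-in for the real "is this an applier thread?" check.
static bool is_wsrep_applier(thd_kind k) { return k == thd_kind::wsrep_applier; }

static bool execute_kill(thd_kind executing_thread)
{
  if (is_wsrep_applier(executing_thread))
  {
    // Secondary node, applying phase: the KILL was replicated only to get
    // total-order isolation on the first node, so do nothing here.
    return false;
  }
  // First node: the KILL runs under TOI, so no BF abort can run concurrently.
  std::cout << "killing victim thread\n";
  return true;
}

int main()
{
  execute_kill(thd_kind::local_client);   // executes the KILL
  execute_kill(thd_kind::wsrep_applier);  // bails out
}
```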
-
Jan Lindström authored
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution" This reverts commit eac8341d.
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
row_undo_mod_clust_low(): If we are in recovery and rolling back a DELETE operation on the SYS_INDEXES table, and the SYS_INDEXES.NAME starts with the magic byte 0xff that identifies uncommitted ADD INDEX stubs, we must not try to evict the table definition because such index stubs would be skipped by dict_load_indexes() anyway.
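A toy illustration of the check described above (the helper name is hypothetical; the 0xff marker byte is taken from the commit message): a SYS_INDEXES.NAME value beginning with 0xff denotes an uncommitted ADD INDEX stub, for which eviction of the table definition must be skipped.

```cpp
#include <iostream>

// Hypothetical helper: the real code inspects the first byte of SYS_INDEXES.NAME.
static bool is_uncommitted_index_stub(const char *name)
{
  return name != nullptr && static_cast<unsigned char>(name[0]) == 0xff;
}

int main()
{
  char committed[]   = "PRIMARY";
  char uncommitted[] = "?idx_pending";
  uncommitted[0] = static_cast<char>(0xff);   // 0xff marker: uncommitted ADD INDEX stub

  std::cout << is_uncommitted_index_stub(committed)   << '\n';  // 0: normal index
  std::cout << is_uncommitted_index_stub(uncommitted) << '\n';  // 1: stub, do not evict the table definition
}
```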
-
Marko Mäkelä authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
MDEV-25683 Atomic DDL: With innodb_force_recovery=3 InnoDB: Trying to load index but the index tree has been freed

The purpose of the parameter innodb_force_recovery is to allow some data to be dumped from a corrupted database. Its values used to be as follows:

innodb_force_recovery=0: normal (default)
innodb_force_recovery=1: ignore (skip log for) corrupted pages or missing data files when applying the redo log
innodb_force_recovery=2: additionally, disable background tasks (such as the purge of committed undo logs)
innodb_force_recovery=3: additionally, disable the rollback of recovered incomplete (not committed or XA PREPARE) transactions
innodb_force_recovery=4: same as 3 (since MDEV-19514 in MariaDB 10.5)
innodb_force_recovery=5: additionally, do not process any undo log, disallow any writes, and force READ UNCOMMITTED isolation level
innodb_force_recovery=6: additionally, pretend that ib_logfile0 does not exist (prevent any recovery). Never use this!

The problem with innodb_force_recovery=3 and innodb_force_recovery=4 is that the rollback of any recovered DDL transaction will also be skipped. This would break the DDL log recovery that was introduced in MDEV-17567. For one data directory sample, the DDL log recovery would hang due to a conflict on the InnoDB SYS_TABLES table, because the lock holder transaction was not rolled back due to innodb_force_recovery=3.

Fix: Make innodb_force_recovery=3 skip only the rollback of DML transactions, and make innodb_force_recovery=4 (renamed to SRV_FORCE_NO_DDL_UNDO) behave like innodb_force_recovery=3 used to (skip the rollback of all recovered transactions, both DML and DDL). Startup with innodb_force_recovery=4 will be unaffected by this change. (There may be hangs, possibly preceded by messages about failing to load an index.)

Side note: With innodb_force_recovery=5, any DDL log for InnoDB tables will be essentially ignored by InnoDB, but the server will start up.
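A toy sketch of the changed decision (the helper and its arguments are illustrative, not the server source; the constant names follow the commit message): level 3 now skips only the rollback of recovered DML transactions, while level 4, renamed SRV_FORCE_NO_DDL_UNDO, skips the rollback of both DML and DDL transactions.

```cpp
#include <iostream>

enum force_recovery { SRV_FORCE_NO_TRX_UNDO = 3, SRV_FORCE_NO_DDL_UNDO = 4 };

static bool skip_rollback(int level, bool trx_is_ddl)
{
  if (level >= SRV_FORCE_NO_DDL_UNDO) return true;        // skip DML and DDL rollback
  if (level >= SRV_FORCE_NO_TRX_UNDO) return !trx_is_ddl;  // DDL is still rolled back
  return false;                                            // normal rollback
}

int main()
{
  std::cout << skip_rollback(3, /*trx_is_ddl=*/true)  << '\n';  // 0: DDL rolled back at level 3
  std::cout << skip_rollback(3, /*trx_is_ddl=*/false) << '\n';  // 1: DML rollback skipped
  std::cout << skip_rollback(4, /*trx_is_ddl=*/true)  << '\n';  // 1: all rollback skipped
}
```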
-
sjaakola authored
Mutex order violation when a wsrep BF thread kills a conflicting trx. The stack is

    wsrep_thd_LOCK()
    wsrep_kill_victim()
    lock_rec_other_has_conflicting()
    lock_clust_rec_read_check_and_lock()
    row_search_mvcc()
    ha_innobase::index_read()
    ha_innobase::rnd_pos()
    handler::ha_rnd_pos()
    handler::rnd_pos_by_record()
    handler::ha_rnd_pos_by_record()
    Rows_log_event::find_row()
    Update_rows_log_event::do_exec_row()
    Rows_log_event::do_apply_event()
    Log_event::apply_event()
    wsrep_apply_events()

and the mutexes are taken in the order

    lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

    innobase_kill_query()
    kill_handlerton()
    plugin_foreach_with_mask()
    ha_kill_query()
    THD::awake()
    kill_one_thread()

and the mutexes are taken in the order

    victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking order violation exercised by BF aborting and KILL command execution. In this approach, the KILL command is replicated as a TOI operation. This guarantees total isolation for the KILL command execution in the first node: there is no concurrent replication applying and no concurrent DDL executing. Therefore there is also no risk of BF aborting happening in parallel with KILL command execution. Potential mutex deadlocks between the different mutex access paths of KILL command execution and BF aborting therefore cannot happen.

TOI replication is used in this approach purely as a means to provide isolated KILL command execution on the first node. The KILL command should not (and must not) be applied on secondary nodes. In this patch we ensure this by skipping KILL execution on secondary nodes in the applying phase, where we bail out if an applier thread is trying to execute a KILL command. This is effective, but skipping the application of the KILL command could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort, which uses wsrep_abort_transaction; this is fixed by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
-
Jan Lindström authored
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution" This reverts commit 29bbcac0.
-
sjaakola authored
Mutex order violation when a wsrep BF thread kills a conflicting trx. The stack is

    wsrep_thd_LOCK()
    wsrep_kill_victim()
    lock_rec_other_has_conflicting()
    lock_clust_rec_read_check_and_lock()
    row_search_mvcc()
    ha_innobase::index_read()
    ha_innobase::rnd_pos()
    handler::ha_rnd_pos()
    handler::rnd_pos_by_record()
    handler::ha_rnd_pos_by_record()
    Rows_log_event::find_row()
    Update_rows_log_event::do_exec_row()
    Rows_log_event::do_apply_event()
    Log_event::apply_event()
    wsrep_apply_events()

and the mutexes are taken in the order

    lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

    innobase_kill_query()
    kill_handlerton()
    plugin_foreach_with_mask()
    ha_kill_query()
    THD::awake()
    kill_one_thread()

and the mutexes are taken in the order

    victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking order violation exercised by BF aborting and KILL command execution. In this approach, the KILL command is replicated as a TOI operation. This guarantees total isolation for the KILL command execution in the first node: there is no concurrent replication applying and no concurrent DDL executing. Therefore there is also no risk of BF aborting happening in parallel with KILL command execution. Potential mutex deadlocks between the different mutex access paths of KILL command execution and BF aborting therefore cannot happen.

TOI replication is used in this approach purely as a means to provide isolated KILL command execution on the first node. The KILL command should not (and must not) be applied on secondary nodes. In this patch we ensure this by skipping KILL execution on secondary nodes in the applying phase, where we bail out if an applier thread is trying to execute a KILL command. This is effective, but skipping the application of the KILL command could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort, which uses wsrep_abort_transaction; this is fixed by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
-
Jan Lindström authored
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution" This reverts commit eac8341d.
-
sjaakola authored
Mutex order violation when a wsrep BF thread kills a conflicting trx. The stack is

    wsrep_thd_LOCK()
    wsrep_kill_victim()
    lock_rec_other_has_conflicting()
    lock_clust_rec_read_check_and_lock()
    row_search_mvcc()
    ha_innobase::index_read()
    ha_innobase::rnd_pos()
    handler::ha_rnd_pos()
    handler::rnd_pos_by_record()
    handler::ha_rnd_pos_by_record()
    Rows_log_event::find_row()
    Update_rows_log_event::do_exec_row()
    Rows_log_event::do_apply_event()
    Log_event::apply_event()
    wsrep_apply_events()

and the mutexes are taken in the order

    lock_sys->mutex -> victim_trx->mutex -> victim_thread->LOCK_thd_data

When a normal KILL statement is executed, the stack is

    innobase_kill_query()
    kill_handlerton()
    plugin_foreach_with_mask()
    ha_kill_query()
    THD::awake()
    kill_one_thread()

and the mutexes are taken in the order

    victim_thread->LOCK_thd_data -> lock_sys->mutex -> victim_trx->mutex

This patch is the plan D variant for fixing the potential mutex locking order violation exercised by BF aborting and KILL command execution. In this approach, the KILL command is replicated as a TOI operation. This guarantees total isolation for the KILL command execution in the first node: there is no concurrent replication applying and no concurrent DDL executing. Therefore there is also no risk of BF aborting happening in parallel with KILL command execution. Potential mutex deadlocks between the different mutex access paths of KILL command execution and BF aborting therefore cannot happen.

TOI replication is used in this approach purely as a means to provide isolated KILL command execution on the first node. The KILL command should not (and must not) be applied on secondary nodes. In this patch we ensure this by skipping KILL execution on secondary nodes in the applying phase, where we bail out if an applier thread is trying to execute a KILL command. This is effective, but skipping the application of the KILL command could also happen much earlier.

This also fixes unprotected calls to wsrep_thd_abort, which uses wsrep_abort_transaction; this is fixed by holding THD::LOCK_thd_data while we abort the transaction.

Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
-
Jan Lindström authored
Revert "MDEV-23328 Server hang due to Galera lock conflict resolution" This reverts commit 29bbcac0.
-