- 03 Mar, 2021 1 commit
-
-
Marko Mäkelä authored
In commit 8d16da14 (MDEV-24789) we accidentally introduced a race condition: while a waiting lock request is being removed, the request might be moved to another page due to a concurrent page split or merge. To prevent this, we must hold exclusive lock_sys.latch when releasing a record lock.

lock_release_autoinc_locks(): Avoid a potential hang. We must not wait for dict_table_t::lock_mutex while already holding lock_sys.wait_mutex or trx_t::mutex.

lock_cancel_waiting_and_release(): Correctly handle AUTO_INCREMENT locks.
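As an illustration of the latch-ordering rule above, here is a hedged sketch using std::mutex stand-ins for dict_table_t::lock_mutex, lock_sys.wait_mutex and trx_t::mutex; the type and function names are invented, and one way to honour the rule is simply to take the table mutex before the other two:

```cpp
#include <mutex>

struct table_stub    { std::mutex lock_mutex; };  // stand-in for dict_table_t
struct trx_stub      { std::mutex mutex; };       // stand-in for trx_t
struct lock_sys_stub { std::mutex wait_mutex; };  // stand-in for lock_sys

static lock_sys_stub lock_sys_sketch;

// Take the table mutex first, so we never block on it while
// wait_mutex or the transaction mutex is already held.
void release_autoinc_locks_sketch(table_stub &table, trx_stub &trx)
{
  std::scoped_lock table_latch(table.lock_mutex);
  std::scoped_lock rest(lock_sys_sketch.wait_mutex, trx.mutex);
  // ... release the AUTO_INCREMENT lock here ...
}
```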
-
- 02 Mar, 2021 6 commits
-
-
Marko Mäkelä authored
The PERFORMANCE_SCHEMA wrapper for mutex and rw-lock operations is causing a lot of unlikely code to be inlined in each invocation. The impact of this may be amplified in MariaDB 10.6, because InnoDB now uses the common implementation of mutexes and condition variables (MDEV-21452).

By default, we build with cmake -DPLUGIN_PERFSCHEMA enabled, but at runtime no instrumentation will be enabled. Similar to commit eba2d10a, we had better avoid inlining the rarely executed code, in order to reduce the code size and to improve the efficiency of the instruction cache.

This change was extensively tested by Axel Schwenke with and without --enable-performance-schema (with no individual instruments enabled). Removing the inline functions did not cause any performance regression in either case. There seemed to be a tiny improvement, possibly due to reduced code size and a better instruction cache hit rate.
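A minimal sketch of the out-of-lining idea, not the actual PERFORMANCE_SCHEMA wrapper: the common "instrumentation disabled" path stays inline and tiny, while the rarely executed instrumented path lives in a separate non-inlined function (psi_stub and both function names are invented):

```cpp
#include <mutex>

struct psi_stub { int dummy; };   // pretend per-mutex instrumentation state

__attribute__((noinline))         // GCC/Clang extension: keep this cold path out of line
static void lock_instrumented(std::mutex &m, psi_stub *)
{
  // ... record the wait event for the instrumentation here ...
  m.lock();
}

inline void lock_maybe_instrumented(std::mutex &m, psi_stub *psi)
{
  if (__builtin_expect(psi != nullptr, 0))  // rarely true at runtime
    lock_instrumented(m, psi);              // cold, not inlined into the caller
  else
    m.lock();                               // hot path stays small
}
```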
-
Marko Mäkelä authored
-
Marko Mäkelä authored
lock_release_try(): Implement innodb_evict_tables_on_commit_debug. Before releasing any locks, collect the identifiers of tables to be evicted. After releasing all locks, look up the tables and evict them if it is safe to do so.

trx_t::commit_tables(): Remove the eviction logic.

trx_t::commit_in_memory(): Invoke release_locks() only after commit_tables().
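A hypothetical sketch of the ordering described above, with invented names rather than the InnoDB API: candidate tables are remembered first, every lock is released, and only then are the tables evicted if that is still safe:

```cpp
#include <cstdint>
#include <vector>

using table_id_t = uint64_t;

static void release_lock(table_id_t) {}                        // stand-in
static bool table_can_be_evicted(table_id_t) { return true; }  // stand-in
static void evict_table(table_id_t) {}                         // stand-in

void release_locks_and_maybe_evict(const std::vector<table_id_t> &locked_tables,
                                   bool evict_tables_on_commit_debug)
{
  std::vector<table_id_t> to_evict;

  if (evict_tables_on_commit_debug)            // phase 1: remember the candidates
    to_evict = locked_tables;

  for (table_id_t id : locked_tables)          // phase 2: release all locks
    release_lock(id);

  for (table_id_t id : to_evict)               // phase 3: evict only if still safe
    if (table_can_be_evicted(id))
      evict_table(id);
}
```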
-
Marko Mäkelä authored
-
Marko Mäkelä authored
lock_sys_t::deadlock_check(): Assume that only lock_sys.wait_mutex is being held by the caller.

lock_sys_t::rd_lock_try(): New function.

lock_sys_t::cancel(trx_t*): Kill an active transaction that may be holding a lock.

lock_sys_t::cancel(trx_t*, lock_t*): Cancel a waiting lock request.

lock_trx_handle_wait(): Avoid acquiring mutexes in some cases, and never acquire lock_sys.latch in exclusive mode. This function is only invoked in a semi-consistent read (locking a clustered index record only if it matches the search condition). Normally, lock_wait() will take care of lock waits.

lock_wait(): Invoke the new function lock_sys_t::cancel() at the end, to avoid acquiring exclusive lock_sys.latch.

lock_rec_other_trx_holds_expl(): Use LockGuard instead of LockMutexGuard.

lock_release_autoinc_locks(): Explicitly acquire table->lock_mutex, in case only a shared lock_sys.latch is being held. Deadlock::report() will still hold exclusive lock_sys.latch while invoking lock_cancel_waiting_and_release().

lock_cancel_waiting_and_release(): Acquire trx->mutex in this function, instead of expecting the caller to do so.

lock_unlock_table_autoinc(): Only acquire shared lock_sys.latch.

lock_table_has_locks(): Do not acquire lock_sys.latch at all.

Deadlock::check_and_resolve(): Only acquire shared lock_sys.latch for invoking lock_sys_t::cancel(trx, wait_lock).

innobase_query_caching_table_check_low(), row_drop_tables_for_mysql_in_background(): Do not acquire lock_sys.latch.
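For the new lock_sys_t::rd_lock_try(), a minimal sketch assuming lock_sys.latch behaves like a shared/exclusive latch; std::shared_mutex stands in for the real latch type and the struct is invented:

```cpp
#include <shared_mutex>

struct lock_sys_sketch
{
  std::shared_mutex latch;   // stand-in for lock_sys.latch

  // Return true if the shared (read) latch was acquired without waiting.
  bool rd_lock_try() { return latch.try_lock_shared(); }
  void rd_unlock()   { latch.unlock_shared(); }
};
```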
-
Marko Mäkelä authored
The test case encryption.innodb_encrypt_freed was failing in MemorySanitizer builds.

recv_recover_page(): Mark non-recovered pages as freed.

fil_crypt_rotate_page(): Before comparing the block->frame contents, check if the block was marked as freed.

Other places: Whenever using BUF_GET_POSSIBLY_FREED, check block->page.status before accessing the page frame. (Both uses of BUF_GET_IF_IN_POOL should be correct now.)
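A hypothetical sketch of the "check the freed status before touching the frame" rule; the enum, struct and field names are stand-ins, not the buf0buf.h definitions:

```cpp
#include <cstdint>

enum class page_status { NORMAL, FREED };

struct block_stub
{
  page_status status;
  uint8_t     frame[16384];   // the page frame contents
};

uint32_t read_first_word(const block_stub *block)
{
  if (!block || block->status == page_status::FREED)
    return 0;                 // the page was freed; its frame must not be trusted

  // Only now is it safe to look at the frame contents.
  return uint32_t(block->frame[0]) << 24 | uint32_t(block->frame[1]) << 16 |
         uint32_t(block->frame[2]) << 8  | uint32_t(block->frame[3]);
}
```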
-
- 01 Mar, 2021 3 commits
-
-
Sergei Golubchik authored
Disable warnings, as they are different on 32-bit platforms. Closes #1757
-
Nayuta Yanagisawa authored
Add missing DBUG_RETURN to my_malloc.
-
Jan Lindström authored
Added a new wsrep_mode option, DISALLOW_LOCAL_GTID, for this. Nodes can have GTIDs for local transactions in the following scenarios:

* A DDL statement is executed with wsrep_OSU_method=RSU set.
* A DML statement writes to a non-InnoDB table.
* A DML statement writes to an InnoDB table with wsrep_on=OFF set.

If the user has set wsrep_mode=DISALLOW_LOCAL_GTID, these operations produce an error: ERROR HY000: Galera replication not supported
-
- 26 Feb, 2021 6 commits
-
-
Thirunarayanan Balathandayuthapani authored
- This is caused by commit deadec4e (MDEV-24569). InnoDB fails to set the tablespace associated with the mini-transaction while resetting the change buffer bitmap bits of the page.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
A performance regression was introduced by commit e71e6133 (MDEV-24671) and mostly addressed by commit 455514c8. The regression is likely caused by increased contention on lock_sys.latch (the former lock_sys.mutex), possibly indirectly caused by contention on lock_sys.wait_mutex. This change aims to reduce both, but further improvements will be needed.

lock_wait(): Minimize the lock_sys.wait_mutex hold time.

lock_sys_t::deadlock_check(): Add a parameter indicating whether lock_sys.latch is exclusively locked.

trx_t::was_chosen_as_deadlock_victim: Always use atomics.

lock_wait_wsrep(): Assume that no mutex is being held.

Deadlock::report(): Always kill the victim transaction.

lock_sys_t::timeout: New counter to back MONITOR_TIMEOUT.
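For "trx_t::was_chosen_as_deadlock_victim: Always use atomics", a minimal sketch of the idea with invented names; presumably the flag is set by the deadlock checker and polled by the waiting thread without both sides holding the same mutex:

```cpp
#include <atomic>

struct trx_sketch
{
  std::atomic<bool> was_chosen_as_deadlock_victim{false};
};

void mark_victim(trx_sketch &trx)            // e.g. from the deadlock checker
{
  trx.was_chosen_as_deadlock_victim.store(true, std::memory_order_release);
}

bool is_victim(const trx_sketch &trx)        // e.g. polled by the waiting thread
{
  return trx.was_chosen_as_deadlock_victim.load(std::memory_order_acquire);
}
```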
-
Sachin authored
MDEV-7409 On RBR, extend the PROCESSLIST info to include at least the name of the recently used table

When RBR is used, add the database name to the db field and the table name to the Status field of the "SHOW FULL PROCESSLIST" output for the SQL thread.
-
Daniel Black authored
-
Daniel Black authored
-
- 25 Feb, 2021 9 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Varun Gupta authored
-
Jan Lindström authored
Introduced two new wsrep_mode options:

* REPLICATE_MYISAM
* REPLICATE_ARIA

Deprecated the wsrep_replicate_myisam parameter; wsrep_mode = REPLICATE_MYISAM is used instead. This required a small refactoring of wsrep_check_mode_after_open_table so that both MyISAM and Aria are handled in the required DML cases. Similarly, added Aria to wsrep_should_replicate_ddl to handle DDL for Aria tables using TOI. Added test cases, improved MyISAM testing, and changed uses of wsrep_replicate_myisam to wsrep_mode = REPLICATE_MYISAM.
-
Daniel Black authored
-
Daniel Black authored
Correcting an incorrect merge from 10.2
-
Daniel Black authored
-
Daniel Black authored
-
Daniel Black authored
Backport of 4bc31a90: include the client libraries for the caching_sha2_password and sha256_password authentication plugins in the libmariadb3 client library package.
-
- 24 Feb, 2021 13 commits
-
-
Daniel Black authored
Like the 10.2 version 1635686b, except with C++ on internal functions for my_assume_aligned.

volatile != atomic. volatile has no memory barrier semantics; it is for memory-mapped I/O, so let us allow some optimizer gains and stop pretending it helps with memory atomicity.

The MDEV lists a SEGV; an assumption is made that an address was partially read. As C packs structs strictly in order and on arm64 the cache line size is 128 bits, a pointer (link, 64 bits) followed by a hashnr (uint32, 32 bits) leaves the following key (uchar *, 64 bits) not naturally aligned to a pointer boundary and, worse, split across a cache line, which is the processor's view of an atomic reservation of memory. lf_dynarray_lvalue is assumed to return a 64-bit aligned address.

As a solution, move the 32-bit hashnr to the end so that we do not get the *key pointer split across two cache lines.

Tested by: Krunal Bauskar
Reviewer: Marko Mäkelä
-
Daniel Black authored
volatile != atomic. volatile has no memory barrier semantics; it is for memory-mapped I/O, so let us allow some optimizer gains and stop pretending it helps with memory atomicity.

The MDEV lists a SEGV; an assumption is made that an address was partially read. As C packs structs strictly in order and on arm64 the cache line size is 128 bits, a pointer (link, 64 bits) followed by a hashnr (uint32, 32 bits) leaves the following key (uchar *, 64 bits) not naturally aligned to a pointer boundary and, worse, split across a cache line, which is the processor's view of an atomic reservation of memory. lf_dynarray_lvalue is assumed to return a 64-bit aligned address.

As a solution, move the 32-bit hashnr to the end so that we do not get the *key pointer split across two cache lines.

Tested by: Krunal Bauskar
Reviewer: Marko Mäkelä
-
Igor Babaev authored
This bug caused crashes of the server when processing queries with table value constructors (TVC) that contained subqueries and were themselves used as subselects. For such TVCs the following transformation is applied at the prepare stage: VALUES (v1), ... (vn) => SELECT * FROM (VALUES (v1), ... (vn)) tvc_x. This transformation allows reducing the problem of evaluating TVCs used as subselects to the problem of evaluating regular subselects. The transformation is implemented in wrap_tvc(). The code of the function is supposed to mimic the behaviour of the parser when processing the result of the transformation. However, this imitation was not free of flaws. First, the function called the method exclude(), which completely destroyed the select tree structures below the transformed TVC. Second, the function used the procedure mysql_new_select to create st_select_lex nodes for both the wrapping select of the transformation and the TVC. This also led to the construction of invalid select tree structures. The patch actually re-engineers the code of wrap_tvc().

Approved by Oleksandr Byelkin <sanja@mariadb.com>
-
Jan Lindström authored
The problem was that we used a heap-allocated key with a too-small array. Fixed by using dynamic memory allocation of the actual needed size.
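A generic sketch of the fix pattern as described, with invented names (the real change is in the Galera key-handling code): allocate a buffer of the exact required size instead of relying on a fixed-size array that may be too small:

```cpp
#include <cstddef>
#include <cstring>
#include <memory>

std::unique_ptr<unsigned char[]> copy_key(const unsigned char *key, std::size_t key_len)
{
  // Before (problematic): a fixed-size array that may be smaller than key_len.
  // After: a dynamically allocated buffer of exactly the needed size.
  auto buf = std::make_unique<unsigned char[]>(key_len);
  std::memcpy(buf.get(), key, key_len);
  return buf;
}
```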
-
Marko Mäkelä authored
rw_lock::upgrade_trylock(): If the compare-and-swap fails, only assert that we are still holding the U lock and that no conflicting lock exists. If the upgrade to X would fail due to some thread holding an S latch, we will terminate the loop.
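A hypothetical sketch of a U-to-X upgrade attempt via compare-and-swap, in the spirit of the description above; the bit layout and the class are invented for illustration only:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

class rw_lock_sketch
{
  static constexpr uint32_t UPDATER   = 1u << 30;  // U latch held
  static constexpr uint32_t EXCLUSIVE = 1u << 31;  // X latch held
  std::atomic<uint32_t> word{0};                   // low bits: S latch holders

public:
  // Try to upgrade U -> X; this can only succeed when no S latches are held.
  bool upgrade_trylock()
  {
    uint32_t expected = UPDATER;                   // U held, no S readers
    if (word.compare_exchange_strong(expected, EXCLUSIVE))
      return true;
    // On failure, only assert that we still hold U and no conflicting X exists;
    // some thread holding an S latch blocked the upgrade.
    assert((expected & UPDATER) && !(expected & EXCLUSIVE));
    return false;
  }
};
```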
-
Marko Mäkelä authored
trx_t::commit_in_memory(): Invoke mod_tables.clear(). trx_free_at_shutdown(): Invoke mod_tables.clear() for transactions that are discarded on shutdown. Everywhere else, assert mod_tables.empty() on freed transaction objects.
-
Marko Mäkelä authored
Let us calculate the hash table cell address while we are calculating the latch address, to avoid repeated computations of the address. The latch address can be derived from the cell address with a simple bitmask operation.
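A minimal sketch of the layout this implies, with invented sizes: latches and hash cells share one array, one latch per aligned group of cells, so the latch address is the cell address with the low bits masked off:

```cpp
#include <cstddef>
#include <cstdint>

struct slot { void *ptr; };                          // 8-byte slots

constexpr std::size_t CELLS_PER_LATCH = 7;           // illustrative
constexpr std::size_t SLOTS_PER_GROUP = CELLS_PER_LATCH + 1;         // + 1 latch slot
constexpr std::size_t GROUP_BYTES = SLOTS_PER_GROUP * sizeof(slot);  // 64 here

// Assumes the array is allocated so that each group of SLOTS_PER_GROUP slots
// starts on a GROUP_BYTES boundary, with the latch in the group's first slot.
inline slot *latch_for_cell(slot *cell)
{
  auto addr = reinterpret_cast<uintptr_t>(cell);
  return reinterpret_cast<slot *>(addr & ~uintptr_t(GROUP_BYTES - 1));
}
```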
-
Jan Lindström authored
Null pointer dereference in the case where bf_thd has no trx, e.g. when we have an MDL conflict.
-
Jan Lindström authored
Null pointer dereference in the case where bf_thd has no trx, e.g. when we have an MDL conflict.
-
Sergei Petrunia authored
The problem was in and_all_keys(), the code of MDEV-9759 which calculates the new tree weight:

First, it did not take into account the case when (next->next_key_part=tmp) == NULL and dereferenced a NULL pointer when getting tmp->weight.

Second, "if (param->alloced_sel_args > SEL_ARG::MAX_SEL_ARGS) break" could leave the loop with an incorrect value of weight.

Fixed by introducing SEL_ARG::update_weight_locally() and calling it at the end of the function. This avoids having to handle all the above cases.
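A hypothetical sketch of the two fixes using a toy tree node instead of SEL_ARG: guard against a null next_key_part before reading its weight, and recompute the weight once at the end rather than adjusting it incrementally inside the loop:

```cpp
#include <cstdint>

struct node
{
  node    *next;            // next node on this level
  node    *next_key_part;   // subtree for the next key part; may be null
  uint32_t weight;          // weight of this node's subtree
};

// Recompute the level's weight from scratch (the role played by
// SEL_ARG::update_weight_locally() in the fix), in a null-safe way.
uint32_t update_weight_locally(node *first)
{
  uint32_t total = 0;
  for (node *n = first; n; n = n->next)
    total += 1 + (n->next_key_part ? n->next_key_part->weight : 0);
  return total;
}
```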
-
Daniel Black authored
filename_hash is a function in the system's libiberty.a, but also an exported name in the perf schema static library. We'll use a different name.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 23 Feb, 2021 2 commits
-
-
Vicențiu Ciorbaru authored
-
Vicențiu Ciorbaru authored
-