- 21 Feb, 2023 8 commits
-
-
Michael Widenius authored
-
Monty authored
The old code in prev_record_reads() gave wrong estimates when a join buffer was used or when the table depended on more than one other table. When a join cache is used, it re-orders the row combinations, which causes more engine calls for tables that depend on tables placed before the join-cached one. The new prev_record_reads() code provides more exact estimates and should never give a too-low estimate, assuming the data passed to the function is correct.

The definition of prev_record_reads() is also updated. The new definition is: "Estimate the number of engine ha_index_read calls for EQ_REF tables when taking into account the one-row cache in join_read_always_key()".

The cost of using the prev_record_reads() value is changed. The value is still used, as before, to calculate the cost of the storage engine calls. However, the WHERE cost now takes into account the total number of row combinations, as the WHERE clause has to be checked even when the one-row cache is used. This makes the cost slightly higher than before (for the same prev_record_reads() value).

Other things:
- Cached the return value of prev_record_reads() in best_access_path() to avoid some function calls.
- Fixed a bug where position[].use_join_buffer was set in best_access_path() even when the join buffer was not used. This confused the semi-join optimizer into trying to re-optimize plans that did not need to be re-optimized. The effect of the fix is that we avoid some re-optimizations with semi-joins when the join buffer is not used. In these cases the value shown in the 'Filtering' column of EXPLAIN EXTENDED may change.
- Added 'prev_record.cc', which was used to verify the logic in prev_record_reads().

Changes in the test suite:
- EQ_REF tables are moved up to an earlier position. This is because of either the higher WHERE cost when EQ_REF is used with more row combinations, or the changed cost when using the join cache.
- 'Filtered' has changed (for the better) in some cases using semi-joins: subselect_sj.test, subselect_sj_jcl6.test
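A minimal, self-contained sketch (not the server code) of the effect being estimated: with the one-row cache in join_read_always_key(), an EQ_REF table issues a new engine read only when the lookup key differs from the previous one, so a join-cache re-ordering of row combinations breaks the runs of equal keys and increases the number of engine calls.

```cpp
#include <cstdio>
#include <vector>

// Count engine reads for a stream of EQ_REF lookup keys, given a cache
// that holds the row for the most recent key (keys assumed >= 0).
static size_t engine_calls(const std::vector<int> &keys)
{
  size_t calls= 0;
  int cached_key= -1;
  bool have_cached= false;
  for (int k : keys)
  {
    if (!have_cached || k != cached_key)
    {
      calls++;               // ha_index_read-style call hits the engine
      cached_key= k;
      have_cached= true;
    }                        // else: served from the one-row cache
  }
  return calls;
}

int main()
{
  std::vector<int> ordered=   {1,1,1,2,2,2,3,3,3}; // no join cache
  std::vector<int> reordered= {1,2,3,1,2,3,1,2,3}; // after join-cache re-order
  printf("%zu vs %zu engine calls\n",
         engine_calls(ordered), engine_calls(reordered)); // 3 vs 9
}
```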
-
Monty authored
Author: Sergei Petrunia <sergey@mariadb.com>
-
Monty authored
- Use log2() instead of log()
- Added the missing '+' when calculating the rowid setup cost
- Adjusted ROWID_FILTER_PER_ELEMENT_MODIFIER (from 3 to 1)

Other things:
- Adjusted the cost for index_merge where rows_out < 1.0

The effects of the changes:
- The rowid filter has a higher setup cost.
- The rowid filter has a slightly lower cost per row.

This can be seen in mtr, where some tests with small tables, or that use rowid filters with many rows, no longer use the rowid filter.
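A hypothetical sketch of the shape of these cost terms, not the server's actual formula: container lookups scale as log2(n), the per-element setup work is weighted by ROWID_FILTER_PER_ELEMENT_MODIFIER (now 1 instead of 3), and the setup cost has to be added ('+'), not dropped.

```cpp
#include <cmath>

static constexpr double ROWID_FILTER_PER_ELEMENT_MODIFIER= 1.0; // was 3.0

// Illustrative cost shape: building the filter costs work per element,
// probing it costs roughly log2(elements) per checked row.
static double rowid_filter_cost(double elements, double rows_checked,
                                double per_element_cost)
{
  double setup_cost= elements * ROWID_FILTER_PER_ELEMENT_MODIFIER *
                     per_element_cost;
  double lookup_cost= rows_checked * std::log2(elements < 2 ? 2 : elements);
  return setup_cost + lookup_cost; // the missing '+' had dropped setup_cost
}
```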
-
Sergei Petrunia authored
Part #2: fix the case where table->stat_records()=1 (due to EITS statistics), but the range returns rows=0.
-
Sergei Petrunia authored
Amended patch from Monty: The issue was that Loose_scan_opt::save_to_position() did not take into account records_out from best_access_path().

Make sure that the POSITION object filled by Loose_scan_opt::save_to_position() has a records_out that is not higher than that of any other possible access method.
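An illustrative shape of the fix (structures simplified, not the actual server types): the POSITION filled for the loose scan must not claim more output rows than the best access method found so far.

```cpp
#include <algorithm>

struct POSITION_sketch { double records_out; };

// Cap the loose-scan estimate with the best estimate from any other
// access method, so the loose scan never reports too many output rows.
static void save_to_position(POSITION_sketch *pos, double loose_scan_records,
                             double best_records_out)
{
  pos->records_out= std::min(loose_scan_records, best_records_out);
}
```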
-
Marko Mäkelä authored
There is a little-used option, innodb_defragment, that would make OPTIMIZE TABLE not rebuild the table as usual for InnoDB, but instead cause the index B-trees to be optimized in place. This option uses excessive locking (exclusively locking index trees). It never covered SPATIAL INDEX or FULLTEXT INDEX. Storage space was never reclaimed. Because this option is not particularly useful and causes a maintenance burden (most recently in commit de4030e4), it is best to deprecate it, to prepare for its removal.
-
Marko Mäkelä authored
-
- 20 Feb, 2023 1 commit
-
-
Sergei Golubchik authored
-
- 17 Feb, 2023 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
Recovery fails to release log_sys.latch for a non-last batch before preallocating the block in recv_read_in_area().
-
- 16 Feb, 2023 16 commits
-
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This is a partial revert of commit 8b6a308e (MDEV-29883) and a follow-up to the merge commit 394fc71f (MDEV-24569).

The latching order related to any operation that accesses the allocation metadata of an InnoDB index tree is as follows:
1. Acquire dict_index_t::lock in non-shared mode.
2. Acquire the index root page latch in non-shared mode.
3. Possibly acquire further index page latches. Unless an exclusive dict_index_t::lock is held, this must follow the root-to-leaf, left-to-right order.
4. Acquire a *non-shared* fil_space_t::latch.
5. Acquire latches on the allocation metadata pages.
6. Possibly allocate and write some pages, or free some pages.

btr_get_size_and_reserved(), dict_stats_update_transient_for_index(), dict_stats_analyze_index(): Acquire an exclusive fil_space_t::latch in order to avoid a deadlock in fseg_n_reserved_pages() in case of concurrent access to multiple indexes sharing the same "inode page".

fseg_page_is_allocated(): Acquire an exclusive fil_space_t::latch in order to avoid deadlocks. All callers are holding latches on a buffer pool page, or an index, or both. Before commit edbde4a1 (MDEV-24167) a third mode was available that would not conflict with the shared fil_space_t::latch acquired by ha_innobase::info_low(), i_s_sys_tablespaces_fill_table(), or i_s_tablespaces_encryption_fill_table(). Because those calls should be rather rare, it makes sense to use the simple rw_lock with only shared and exclusive modes.

fil_crypt_get_page_throttle(): Avoid invoking fseg_page_is_allocated() on an allocation bitmap page (which can never be freed), to avoid acquiring a shared latch on top of an exclusive one.

mtr_t::s_lock_space(), MTR_MEMO_SPACE_S_LOCK: Remove.
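A generic sketch of this latching order, with std::shared_mutex as a stand-in for the server's rw-locks: latches are taken strictly top-down, and fil_space_t::latch (step 4) is taken exclusively so that two threads working on indexes that share an inode page serialize instead of deadlocking on mixed shared/exclusive acquisition.

```cpp
#include <mutex>
#include <shared_mutex>

struct IndexSketch { std::shared_mutex lock, root_page_latch; };
struct SpaceSketch { std::shared_mutex latch; };

// Acquire latches in the fixed order 1, 2, (3), 4, then touch the
// allocation metadata (5, 6). RAII guards release in reverse order.
static void access_allocation_metadata(IndexSketch &index, SpaceSketch &space)
{
  std::unique_lock<std::shared_mutex> index_lock(index.lock);            // 1
  std::unique_lock<std::shared_mutex> root_latch(index.root_page_latch); // 2
  // 3: further index page latches would follow root-to-leaf, left-to-right
  std::unique_lock<std::shared_mutex> space_latch(space.latch);          // 4
  // 5, 6: latch allocation metadata pages, then allocate or free pages
}
```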
-
Marko Mäkelä authored
buf_pool_t::watch_set(): Always buffer-fix a block if one was found, no matter if it is a watch sentinel or a buffer page. The type of the block descriptor will be rechecked in buf_page_t::watch_unset(). Do not expect the caller to acquire the page hash latch. Starting with commit bd5a6403 it is safe to release buf_pool.mutex before acquiring a buf_pool.page_hash latch.

buf_page_get_low(): Adjust to the changed buf_pool_t::watch_set(). This simplifies the logic and fixes a bug that was reproduced when using debug builds and the setting innodb_change_buffering_debug=1.
-
Marko Mäkelä authored
buf_read_page_low(): Map the buf_page_t::read_complete() return value DB_FAIL to DB_PAGE_CORRUPTED. The purpose of the DB_FAIL return value is to avoid error log noise when read-ahead brings in an unused page that is typically filled with NUL bytes. If a synchronous read is bringing in a corrupted page where the page frame does not contain the expected tablespace identifier and page number, that must be treated as an attempt to read a corrupted page. The correct error code for this is DB_PAGE_CORRUPTED. The error code DB_FAIL is not handled by row_mysql_handle_errors(). This was missed in commit 0b47c126 (MDEV-13542).
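A sketch of the mapping described above (enum and signature simplified, not the server's): a synchronous read that fails the tablespace identifier / page number check must surface as corruption, because DB_FAIL is not handled by row_mysql_handle_errors(); read-ahead keeps DB_FAIL to avoid log noise.

```cpp
enum dberr_t { DB_SUCCESS, DB_FAIL, DB_PAGE_CORRUPTED };

// Map the read-completion result: only a synchronous read escalates
// DB_FAIL to DB_PAGE_CORRUPTED; read-ahead stays quiet about unused pages.
static dberr_t map_read_error(dberr_t err, bool sync_read)
{
  if (err == DB_FAIL && sync_read)
    return DB_PAGE_CORRUPTED;
  return err;
}
```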
-
Marko Mäkelä authored
-
- 15 Feb, 2023 11 commits
-
-
Julius Goryavsky authored
Post-fix to MDEV-30318 and the MDEV-22570-related changes: unified the handling of wsrep_provider so that "none" is interpreted case-insensitively everywhere and an empty string is supported everywhere.
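A sketch of the unified check (function name illustrative, not the actual server code): "none" must be matched case-insensitively, and an empty or unset value must also count as "no provider".

```cpp
#include <strings.h>  // strcasecmp (POSIX)

static bool wsrep_provider_is_none(const char *value)
{
  return value == nullptr || *value == '\0' ||
         strcasecmp(value, "none") == 0;
}
```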
-
Marko Mäkelä authored
This almost completely reverts commit acd23da4 and retains a safe optimization:

recv_sys_t::parse(): Remove any old redo log records for the truncated tablespace, to free up memory earlier. If recovery consists of multiple batches, then recv_sys_t::apply() must invoke recv_sys_t::trim() again to avoid wrongly applying old log records to an already truncated undo tablespace.
-
Vicențiu Ciorbaru authored
WITH TIES would not take effect if SELECT DISTINCT was used in a context where an INDEX is used to resolve the ORDER BY clause. WITH TIES relies on `JOIN::order` to contain the non-constant fields needed to test the equality of the ORDER BY fields. The cause of the problem was a premature removal of the `JOIN::order` member during a DISTINCT optimization. This led the WITH TIES code to assume that ORDER BY contained only "constant" elements.

Disable this optimization when WITH TIES is in effect. (Side note: the ORDER BY removal does not impact any current tests, thus it will be removed in a future version.)

Reviewed by: monty@mariadb.org
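A hypothetical shape of the fix (member names illustrative, not the actual JOIN fields): the DISTINCT optimization that drops the ordering must be skipped when WITH TIES is in effect, because WITH TIES still needs the ORDER BY fields to detect ties.

```cpp
struct ORDER_sketch { /* one ORDER BY element */ };

struct JOIN_sketch
{
  ORDER_sketch *order;  // stand-in for JOIN::order
  bool with_ties;

  void optimize_distinct_order()
  {
    if (!with_ties)   // keep the order list alive for WITH TIES
      order= nullptr; // dropping it is safe only without WITH TIES
  }
};
```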
-
Vicențiu Ciorbaru authored
-
Vicențiu Ciorbaru authored
-
Monty authored
(fp->field_length is always >= 0)
-
Monty authored
This allows one to always use --skip-create-table for repeated runs.
-
Monty authored
There was a bug in JOIN::make_notnull_conds_for_range_scans() when clearing TABLE->tmp_set, which is used to mark fields that cannot be null. This function is only used if 'not_null_range_scan=on' is set. The effect was that tmp_set contained a 'random value', which caused the optimizer to think that some fields could not be null. FLUSH TABLES clears tmp_set, which is why things temporarily worked.

Fixed by clearing tmp_set properly.
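A minimal sketch of the failure mode (a generic bitmap instead of the server's MY_BITMAP): the not-null analysis must start from a cleared bitmap; reusing whatever a previous statement left in tmp_set makes arbitrary fields look NOT NULL.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Mark the fields proven not-null for this statement, starting from a
// clean slate instead of stale bits left by an earlier statement.
static void mark_not_null_fields(std::vector<bool> &tmp_set,
                                 const std::vector<std::size_t> &fields)
{
  std::fill(tmp_set.begin(), tmp_set.end(), false); // the missing clear
  for (std::size_t field_index : fields)
    tmp_set[field_index]= true;
}
```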
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
JOIN_CACHE::alloc_buffer() used the wrong logic when calculating the size of all join buffers, and then computed the ratio by which JOIN::shrink_join_buffers() should shrink the buffers. shrink_join_buffers() ended up in a situation where the buffers would not fit into the total quota after shrinking, which resulted in negative buffer sizes. Due to the use of unsigned integers, this caused very large buffers to be used instead.

Make JOIN_CACHE::alloc_buffer() use the same logic as JOIN::shrink_join_buffers() when it calculates the total size of all join buffers so far. Also, add a safety check in JOIN::shrink_join_buffers().

This patch doesn't include a testcase, because the original test dataset is too big and fragile. We have dbt3_s001.inc, but I wasn't able to demonstrate the issue with it.
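A self-contained illustration of the failure mode (not the server code): an unsigned subtraction that "goes negative" wraps around, so a buffer size computed this way becomes enormous instead of small; clamping before subtracting is the safety check's job.

```cpp
#include <cstdio>

int main()
{
  unsigned long long quota= 1024, needed= 4096;
  unsigned long long wrong= quota - needed;  // wraps to a huge value
  unsigned long long safe= needed > quota ? 0 : quota - needed; // clamp first
  printf("wrong=%llu safe=%llu\n", wrong, safe);
  return 0;
}
```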
-
- 14 Feb, 2023 3 commits
-
-
Monty authored
-
Andrew Hutchings authored
Fixes for very minor issues found by Coverity in the S3 engine.
-
Marko Mäkelä authored
-