- 17 May, 2020 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
Problem:
========
During an ALTER TABLE rebuild, the documents read from the old table are tokenized in parallel by innodb_ft_sort_pll_degree threads and stored in their respective merge files. During the parallel merge, InnoDB wrongly skipped the root-level selection of the merge buffer records, so merge records could be inserted in non-ascending order.
Solution:
=========
Build the selection tree for the root level as well, so that the root of the selection tree always yields the records in sorted order.
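The fix restores the usual selection-tree (tournament) merge invariant: selection is performed at every level, including the root, so the root always yields the globally smallest record and the output stays ascending. A minimal, self-contained illustration of that invariant, using a priority queue rather than InnoDB's actual row0merge code:

    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    int main()
    {
      // Three sorted runs, standing in for the per-thread merge files.
      std::vector<std::vector<int>> runs= {{1, 4, 9}, {2, 3, 8}, {5, 6, 7}};
      using Entry= std::pair<int, size_t>;          // (record, run index)
      std::priority_queue<Entry, std::vector<Entry>,
                          std::greater<Entry>> tree;
      std::vector<size_t> pos(runs.size(), 0);

      for (size_t i= 0; i < runs.size(); i++)       // seed with each run's head
        tree.push({runs[i][0], i});

      while (!tree.empty())
      {
        Entry e= tree.top();                        // root: the global minimum
        tree.pop();
        std::printf("%d ", e.first);                // prints 1 2 3 4 5 6 7 8 9
        size_t run= e.second;
        if (++pos[run] < runs[run].size())
          tree.push({runs[run][pos[run]], run});    // refill from the same run
      }
      std::printf("\n");
    }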
-
- 16 May, 2020 4 commits
-
-
Alexander Barkov authored
-
Alexander Barkov authored
Also adding 10.2-related changes for MDEV-22579
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 15 May, 2020 15 commits
-
-
Marko Mäkelä authored
In commit b1742a5c we forgot FLUSH TABLES, potentially causing errors for MyISAM system tables.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The rw_lock_stats were incorrectly updated. While global statistics have limited usefulness, we cannot remove them from a GA version. This contribution slightly improves performance in write workloads.
-
Alexander Barkov authored
The code erroneously allowed both:
INSERT INTO t1 (vcol) VALUES (DEFAULT);
INSERT INTO t1 (vcol) VALUES (DEFAULT(non_virtual_column));
The former is OK, but the latter is not. Adding a new virtual method in Item:
virtual bool vcol_assignment_allowed_value() const { return false; }
Item_null, Item_param and Item_default_value override it. Item_default_value overrides it so as to:
- allow DEFAULT
- disallow DEFAULT(col)
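A condensed sketch of the hook described above; everything except the vcol_assignment_allowed_value() signature is simplified, and Item_param behaves like Item_null here:

    struct Item
    {
      virtual ~Item() {}
      // By default an expression may not be assigned to a virtual column.
      virtual bool vcol_assignment_allowed_value() const { return false; }
    };

    struct Item_null : Item            // likewise Item_param
    {
      bool vcol_assignment_allowed_value() const override { return true; }
    };

    struct Item_default_value : Item
    {
      Item *arg;                       // nullptr for DEFAULT, set for DEFAULT(col)
      explicit Item_default_value(Item *a) : arg(a) {}
      bool vcol_assignment_allowed_value() const override
      {
        return arg == nullptr;         // allow DEFAULT, disallow DEFAULT(col)
      }
    };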
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
If the InnoDB buffer pool contains many pages for a table or index that is being dropped or rebuilt, and if many of such pages are pointed to by the adaptive hash index, dropping the adaptive hash index may consume a lot of time. The time-consuming operation of dropping the adaptive hash index entries is being executed while the InnoDB data dictionary cache dict_sys is exclusively locked.

It is not actually necessary to drop all adaptive hash index entries at the time a table or index is being dropped or rebuilt. We can let the LRU replacement policy of the buffer pool take care of this gradually. For this to work, we must detach the dict_table_t and dict_index_t objects from the main dict_sys cache, and once the last adaptive hash index entry for the detached table is removed (when the garbage page is evicted from the buffer pool) we can free the dict_table_t and dict_index_t object (a sketch of this reference-counting idea follows the function notes below).

Related to this, in MDEV-16283, we made ALTER TABLE...DISCARD TABLESPACE skip both the buffer pool eviction and the drop of the adaptive hash index. We shifted the burden to ALTER TABLE...IMPORT TABLESPACE or DROP TABLE. We can remove the eviction from DROP TABLE. We must retain the eviction in the ALTER TABLE...IMPORT TABLESPACE code path, so that in case the discarded table is being re-imported with the same tablespace identifier, the fresh data from the imported tablespace will replace any stale pages in the buffer pool.

rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can no longer be interrupted inside InnoDB.

fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(), fseg_free_page_low(), fseg_free_extent(): Remove the parameter that specifies whether the adaptive hash index should be dropped.

btr_search_lazy_free(): Lazily free an index when the last reference to it is dropped from the adaptive hash index.

buf_pool_clear_hash_index(): Declare static, and move to the same compilation unit with the bulk of the adaptive hash index code.

dict_index_t::clone(), dict_index_t::clone_if_needed(): Clone an index that is being rebuilt while adaptive hash index entries exist. The original index will be inserted into dict_table_t::freed_indexes and dict_index_t::set_freed() will be called.

dict_index_t::set_freed(), dict_index_t::freed(): Note that or check whether the index has been freed. We will use the impossible page number 1 to denote this condition.

dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().

dict_index_t::detach_columns(): Move the assignment n_fields=0 to ha_innobase_inplace_ctx::clear_added_indexes(). We must have access to the columns when freeing the adaptive hash index. Note: dict_table_t::v_cols[] will remain valid. If virtual columns are dropped or added, the table definition will be reloaded in ha_innobase::commit_inplace_alter_table().

buf_page_mtr_lock(): Drop a stale adaptive hash index if needed.

We will also reduce the number of btr_get_search_latch() calls and enclose some more code inside #ifdef BTR_CUR_HASH_ADAPT in order to benefit cmake -DWITH_INNODB_AHI=OFF.
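As promised above, a minimal sketch of the lazy-free bookkeeping with illustrative names; the real code hangs the freed index off dict_table_t::freed_indexes and serializes everything under the AHI and dict_sys latches, none of which is shown here:

    #include <atomic>

    struct index_t
    {
      // One reference held by the dictionary cache, plus one per
      // buffer-pool page that still carries AHI entries for this index.
      std::atomic<size_t> refs{1};
    };

    static void release(index_t *index)
    {
      if (index->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
        delete index;                 // last reference gone: free the object
    }

    // DROP TABLE or rebuild: detach from the cache without scanning the
    // buffer pool; any remaining AHI pages keep the object alive.
    void lazy_free(index_t *index) { release(index); }

    // LRU eviction of a stale page: its AHI entries are gone, so drop the
    // page's reference; the last such call frees the detached index.
    void page_detached(index_t *index) { release(index); }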
-
Marko Mäkelä authored
When neither MSAN nor Valgrind is enabled, declare Field::mark_unused_memory_as_defined() as an empty inline function instead of declaring it as a virtual function.
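A sketch of the pattern; the guard macro name here is hypothetical:

    class Field
    {
    public:
    #ifdef HAVE_MEM_CHECK            // MSAN or Valgrind instrumentation on
      virtual void mark_unused_memory_as_defined();
    #else
      // Empty non-virtual inline: calls compile away entirely, and no
      // vtable dispatch is paid for an operation that only matters under
      // MSAN/Valgrind.
      void mark_unused_memory_as_defined() {}
    #endif
    };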
-
Eugene Kosov authored
-
Monty authored
-
Monty authored
This allows us to remove our own declarations of functions and structures that are declared in the history.h file.
-
Monty authored
MDEV-22073 MSAN use-of-uninitialized-value in collect_statistics_for_table()
Other things: innodb.analyze_table was changed to mainly test statistics collection. This was discussed with Marko.
-
Varun Gupta authored
The issue here was that when the schema was changed, the value of THD::server_status is ORed with SERVER_SESSION_STATE_CHANGED. For custom aggregate functions we currently check whether server_status is equal to SERVER_STATUS_LAST_ROW_SENT; if so, we terminate the execution of the custom aggregate function, as there are no more rows to fetch. The check should instead be whether the SERVER_STATUS_LAST_ROW_SENT bit is set in the server status.
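The gist of the fix, with the status-flag values as defined in mysql_com.h:

    #include <cstdint>

    static const uint32_t SERVER_STATUS_LAST_ROW_SENT= 128;
    static const uint32_t SERVER_SESSION_STATE_CHANGED= 1U << 14;

    // Wrong: equality fails once another flag, e.g. after a schema
    // change, has been ORed into server_status.
    bool last_row_sent_buggy(uint32_t server_status)
    { return server_status == SERVER_STATUS_LAST_ROW_SENT; }

    // Right: test the individual bit, ignoring any other flags.
    bool last_row_sent_fixed(uint32_t server_status)
    { return (server_status & SERVER_STATUS_LAST_ROW_SENT) != 0; }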
-
Monty authored
Other things:
- Removed innodb_encryption_tables.test from Valgrind runs, as it takes a REALLY long time
-
Alexander Barkov authored
fix_fields_for_tvc() could call fix_fields() for Items that had already been fixed. Changing fix_fields() to fix_fields_if_needed().
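A sketch of the idempotent wrapper, simplified from the Item class:

    struct THD;

    struct Item
    {
      bool fixed= false;
      virtual bool fix_fields(THD *thd, Item **ref)= 0;
      // Safe to call repeatedly: resolve only on the first invocation.
      bool fix_fields_if_needed(THD *thd, Item **ref)
      {
        return fixed ? false : fix_fields(thd, ref);
      }
      virtual ~Item() {}
    };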
-
- 14 May, 2020 15 commits
-
-
Varun Gupta authored
For the case when the optimizer does the IN-EXISTS transformation, the equality condition is injected into the WHERE or HAVING clause of the subquery. If the select list of the subquery has a reference to the parent select, make sure to use the reference and not the original item.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
On a checksum failure of a ROW_FORMAT=COMPRESSED page, buf_LRU_free_one_page() would invoke buf_LRU_block_remove_hashed(), which reads the uncompressed page frame even though it has not been initialized. With bad enough luck, fil_page_get_type(page) could return an unrecognized value and cause the server to abort.
buf_page_io_complete(): On the corruption of a ROW_FORMAT=COMPRESSED page, zerofill the uncompressed page frame.
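A sketch of the zero-fill with simplified names; the real change lives in buf_page_io_complete():

    #include <cstring>

    static const size_t SRV_PAGE_SIZE= 16384;   // innodb_page_size default

    void on_compressed_page_corruption(unsigned char *uncompressed_frame)
    {
      // Decompression never ran, so the frame holds whatever garbage was
      // in the buffer. Zero it so later readers such as
      // fil_page_get_type() see initialized, recognizable bytes.
      std::memset(uncompressed_frame, 0, SRV_PAGE_SIZE);
    }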
-
Marko Mäkelä authored
This essentially reverts commit b393e2cb. The leak might have been fixed, but because the DEBUG_SYNC instrumentation for InnoDB purge threads was reverted in 10.5 commit 5e62b6a5 as part of introducing a thread pool, it is easiest to revert the entire change.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
TRUNCATE(decimal_5_5) erroneously tried to create a DECIMAL(0,0) column. Creating a DECIMAL(1,0) column instead.
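A sketch of the corrected derivation as an illustrative helper, not the actual Item code:

    #include <algorithm>

    // TRUNCATE(DECIMAL(prec,scale) value, 0): the integer part of
    // DECIMAL(5,5) has 5 - 5 = 0 digits, but a column needs precision >= 1.
    unsigned truncate_result_precision(unsigned prec, unsigned scale)
    {
      return std::max(prec - scale, 1u);   // DECIMAL(1,0), not DECIMAL(0,0)
    }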
-
Krunal Bauskar authored
There are multiple inconsistencies and incorrect calculations in the rw-lock stats (see the sketch after this list):
- Shared rw-lock stats: the "rounds" counter is incremented only once for the N rounds done in a spin-cycle.
- All rw-lock stats: if the spin-cycle is short-circuited, attempts are re-counted. [If a spin-cycle is interrupted before it completes srv_n_spin_wait_rounds (default 30) rounds, spin_count is incremented to account for this. If the thread resumes the spin-cycle (because the lock is still unavailable) and is again interrupted, or completes, spin_count is incremented again with the total count, failing to adjust for the earlier increment.]
- S/X rw-lock stats: the spin_loop counter is not incremented at all; instead it is reported as 0 (in the SHOW ENGINE output) and the division for spin-rounds per spin-loop is adjusted accordingly. Per the original semantics, the spin_loop counter should be incremented once per spin-loop execution.
- SX rw-lock stats: SX locks increment the spin_loop counter, but instead of incrementing it once per spin-loop invocation, they increment it multiple times, based on how many times the spin-loop flow is repeated for the same instance after an OS wait.
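A sketch of the intended accounting with illustrative names, as described above: spin_loop is incremented exactly once per invocation, and only the rounds from this invocation are added, even when the loop is cut short:

    #include <atomic>

    static const unsigned srv_n_spin_wait_rounds= 30;

    struct rw_lock_stats_t
    {
      std::atomic<unsigned long> spin_loop_count{0};  // one per spin-loop
      std::atomic<unsigned long> spin_round_count{0}; // rounds actually spun
    };

    // available() stands in for the real lock-word check.
    void spin_wait(rw_lock_stats_t &stats, bool (*available)())
    {
      stats.spin_loop_count.fetch_add(1, std::memory_order_relaxed);
      unsigned rounds= 0;
      while (rounds < srv_n_spin_wait_rounds && !available())
        ++rounds;                         // PAUSE instruction omitted here
      stats.spin_round_count.fetch_add(rounds, std::memory_order_relaxed);
    }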
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
MDEV-22503 MDB limits DECIMAL column precision to 16 doing CTAS with floor/ceil over DECIMAL(X,Y) where X > 16
The DECIMAL data type branch in Item_func_int_val::fix_length_and_dec() incorrectly used DOUBLE-style length calculation, which resulted in a smaller data type than the actual result of FLOOR()/CEIL() needs.
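A sketch of the corrected DECIMAL branch as an illustrative helper: FLOOR()/CEIL() over DECIMAL(X,Y) yields an integer DECIMAL that needs all X-Y integer digits plus one digit for a possible carry:

    // e.g. CEIL of DECIMAL(20,2) can reach 10^18, needing DECIMAL(19,0).
    unsigned int_val_result_precision(unsigned prec, unsigned scale)
    {
      return (prec - scale) + 1;
    }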
-
- 13 May, 2020 3 commits
-
-
Monty authored
MDEV-19622 Assertion failures in ha_partition::set_auto_increment_if_higher upon UPDATE on Aria table
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 12 May, 2020 2 commits
-
-
Vlad Lesin authored
Flush the LSN to the system tablespace on InnoDB shutdown if XA transactions were rolled back by mariabackup.
-
Marko Mäkelä authored
innobase_get_charset(), innobase_get_stmt_safe(): Remove. It is more efficient and readable to invoke thd_charset() and thd_query_safe() directly, without a non-inlined wrapper function.
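A sketch of a call site after the change; thd_charset() and thd_query_safe() are existing server plugin-service functions, with declarations abbreviated here:

    #include <cstddef>

    typedef void *MYSQL_THD;            // opaque handle, as in the plugin API
    struct charset_info_st;
    extern "C" const charset_info_st *thd_charset(MYSQL_THD thd);
    extern "C" size_t thd_query_safe(MYSQL_THD thd, char *buf, size_t buflen);

    void log_current_query(MYSQL_THD thd)
    {
      const charset_info_st *cs= thd_charset(thd); // was innobase_get_charset()
      char query[1024];
      size_t len= thd_query_safe(thd, query, sizeof query);
      (void) cs; (void) len;                       // sketch only
    }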
-