- 18 May, 2020 2 commits
-
-
Jan Lindström authored
Enable tests with additional galera output to find out the actual reason for the test failures.
-
Julius Goryavsky authored
The problem is caused by the operation of the netcat streamer and does not appear on systems where socat is installed. We need to add the "-N" option for netcat, so that it calls shutdown() on the socket when receiving EOF from STDIN.
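A minimal sketch of the mechanism, assuming a plain POSIX socket streamer (illustrative only, not the SST script itself): without the half-close, the receiving side never sees end-of-stream and keeps waiting.

```cpp
// Illustrative POSIX sketch of what "nc -N" does: after EOF on STDIN,
// half-close the socket so the peer's read() returns 0 instead of blocking.
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical helper: forward STDIN to an already-connected socket.
static void stream_stdin_to_socket(int sock_fd)
{
  char buf[4096];
  ssize_t n;
  while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
    if (write(sock_fd, buf, (size_t) n) < 0)   // error handling elided
      return;
  shutdown(sock_fd, SHUT_WR);  // send FIN: the equivalent of netcat's -N
}
```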
-
- 17 May, 2020 1 commit
-
-
Varun Gupta authored
The issue here is that end_of_file for an encrypted temporary IO_CACHE (used by filesort) is updated using lseek. Encryption adds storage overhead and hides it from the caller by recalculating offsets and lengths. Two different IO_CACHEs cannot possibly modify the same file, because the encryption key is randomly generated and stored in the IO_CACHE. So when the tempfiles are encrypted, DO NOT use lseek to change end_of_file.

Further observations about updating end_of_file using lseek:
1) The end_of_file update is only used for binlog index files.
2) The whole point is to update the file length when the file was modified via a different file descriptor.
3) The temporary IO_CACHE files can never be modified via a different file descriptor.
4) For an encrypted temporary IO_CACHE, end_of_file should not be updated with lseek.
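A minimal sketch of the resulting rule, with hypothetical field and function names (the real IO_CACHE layout in mysys differs):

```cpp
// Hypothetical sketch: skip the lseek-based end_of_file update for
// encrypted temporary caches. Names are illustrative, not the mysys API.
struct io_cache_like
{
  bool encrypted;                  // tempfile written through a random key
  unsigned long long end_of_file;  // logical (pre-encryption) file length
  int fd;
};

void update_end_of_file_from_disk(io_cache_like *cache,
                                  unsigned long long lseek_size)
{
  // Encryption adds storage overhead, so the physical size reported by
  // lseek(fd, 0, SEEK_END) is larger than the logical length the cache
  // tracks.  No other descriptor can modify the file (the key exists only
  // inside this cache), so the cached value is already correct.
  if (!cache->encrypted)
    cache->end_of_file= lseek_size;
}
```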
-
- 16 May, 2020 5 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
We will expose some more std::atomic internals in Atomic_counter, so that dict_index_t::lock will support the default assignment operator.
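A hedged sketch of the idea, assuming a simplified Atomic_counter (the real template lives in include/my_atomic_wrapper.h and has more operations): std::atomic itself is not copyable, so copy operations have to be supplied explicitly for enclosing classes to keep their compiler-generated assignment operators.

```cpp
#include <atomic>

// Simplified stand-in for MariaDB's Atomic_counter.
template <typename Type>
class Atomic_counter
{
  std::atomic<Type> m_counter{0};
public:
  Atomic_counter()= default;
  // std::atomic has deleted copy operations; providing these makes a
  // class with an Atomic_counter member (such as one reachable from
  // dict_index_t::lock) assignable with the default operator=.
  Atomic_counter(const Atomic_counter &rhs)
  { m_counter.store(rhs, std::memory_order_relaxed); }
  Atomic_counter &operator=(const Atomic_counter &rhs)
  { m_counter.store(rhs, std::memory_order_relaxed); return *this; }
  operator Type() const { return m_counter.load(std::memory_order_relaxed); }
};
```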
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 15 May, 2020 17 commits
-
-
Marko Mäkelä authored
In commit b1742a5c we forgot FLUSH TABLES, potentially causing errors for MyISAM system tables.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The rw_lock_stats were incorrectly updated. While global statistics have limited usefulness, we cannot remove them from a GA version. This contribution slightly improves performance in write workloads.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
If the InnoDB buffer pool contains many pages for a table or index that is being dropped or rebuilt, and if many of such pages are pointed to by the adaptive hash index, dropping the adaptive hash index may consume a lot of time. The time-consuming operation of dropping the adaptive hash index entries is being executed while the InnoDB data dictionary cache dict_sys is exclusively locked.

It is not actually necessary to drop all adaptive hash index entries at the time a table or index is being dropped or rebuilt. We can let the LRU replacement policy of the buffer pool take care of this gradually. For this to work, we must detach the dict_table_t and dict_index_t objects from the main dict_sys cache, and once the last adaptive hash index entry for the detached table is removed (when the garbage page is evicted from the buffer pool), we can free the dict_table_t and dict_index_t objects.

Related to this, in MDEV-16283, we made ALTER TABLE...DISCARD TABLESPACE skip both the buffer pool eviction and the drop of the adaptive hash index. We shifted the burden to ALTER TABLE...IMPORT TABLESPACE or DROP TABLE. We can remove the eviction from DROP TABLE. We must retain the eviction in the ALTER TABLE...IMPORT TABLESPACE code path, so that in case the discarded table is being re-imported with the same tablespace identifier, the fresh data from the imported tablespace will replace any stale pages in the buffer pool.

rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can no longer be interrupted inside InnoDB.

fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(), fseg_free_page_low(), fseg_free_extent(): Remove the parameter that specifies whether the adaptive hash index should be dropped.

btr_search_lazy_free(): Lazily free an index when the last reference to it is dropped from the adaptive hash index.

buf_pool_clear_hash_index(): Declare static, and move to the same compilation unit with the bulk of the adaptive hash index code.

dict_index_t::clone(), dict_index_t::clone_if_needed(): Clone an index that is being rebuilt while adaptive hash index entries exist. The original index will be inserted into dict_table_t::freed_indexes and dict_index_t::set_freed() will be called.

dict_index_t::set_freed(), dict_index_t::freed(): Note that, or check whether, the index has been freed. We will use the impossible page number 1 to denote this condition.

dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().

dict_index_t::detach_columns(): Move the assignment n_fields=0 to ha_innobase_inplace_ctx::clear_added_indexes(). We must have access to the columns when freeing the adaptive hash index. Note: dict_table_t::v_cols[] will remain valid. If virtual columns are dropped or added, the table definition will be reloaded in ha_innobase::commit_inplace_alter_table().

buf_page_mtr_lock(): Drop a stale adaptive hash index if needed.

We will also reduce the number of btr_get_search_latch() calls and enclose some more code inside #ifdef BTR_CUR_HASH_ADAPT in order to benefit cmake -DWITH_INNODB_AHI=OFF.
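A hedged sketch of the lazy-free idea (illustrative names; the actual logic spans btr_search_lazy_free(), dict_index_t::n_ahi_pages() and dict_table_t::freed_indexes): a detached index object stays alive only as long as the adaptive hash index still references some of its pages.

```cpp
#include <atomic>

// Illustrative stand-in for a detached dict_index_t.
struct index_like
{
  std::atomic<unsigned> ahi_pages{0};  // cf. dict_index_t::n_ahi_pages()
  bool freed= false;                   // cf. dict_index_t::set_freed()
};

// Called when the AHI entries for one buffer-pool page of this index are
// removed, e.g. because the LRU policy evicted the garbage page.
void ahi_page_dropped(index_like *index)
{
  // When the last referenced page goes away and the index was already
  // detached from dict_sys, the object can finally be freed;
  // cf. btr_search_lazy_free().
  if (index->ahi_pages.fetch_sub(1) == 1 && index->freed)
    delete index;
}
```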
-
Marko Mäkelä authored
When neither MSAN nor Valgrind is enabled, declare Field::mark_unused_memory_as_defined() as an empty inline function instead of declaring it as a virtual function.
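A sketch of the pattern, using a stand-in class (the real declaration is on Field in sql/field.h, and the configuration macros may differ):

```cpp
// Assumed instrumentation guard; MariaDB's actual macro names may differ.
#if defined(HAVE_valgrind) || defined(__SANITIZE_MEMORY__)
# define INSTRUMENTED_MEMORY 1
#endif

class FieldLike
{
public:
#ifdef INSTRUMENTED_MEMORY
  // Instrumented builds: the call does real work and stays overridable.
  virtual void mark_unused_memory_as_defined()
  { /* tell MSAN/Valgrind that the unused bytes are defined */ }
  virtual ~FieldLike()= default;
#else
  // Plain builds: an empty inline function instead of a virtual one,
  // so every call site compiles away to nothing.
  void mark_unused_memory_as_defined() {}
#endif
};
```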
-
Eugene Kosov authored
-
Aleksey Midenkov authored
The same array instance was shared by two Item_func_in instances. The first Item_func_in instance is freed on table close, the second one on cleanup_after_query(). get_copy() relies on the copy constructor for copying an item, and hence performs a shallow copy with the default copy constructor. Use build_clone() for a deep copy of Item_func_in.
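A minimal illustration of the bug (not the actual Item hierarchy): the compiler-generated copy constructor copies only the array pointer, so the two owners free the same allocation twice, while a deep clone duplicates the payload.

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative stand-in for Item_func_in and its constant-array member.
struct item_in_like
{
  std::size_t n= 0;
  int *values= nullptr;              // owned array of IN-list constants

  item_in_like *get_copy() const     // shallow: copies only the pointer
  { return new item_in_like(*this); }

  item_in_like *build_clone() const  // deep: duplicates the payload too
  {
    item_in_like *clone= new item_in_like(*this);
    clone->values= new int[n];
    std::copy(values, values + n, clone->values);
    return clone;
  }

  // Double delete if a shallow copy and the original both reach this.
  ~item_in_like() { delete[] values; }
};
```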
-
Monty authored
-
Monty authored
This allows us to remove our own declarations of functions and structures that are declared in the history.h file.
-
Monty authored
Most of the violations came from: sel_arg_range_seq_next(void*, st_key_multi_range*) (opt_range_mrr.cc:342)
-
Monty authored
MDEV-22073 MSAN use-of-uninitialized-value in collect_statistics_for_table()

Other things: innodb.analyze_table was changed to mainly test statistics collection. This was discussed with Marko.
-
Varun Gupta authored
The issue here was that when the schema was changed, the value of THD::server_status is OR-ed with SERVER_SESSION_STATE_CHANGED. For custom aggregate functions, we currently check whether server_status is equal to SERVER_STATUS_LAST_ROW_SENT and, if so, terminate the execution of the custom aggregate function, as there are no more rows to fetch. Instead, the check should be whether the SERVER_STATUS_LAST_ROW_SENT bit is set in the server status.
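A small sketch of the corrected check; the constants mirror the flag values in include/mysql_com.h:

```cpp
#include <cstdint>

constexpr std::uint32_t SERVER_STATUS_LAST_ROW_SENT= 1U << 7;
constexpr std::uint32_t SERVER_SESSION_STATE_CHANGED= 1U << 14;

bool last_row_sent(std::uint32_t server_status)
{
  // Wrong: fails as soon as any other status bit (such as
  // SERVER_SESSION_STATE_CHANGED after a schema change) is also set.
  //   return server_status == SERVER_STATUS_LAST_ROW_SENT;

  // Right: test only the bit we care about.
  return (server_status & SERVER_STATUS_LAST_ROW_SENT) != 0;
}
```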
-
Monty authored
Other things:
- Removed innodb_encryption_tables.test from valgrind, as it takes a REALLY long time.
-
Jan Lindström authored
Problem was that the trx->lock.was_chosen_as_wsrep_victim variable was not set back to false after it was set to true.

wsrep_thd_bf_abort(): Add assertions for correct mutex status and take the necessary mutexes before calling thd->awake_no_mutex().

innobase_rollback_trx(): Reset trx->lock.was_chosen_as_wsrep_victim.

wsrep_abort_slave_trx(): Removed unused function.

wsrep_innobase_kill_one_trx(): Added a function comment, removed unnecessary parameters, and added debug assertions to enforce correct usage. Added more debug output to help with error analysis.

wsrep_abort_transaction(): Added debug assertions and removed unused variables.

trx0trx.h: Removed the assert_trx_is_free macro and replaced it with an assert_freed() member function.

trx_create(): Use the above assert_freed() and initialize wsrep variables.

trx_free(): Use assert_freed().

trx_t::commit_in_memory(): Reset lock.was_chosen_as_wsrep_victim.

trx_rollback_for_mysql(): Reset trx->lock.was_chosen_as_wsrep_victim.

Add test case galera_bf_kill.
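A hedged sketch of the invariant being restored (stand-in types, not the real trx_t): once a transaction commits or rolls back, the victim flag must be cleared so a reused transaction object is not mistaken for a BF-abort victim.

```cpp
// Illustrative stand-in for trx_t and its wsrep victim flag.
struct trx_like
{
  struct { bool was_chosen_as_wsrep_victim= false; } lock;

  void commit_in_memory()        // cf. trx_t::commit_in_memory()
  {
    /* ... commit work ... */
    lock.was_chosen_as_wsrep_victim= false;  // reset before reuse
  }

  void rollback()                // cf. trx_rollback_for_mysql()
  {
    /* ... rollback work ... */
    lock.was_chosen_as_wsrep_victim= false;  // reset before reuse
  }
};
```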
-
Alexander Barkov authored
fix_fields_for_tvc() could call fix_fields() for Items that have already been fixed. Change fix_fields() to fix_fields_if_needed().
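A tiny sketch of the difference, with a stand-in for Item (the real methods are declared in sql/item.h):

```cpp
// Illustrative stand-in for Item's fixing protocol.
struct item_like
{
  bool fixed= false;

  bool fix_fields()            // must run at most once per Item
  {
    /* resolve columns, determine types, ... */
    fixed= true;
    return false;              // MariaDB convention: true means error
  }

  bool fix_fields_if_needed()  // safe to call repeatedly
  { return fixed ? false : fix_fields(); }
};
```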
-
- 14 May, 2020 15 commits
-
-
Varun Gupta authored
For the case when the optimizer does the IN-EXISTS transformation, the equality condition is injected into the WHERE or HAVING clause of the subquery. If the select list of the subquery has a reference to the parent select, make sure to use the reference and not the original item.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
On a checksum failure of a ROW_FORMAT=COMPRESSED page, buf_LRU_free_one_page() would invoke buf_LRU_block_remove_hashed(), which would read the uncompressed page frame even though it was never initialized. With bad enough luck, fil_page_get_type(page) could return an unrecognized value and cause the server to abort.

buf_page_io_complete(): On corruption of a ROW_FORMAT=COMPRESSED page, zero-fill the uncompressed page frame.
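A minimal sketch of the hardening (simplified; the real change is inside buf_page_io_complete()), assuming the default 16 KiB page size:

```cpp
#include <cstring>

const size_t SRV_PAGE_SIZE= 16384;  // assumed innodb_page_size default

// On checksum failure of a ROW_FORMAT=COMPRESSED page, zero-fill the
// uncompressed frame so readers such as fil_page_get_type() see a sane
// (zero) page type instead of uninitialized garbage.
void zerofill_corrupted_frame(unsigned char *uncompressed_frame)
{
  std::memset(uncompressed_frame, 0, SRV_PAGE_SIZE);
}
```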
-
Vladislav Vaintroub authored
When the server is compiled with recent VS2019, the executables have a dependency on vcruntime140_1.dll. While we include the VC redistributable merge modules in our MSI package, those merge modules were stale (taken from an older VS version, 2017). VS2019 brought a new DLL dependency by introducing new exception handling (https://devblogs.microsoft.com/cppblog/making-cpp-exception-handling-smaller-x64), so the old MSMs were no longer enough.

The fix is to change the logic in win/packaging/CMakeLists.txt to look up the correct, new MSMs. The bug only affects 10.4, as we compile with a static CRT before 10.4, and partly statically (just the vcruntime stub is statically linked, but not the UCRT) after 10.4.

For the fix to work, some changes were also required on the build machine (vs_installer: modify the VS2019 installation and add the Individual Component "C++ 2019 Redistributable MSMs").
-
Marko Mäkelä authored
This essentially reverts commit b393e2cb. The leak might have been fixed, but because the DEBUG_SYNC instrumentation for InnoDB purge threads was reverted in 10.5 commit 5e62b6a5 as part of introducing a thread pool, it is easiest to revert the entire change.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
TRUNCATE(decimal_5_5) erroneously tried to create a DECIMAL(0,0) column. Create a DECIMAL(1,0) column instead.
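A hedged sketch of the arithmetic (hypothetical helper, not the actual Item code): removing all fractional digits from DECIMAL(5,5) leaves zero integer digits, so the result precision must be clamped to at least one.

```cpp
#include <algorithm>

// Hypothetical: compute the precision of TRUNCATE(arg, 0) for a decimal.
unsigned truncate_result_precision(unsigned arg_precision, unsigned arg_scale)
{
  unsigned integer_digits= arg_precision - arg_scale;  // 5 - 5 == 0
  return std::max(1u, integer_digits);                 // DECIMAL(1,0), not (0,0)
}
```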
-
Krunal Bauskar authored
There are multiple inconsistencies and incorrect ways in which rw-lock stats are calculated:
- Shared rw-lock stats: the "rounds" counter is incremented only once for N rounds done in a spin-cycle.
- All rw-lock stats: if the spin-cycle is short-circuited, then attempts are re-counted. [If the spin-cycle is interrupted before it completes srv_n_spin_wait_rounds (default 30) rounds, spin_count is incremented to account for this. If the thread resumes the spin-cycle (due to unavailability of the locks) and is again interrupted or completes, spin_count is again incremented with the total count, failing to adjust for the previous increment.]
- S/X rw-lock stats: the spin_loop counter is not incremented at all; instead it is reported as 0 (in SHOW ENGINE output) and the division to calculate spin-rounds per spin-loop is adjusted. As per the original semantics, the spin_loop counter should be incremented once per spin-loop execution.
- SX rw-lock stats: SX locks increment the spin_loop counter, but instead of incrementing it once per spin-loop invocation, they do it multiple times based on how many times the spin-loop flow is repeated for the same instance post os-wait.
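A hedged sketch of the intended accounting (illustrative names, not the actual rw-lock code): one spin_loop increment per invocation, and the rounds actually performed added exactly once at the end, even if the loop was cut short.

```cpp
#include <atomic>

struct rw_lock_stats_like
{
  std::atomic<unsigned long> spin_loop_count{0};   // one per spin_loop call
  std::atomic<unsigned long> spin_round_count{0};  // total rounds performed
};

void spin_wait(rw_lock_stats_like &stats, bool (*lock_available)())
{
  const unsigned srv_n_spin_wait_rounds= 30;  // documented default
  stats.spin_loop_count.fetch_add(1, std::memory_order_relaxed);

  unsigned rounds= 0;
  while (rounds < srv_n_spin_wait_rounds && !lock_available())
    ++rounds;

  // Count rounds once, after the loop, so an interrupted-and-resumed
  // spin cycle cannot double-count earlier attempts.
  stats.spin_round_count.fetch_add(rounds, std::memory_order_relaxed);
}
```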
-
Alexander Barkov authored
-
Alexander Barkov authored
-