- 20 May, 2020 11 commits
-
-
Sujatha authored
MDEV-22451: SIGSEGV in __memmove_avx_unaligned_erms/memcpy from _my_b_write on CREATE after RESET MASTER

Merge branch '10.1' into 10.2
-
Marko Mäkelä authored
ins_node_create_entry_list(): Create dummy empty tuples for corrupted or incomplete indexes, to avoid dereferencing a NULL dict_field_t::col pointer in row_build_index_entry_low(). This issue was found by a crash in the test gcol.innodb_virtual_basic when merging the fix to 10.5.
-
Sujatha authored
MDEV-22451: SIGSEGV in __memmove_avx_unaligned_erms/memcpy from _my_b_write on CREATE after RESET MASTER

Analysis:
RESET MASTER TO # deletes all binary log files listed in the index file, resets the binary log index file to be empty, and creates a new binary log with number #. When the user-provided binary log number is greater than the maximum allowed value 2147483647, the server fails to generate a new binary log. The RESET MASTER statement marks the binlog closure status as 'LOG_CLOSE_TO_BE_OPENED' and exits. Statements which follow RESET MASTER and try to write to the binary log then find log_state != LOG_CLOSED, proceed to write to the binary log cache, and the server crashes.

Fix:
During MYSQL_BIN_LOG open, if generation of the new binary log name fails, mark "log_state" as "LOG_CLOSED". Subsequent statements will then find the binary log closed and will skip writing to it.
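A minimal sketch of the scenario described above; the table name t1 is hypothetical and used only for illustration:

    -- Binary log numbers above 2147483647 are not allowed, so the server
    -- fails to generate a new binary log for this statement.
    RESET MASTER TO 2147483648;
    -- Before the fix, a following statement that writes to the binary log
    -- found log_state != LOG_CLOSED and crashed while writing to the binary
    -- log cache; with the fix the log is marked LOG_CLOSED and the write is
    -- skipped.
    CREATE TABLE t1 (a INT) ENGINE=InnoDB;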
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
Marko Mäkelä authored
-
Thirunarayanan Balathandayuthapani authored
prepare_inplace_alter_table_dict(): In the error handling, relax a debug assertion for the case that we did not execute dict_stats_wait_bg_to_stop_using_table() yet.
-
Marko Mäkelä authored
For no good reason, innodb_encryption_threads was limited to 4,294,967,295. Expectedly, the server would crash if such an insane value was specified. Let us limit the maximum to 255. The encryption threads are not doing much useful work. They are basically only dirtying pages by performing dummy writes via the redo log. The encryption key rotation or the in-place addition or removal of encryption will take place in the page cleaner. In a quick test on a 20-core CPU (40 threads in total), the sweet spot on an otherwise idle server seemed to be innodb_encryption_threads=16 for the test encryption.encrypt_and_grep. The new limit 255 should be more than enough for even bigger servers.
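As a rough illustration (a hypothetical tuning session, not part of the commit), the variable can be adjusted at runtime up to the new maximum of 255:

    -- 16 was the sweet spot in the commit's informal 40-thread test;
    -- the best value for a given server is workload-dependent.
    SET GLOBAL innodb_encryption_threads = 16;
    -- With this change the accepted maximum is 255 instead of 4294967295.
    SHOW GLOBAL VARIABLES LIKE 'innodb_encryption_threads';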
-
Yury Kurlykov authored
The fts_indexes field in fts_update_t was never used, so replace fts_update_t with doc_id_t in the code.
-
Marko Mäkelä authored
-
Jan Lindström authored
MDEV-18838: galera.galera_toi_truncate: Test failure: mysqltest: query 'reap' succeeded - should have failed with errno 1213

Test cleanup.
-
- 19 May, 2020 12 commits
-
-
Andrei Elkin authored
This is a new test from upstream that did not expect the correct value in the command slot of the Dump thread when the latter gets killed. The test is changed to expect the "Killed" string as the command in SHOW PROCESSLIST, as it is supposed to show when a thread gets killed.
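A hedged sketch of the expected behaviour; the thread id 123 is a placeholder to be taken from the preceding query:

    -- On the master, locate the dump thread that serves a replica.
    SELECT ID, COMMAND FROM information_schema.PROCESSLIST
     WHERE COMMAND LIKE 'Binlog Dump%';
    -- Kill it (123 is a placeholder id).
    KILL 123;
    -- The killed dump thread is expected to report Command = 'Killed'
    -- in the process list until it actually exits.
    SHOW PROCESSLIST;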
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
Marko Mäkelä authored
dict_index_remove_from_cache_low(): Add a missing #ifdef around dict_index_t::freed().
-
Marko Mäkelä authored
This is a regression due to MDEV-16376 commit 8dc70c86. To make dict_index_t::detach_columns() idempotent, we cleared dict_index_t::n_fields. But this could cause trouble with purge after a secondary index creation failed (not even involving virtual columns).

A better way is to clear the dict_field_t::col pointers that point to virtual columns which are being freed because index creation was aborted on an index that depends on a virtual column.

Note: the v_cols[] of an existing dict_table_t object will never be modified. If any virtual columns are added or removed, ha_innobase::commit_inplace_alter_table() would invoke dict_table_remove_from_cache() and reload the table to dict_sys. Index creation is a special case where the dict_index_t points to virtual columns that do not yet exist in dict_table_t.
-
Rasmus Johansson authored
-
Monty authored
-
Alexander Barkov authored
Removing a wrong DBUG_ASSERT: when Item_param gets "unfixed" in cleanup(), its "fixed" member gets assigned to false, while item_type keeps its value. So the assert was wrong. Perhaps, instead of removing the assert, it would have been possible to reset item_type to NO_VALUE in cleanup(). But this is not very important: it is implemented in 10.4 in a better way: Item_param::is_fixed() always returns true and it does not need to be "unfixed".
-
Vlad Lesin authored
error is logged

The fix is to set a flag in ib::error::~error() and check it in mariabackup.

ib::error::error() is replaced with ib::warn::warn() in AIO::linux_create_io_ctx() for two reasons:
1) if we leave it as is, mariabackup MTR tests will fail with the --mem option, because Linux AIO cannot be used on tmpfs;
2) when Linux AIO cannot be initialized, InnoDB falls back to simulated AIO, so such a situation is not a fatal error and should be treated as a warning.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
lock_table_locks_lookup(): Relax the assertion. Locks must not exist while online secondary index creation is in progress. However, if CREATE UNIQUE INDEX has not been committed yet, but the index creation has been completed, concurrent DML transactions may acquire record locks on the index. Furthermore, such concurrent DML may cause duplicate key violation, causing the DDL operation to be rolled back. After that, the online_status may be ONLINE_INDEX_ABORTED or ONLINE_INDEX_ABORTED_DROPPED. So, the debug assertion may only forbid the state ONLINE_INDEX_CREATION.
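An illustrative two-session sequence consistent with the description above; the table and column names are hypothetical:

    -- Session 1: build a unique secondary index online; the index build may
    -- complete while the ALTER is not yet committed.
    ALTER TABLE t1 ADD UNIQUE INDEX u1 (b), ALGORITHM=INPLACE, LOCK=NONE;

    -- Session 2, concurrently: DML may acquire record locks on the new index
    -- and may introduce a duplicate key, causing the DDL above to be rolled
    -- back (online status ONLINE_INDEX_ABORTED or ONLINE_INDEX_ABORTED_DROPPED).
    INSERT INTO t1 (a, b) VALUES (100, 1), (101, 1);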
-
Monty authored
MDEV-18457 Assertion `(bitmap->map + (bitmap->full_head_size/6*6)) <= full_head_end' failed

The problem was that full_head_size was not calculated correctly when insert_order was enforced, which is the case for SHOW commands.
-
- 18 May, 2020 5 commits
-
-
Andrei Elkin authored
The assert was caused by early cleanup of a user variable that participates in BINLOG @var,@var, where it is used twice. The base code did not expect its value to be cleared prematurely. Fixed by relocating the user variable destruction to after the operations with its value are over.
-
Marko Mäkelä authored
In commit ad6171b9 (MDEV-22456) we removed the acquisition of the adaptive hash index latch from the caller of btr_search_update_hash_ref(). The tests innodb.innodb_buffer_pool_resize_with_chunks and innodb.innodb_buffer_pool_resize would occasionally fail starting with 10.3, due to MDEV-12288 causing more purge activity during the test. btr_search_update_hash_ref(): After acquiring the adaptive hash index latch, check that the adaptive hash index is still enabled on the page.
-
Julius Goryavsky authored
The problem is caused by the operation of netcat streamer and does not appear on systems where socat is installed. We need to add the "-N" option for netcat to call shutdown() on the socket when receiving EOF from STDIN.
-
Marko Mäkelä authored
-
Daniel Black authored
Localhost, depending on the platform, can return any 127.0.0.1/8 address.
-
- 17 May, 2020 3 commits
-
-
Otto Kekäläinen authored
Also clean away dead code that will never have any use on the 10.2 branch.
-
Otto Kekäläinen authored
There is a 4 MB hard limit on Travis-CI build log output and the build output needs to stay below that. Silencing the 'make install' step gets rid of a lot of "Installing.." and "Missing.." lines, and removing all mysql-test files makes the dh_missing warnings much shorter. Cherry-picked from 41952c85.
-
Otto Kekäläinen authored
Backported from 30b44aae.
-
- 16 May, 2020 1 commit
-
-
Alexander Barkov authored
Also, adding 10.2 related changes for MDEV-22579
-
- 15 May, 2020 8 commits
-
-
Marko Mäkelä authored
In commit b1742a5c we forgot FLUSH TABLES, potentially causing errors for MyISAM system tables.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The rw_lock_stats were incorrectly updated. While global statistics have limited usefulness, we cannot remove them from a GA version. This contribution slightly improves performance in write workloads.
-
Alexander Barkov authored
The code erroneously allowed both:

  INSERT INTO t1 (vcol) VALUES (DEFAULT);
  INSERT INTO t1 (vcol) VALUES (DEFAULT(non_virtual_column));

The former is OK, but the latter is not. Adding a new virtual method in Item:

  virtual bool vcol_assignment_allowed_value() const { return false; }

Item_null, Item_param and Item_default_value override it. Item_default_value overrides it in such a way that it:
- allows DEFAULT
- disallows DEFAULT(col)
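A minimal sketch of the distinction, assuming a hypothetical table with one virtual column:

    CREATE TABLE t1 (
      a INT,
      vcol INT GENERATED ALWAYS AS (a + 1) VIRTUAL
    ) ENGINE=InnoDB;

    -- Allowed: DEFAULT is acceptable as a value for a virtual column.
    INSERT INTO t1 (vcol) VALUES (DEFAULT);

    -- Rejected after the fix: DEFAULT(a) is a concrete value and cannot be
    -- assigned to a generated (virtual) column.
    INSERT INTO t1 (vcol) VALUES (DEFAULT(a));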
-
Marko Mäkelä authored
If the InnoDB buffer pool contains many pages for a table or index that is being dropped or rebuilt, and if many of such pages are pointed to by the adaptive hash index, dropping the adaptive hash index may consume a lot of time. The time-consuming operation of dropping the adaptive hash index entries is being executed while the InnoDB data dictionary cache dict_sys is exclusively locked.

It is not actually necessary to drop all adaptive hash index entries at the time a table or index is being dropped or rebuilt. We can let the LRU replacement policy of the buffer pool take care of this gradually. For this to work, we must detach the dict_table_t and dict_index_t objects from the main dict_sys cache, and once the last adaptive hash index entry for the detached table is removed (when the garbage page is evicted from the buffer pool) we can free the dict_table_t and dict_index_t object.

Related to this, in MDEV-16283, we made ALTER TABLE...DISCARD TABLESPACE skip both the buffer pool eviction and the drop of the adaptive hash index. We shifted the burden to ALTER TABLE...IMPORT TABLESPACE or DROP TABLE. We can remove the eviction from DROP TABLE. We must retain the eviction in the ALTER TABLE...IMPORT TABLESPACE code path, so that in case the discarded table is being re-imported with the same tablespace identifier, the fresh data from the imported tablespace will replace any stale pages in the buffer pool.

rpl.rpl_failed_drop_tbl_binlog: Remove the test. DROP TABLE can no longer be interrupted inside InnoDB.

fseg_free_page(), fseg_free_step(), fseg_free_step_not_header(), fseg_free_page_low(), fseg_free_extent(): Remove the parameter that specifies whether the adaptive hash index should be dropped.

btr_search_lazy_free(): Lazily free an index when the last reference to it is dropped from the adaptive hash index.

buf_pool_clear_hash_index(): Declare static, and move to the same compilation unit with the bulk of the adaptive hash index code.

dict_index_t::clone(), dict_index_t::clone_if_needed(): Clone an index that is being rebuilt while adaptive hash index entries exist. The original index will be inserted into dict_table_t::freed_indexes and dict_index_t::set_freed() will be called.

dict_index_t::set_freed(), dict_index_t::freed(): Note that or check whether the index has been freed. We will use the impossible page number 1 to denote this condition.

dict_index_t::n_ahi_pages(): Replaces btr_search_info_get_ref_count().

dict_index_t::detach_columns(): Move the assignment n_fields=0 to ha_innobase_inplace_ctx::clear_added_indexes(). We must have access to the columns when freeing the adaptive hash index. Note: dict_table_t::v_cols[] will remain valid. If virtual columns are dropped or added, the table definition will be reloaded in ha_innobase::commit_inplace_alter_table().

buf_page_mtr_lock(): Drop a stale adaptive hash index if needed.

We will also reduce the number of btr_get_search_latch() calls and enclose some more code inside #ifdef BTR_CUR_HASH_ADAPT in order to benefit cmake -DWITH_INNODB_AHI=OFF.
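For context, a brief sketch of the DISCARD/IMPORT flow referenced above, using a hypothetical table t1; with this change, eager page eviction remains only on the IMPORT path:

    -- Hypothetical table used for illustration only.
    CREATE TABLE t1 (id INT PRIMARY KEY, val INT, KEY(val)) ENGINE=InnoDB;
    -- DISCARD no longer eagerly evicts pages or drops adaptive hash index
    -- entries; stale pages are left to the buffer pool LRU policy.
    ALTER TABLE t1 DISCARD TABLESPACE;
    -- (copy a matching t1.ibd and t1.cfg into the database directory here)
    -- IMPORT still evicts stale pages, so re-importing with the same
    -- tablespace identifier replaces any stale pages in the buffer pool.
    ALTER TABLE t1 IMPORT TABLESPACE;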
-
Marko Mäkelä authored
When neither MSAN nor Valgrind is enabled, declare Field::mark_unused_memory_as_defined() as an empty inline function, instead of declaring it as a virtual function.
-
Eugene Kosov authored
-
Monty authored
MDEV-22073 MSAN use-of-uninitialized-value in collect_statistics_for_table()

Other things: innodb.analyze_table was changed to mainly test statistics collection. Was discussed with Marko.
-