- 26 May, 2020 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
During a column reorder table rebuild, rollback of an insert fails. The reason is that InnoDB tries to fetch the column position from the new clustered index, and that position exceeds the number of fields in the default column value tuple. InnoDB should instead use the table column position when searching for the default column value.
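
A simplified sketch of the position mix-up described above, using plain arrays instead of InnoDB's row and dictionary structures (all names and numbers here are illustrative): the defaults tuple has one entry per table column, so it must be addressed by table column position, not by the column's position inside the rebuilt clustered index.

    // Simplified illustration only, not the actual InnoDB code.
    #include <cstdio>
    #include <vector>

    int main()
    {
      // Table columns c0, c1, c2 -> default values, indexed by *table* position.
      std::vector<const char*> defaults= {"d0", "d1", "d2"};

      // After a column reorder, a column's position in the new clustered index
      // can be larger than the number of entries in the defaults tuple.
      struct IndexField { unsigned index_pos; unsigned table_pos; };
      std::vector<IndexField> new_clustered= {{3, 2}, {4, 0}, {5, 1}};

      for (const IndexField& f : new_clustered)
      {
        // Wrong: defaults[f.index_pos] would read past the end of the tuple.
        // Right: look the default up by the table column position.
        std::printf("index pos %u -> default %s\n",
                    f.index_pos, defaults[f.table_pos]);
      }
      return 0;
    }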
-
- 25 May, 2020 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
There was a race condition where the connection of the victim of a KILL statement is disconnected while the KILL statement is executing. As a side effect of this fix, we will make XA PREPARE transactions immune to KILL statements.

Starting with MariaDB 10.2, we have a pool of trx_t objects. trx_free() would only free memory to the pool. We poison the contents of freed objects in the pool in order to catch misuse.

trx_free(): Unpoison also trx->mysql_thd and trx->state. This is to counter the poisoning of *trx in trx_pools->mem_free(). Unpoison only on AddressSanitizer or Valgrind, but not on MemorySanitizer.

Pool: Unpoison allocated objects only on AddressSanitizer or Valgrind, but not on MemorySanitizer.

innobase_kill_query(): Properly protect trx, acquiring also trx_sys_t::mutex and checking trx_t::mysql_thd and trx_t::state.
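
A minimal sketch of the poisoning pattern described in this commit, written against the public AddressSanitizer interface rather than MariaDB's Pool<trx_t> and trx_t types (Trx, TrxPool, POISON and UNPOISON are illustrative stand-ins, not the actual InnoDB names). Compile with -fsanitize=address to see the poisoning take effect; without a sanitizer the macros become no-ops.

    #include <cstddef>
    #include <vector>

    #ifndef __has_feature
    # define __has_feature(x) 0          // compilers without __has_feature
    #endif
    #if defined(__SANITIZE_ADDRESS__) || __has_feature(address_sanitizer)
    # include <sanitizer/asan_interface.h>
    # define POISON(p, n)   ASAN_POISON_MEMORY_REGION(p, n)
    # define UNPOISON(p, n) ASAN_UNPOISON_MEMORY_REGION(p, n)
    #else
    # define POISON(p, n)   ((void) 0)   // no-op without AddressSanitizer
    # define UNPOISON(p, n) ((void) 0)
    #endif

    struct Trx {                  // stand-in for trx_t
      void* mysql_thd= nullptr;   // fields a concurrent KILL may legitimately read
      int   state= 0;
      char  payload[256];         // everything else must stay untouchable when freed
    };

    struct TrxPool {              // stand-in for the trx_t object pool
      std::vector<Trx*> free_list;

      void put(Trx* trx) {
        POISON(trx, sizeof *trx);                         // poison the freed object ...
        UNPOISON(&trx->mysql_thd, sizeof trx->mysql_thd); // ... but keep the fields
        UNPOISON(&trx->state, sizeof trx->state);         //     checked by KILL readable
        free_list.push_back(trx);
      }

      Trx* get() {
        Trx* trx= free_list.back();
        free_list.pop_back();
        UNPOISON(trx, sizeof *trx);                       // object is live again
        return trx;
      }
    };

    int main() {
      TrxPool pool;
      Trx* trx= new Trx;
      pool.put(trx);
      int state= trx->state;      // OK: explicitly unpoisoned field
      // trx->payload[0]= 'x';    // would be reported by AddressSanitizer
      delete pool.get();
      return state;
    }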
-
Marko Mäkelä authored
commit cf52dd17 failed to adjust the result of the test main.mysqldump.
-
Varun Gupta authored
A temporary table is needed for window function computation. However, if only a named window specification (NAMED WINDOW SPEC) is used and there is no window function, there is no need to create a temporary table, since there is no window function computation stage.
-
- 24 May, 2020 1 commit
-
-
Oleksandr Byelkin authored
Added the format parameter %T for strings that should be visibly truncated.
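
A hedged, standalone sketch of what "visibly truncated" can mean, assuming %T behaves like %s but cuts the value to the requested width and marks the cut with "..."; the helper below is illustrative and is not the my_vsnprintf() implementation.

    #include <cstddef>
    #include <cstdio>
    #include <cstring>
    #include <string>

    // Keep at most `width` characters; if the value does not fit, cut it and
    // append "..." so the truncation is visible to the reader of the message.
    static std::string truncate_visibly(const char* src, size_t width)
    {
      size_t len= std::strlen(src);
      if (len <= width)
        return std::string(src, len);
      static const char suffix[]= "...";
      size_t keep= width > sizeof(suffix) - 1 ? width - (sizeof(suffix) - 1) : 0;
      return std::string(src, keep) + suffix;
    }

    int main()
    {
      std::printf("%s\n", truncate_visibly("short value", 32).c_str());
      std::printf("%s\n",
                  truncate_visibly("a very long value that would flood the error log",
                                   16).c_str());
      return 0;
    }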
-
- 23 May, 2020 2 commits
-
-
Monty authored
-
Monty authored
MDEV-21398 Deadlock (server hang) or assertion failure in Diagnostics_area::set_error_status upon ALTER under lock

This failure could only happen if one locked the same table multiple times and then did an ALTER TABLE on the table.

The major change is to replace all instances of table->m_needs_reopen= true; with table->mark_table_for_reopen();

The main fix for the problem was to ensure that we mark all instances of the table in the locked_table_list and, when we reopen the tables, first close all tables before reopening and locking them.

Other things:
- Don't call thd->locked_tables_list.reopen_tables if there are no tables marked for reopen. (performance)
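
A minimal sketch of the "mark every instance, close them all, then reopen" idea from this fix; LockedTables, TableInstance and the method names are illustrative stand-ins, not the server's Locked_tables_list API.

    #include <iostream>
    #include <string>
    #include <vector>

    struct TableInstance {
      std::string name;
      bool open= true;
      bool needs_reopen= false;
    };

    struct LockedTables {
      std::vector<TableInstance> tables;

      // In the spirit of table->mark_table_for_reopen(): mark *every* instance
      // of the table, not only the one the ALTER is running on.
      void mark_table_for_reopen(const std::string& name) {
        for (auto& t : tables)
          if (t.name == name)
            t.needs_reopen= true;
      }

      bool has_tables_to_reopen() const {
        for (const auto& t : tables)
          if (t.needs_reopen)
            return true;
        return false;
      }

      void reopen_tables() {
        for (auto& t : tables)            // phase 1: close all marked instances
          if (t.needs_reopen)
            t.open= false;
        for (auto& t : tables)            // phase 2: reopen and re-lock them
          if (t.needs_reopen) {
            t.open= true;
            t.needs_reopen= false;
          }
      }
    };

    int main() {
      LockedTables locked{{{"t1"}, {"t1"}, {"t2"}}};   // t1 is locked twice
      locked.mark_table_for_reopen("t1");
      if (locked.has_tables_to_reopen())   // skip the work when nothing is marked
        locked.reopen_tables();
      int open_count= 0;
      for (const auto& t : locked.tables)
        if (t.open)
          ++open_count;
      std::cout << "open instances after reopen: " << open_count << "\n";
      return 0;
    }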
-
- 22 May, 2020 4 commits
-
-
Monty authored
MDEV-22002 Assertion `!is_set() || (m_status == DA_OK_BULK && is_bulk_op())' failed upon CREATE TEMPORARY SEQUENCE under XA
-
Alexander Barkov authored
MDEV-22111 ERROR 1064 & 1033 and SIGSEGV on CREATE TABLE w/ various charsets on 10.4/5 optimized builds | Assertion `(uint) (table_check_constraints - share->check_constraints) == (uint) (share->table_check_constraints - share->field_check_constraints)' failed

Additional 10.2-specific tests (with JSON).
-
Alexander Barkov authored
-
Alexander Barkov authored
MDEV-22111 ERROR 1064 & 1033 and SIGSEGV on CREATE TABLE w/ various charsets on 10.4/5 optimized builds | Assertion `(uint) (table_check_constraints - share->check_constraints) == (uint) (share->table_check_constraints - share->field_check_constraints)' failed

The code incorrectly assumed in multiple places that TYPELIB values cannot have 0x00 bytes inside. In fact they can:
  CREATE TABLE t1 (a ENUM(0x61, 0x0062) CHARACTER SET BINARY);

Note, the TYPELIB value encoding used in FRM is ambiguous about 0x00, so this fix is partial. It fixes 0x00 bytes in many (but not all) places:

- In the middle or at the end of a value:
    CREATE TABLE t1 (a ENUM(0x6100) ...);
    CREATE TABLE t1 (a ENUM(0x610062) ...);
- At the beginning of the first value:
    CREATE TABLE t1 (a ENUM(0x0061));
    CREATE TABLE t1 (a ENUM(0x0061), b ENUM('b'));
- At the beginning of the second (and following) value of the *last* ENUM/SET in the table:
    CREATE TABLE t1 (a ENUM('a',0x0061));
    CREATE TABLE t1 (a ENUM('a'), b ENUM('b',0x0061));

However, it does not fix the case where a 0x00 byte is at the beginning of a value of a non-last ENUM/SET; this still causes an error:
    CREATE TABLE t1 (a ENUM('a',0x0061), b ENUM('b'));
    ERROR 1033 (HY000): Incorrect information in file: './test/t1.frm'
This is an ambiguous case and will be fixed separately; a new TYPELIB encoding is needed for it.

Details:

- unireg.cc
  The function pack_header() incorrectly used strlen() to detect a TYPELIB value length. Adding a new function typelib_values_packed_length(), which uses TYPELIB::type_lengths[n] to detect the n-th value length, and reusing the new function in pack_header() and packed_fields_length().

- table.cc
  fix_type_pointers() assumed in multiple places that values cannot have 0x00 inside and used strlen(TYPELIB::type_names[n]) to set the corresponding TYPELIB::type_lengths[n]. Also, fix_type_pointers() did not check the encoded data for consistency.
  Rewriting fix_type_pointers() to populate TYPELIB::type_names[n] and TYPELIB::type_lengths[n] at the same time, so no additional loop with strlen() is needed any more. Adding many data consistency tests.
  Fixing the main loop in fix_type_pointers() to use memchr() instead of strchr() to handle 0x00 properly.
  Fixing create_key_infos() to return the result in a LEX_STRING rather than in a char*.
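
The core of the fix is to stop treating TYPELIB values as NUL-terminated strings. A standalone sketch of the difference follows (this is not the actual pack_header()/fix_type_pointers() code; it only demonstrates why explicit lengths and memchr() are needed):

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    int main()
    {
      // An ENUM value such as 0x610062 ('a', 0x00, 'b'): three bytes, one of
      // which is a 0x00 byte in the middle of the value.
      const char value[]= {'a', '\0', 'b'};
      const size_t value_length= sizeof value;   // 3, stored separately,
                                                 // like TYPELIB::type_lengths[n]

      // Wrong: strlen() stops at the embedded 0x00 and reports length 1.
      std::printf("strlen() sees %zu byte(s)\n", std::strlen(value));

      // Wrong: strchr() can never find a byte located after the embedded 0x00.
      std::printf("strchr() finds 'b': %s\n",
                  std::strchr(value, 'b') ? "yes" : "no");

      // Right: carry (pointer, length) pairs and use memchr().
      std::printf("stored length is %zu byte(s)\n", value_length);
      std::printf("memchr() finds 'b': %s\n",
                  std::memchr(value, 'b', value_length) ? "yes" : "no");
      return 0;
    }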
-
- 20 May, 2020 16 commits
-
-
Sujatha authored
MDEV-22451: SIGSEGV in __memmove_avx_unaligned_erms/memcpy from _my_b_write on CREATE after RESET MASTER Merge branch '10.2' into 10.3
-
Sujatha authored
MDEV-22451: SIGSEGV in __memmove_avx_unaligned_erms/memcpy from _my_b_write on CREATE after RESET MASTER Merge branch '10.1' into 10.2
-
Marko Mäkelä authored
-
Marko Mäkelä authored
ins_node_create_entry_list(): Create dummy empty tuples for corrupted or incomplete indexes, to avoid dereferencing a NULL dict_field_t::col pointer in row_build_index_entry_low(). This issue was found by a crash in the test gcol.innodb_virtual_basic when merging the fix to 10.5.
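
A stripped-down sketch of the defensive pattern described here, with illustrative structs rather than InnoDB's ins_node_create_entry_list()/row_build_index_entry_low(): when an index's field metadata is incomplete, a dummy empty entry is built instead of dereferencing a null column pointer.

    #include <vector>

    struct Column { int position; };
    struct Field  { const Column* col; };

    struct Index {
      std::vector<Field> fields;
      bool is_complete() const {
        for (const Field& f : fields)
          if (!f.col)
            return false;          // incomplete: a field lost its column
        return true;
      }
    };

    // Build the entry for one index; an empty vector stands in for a dummy tuple.
    static std::vector<int> build_index_entry(const Index& index)
    {
      std::vector<int> entry;
      if (!index.is_complete())
        return entry;              // dummy empty entry, nothing dereferenced
      for (const Field& f : index.fields)
        entry.push_back(f.col->position);
      return entry;
    }

    int main()
    {
      Column c0{0};
      Index healthy{{{&c0}}};
      Index corrupted{{{nullptr}}};   // e.g. an index left over from aborted creation

      return static_cast<int>(build_index_entry(healthy).size()
                              + build_index_entry(corrupted).size());   // 1 + 0
    }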
-
Sujatha authored
MDEV-22451: SIGSEGV in __memmove_avx_unaligned_erms/memcpy from _my_b_write on CREATE after RESET MASTER

Analysis:
========
RESET MASTER TO # deletes all binary log files listed in the index file, resets the binary log index file to be empty, and creates a new binary log with number #. When the user-provided binary log number is greater than the maximum allowed value '2147483647', the server fails to generate a new binary log. The RESET MASTER statement marks the binlog closure status as 'LOG_CLOSE_TO_BE_OPENED' and exits. Statements that follow RESET MASTER then try to write to the binary log: they find log_state != LOG_CLOSED, proceed to write to the binary log cache, and this results in a crash.

Fix:
===
During MYSQL_BIN_LOG open, if generation of the new binary log name fails, mark "log_state" as "LOG_CLOSED". With this, further statements will find the binary log closed and will skip writing to it.
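
A small state-machine sketch of the fix; BinLog, the enum names and the flow below are illustrative stand-ins, not the actual MYSQL_BIN_LOG code. The point is that a failed open must leave the state at "closed" so that later writers skip the log instead of writing into a half-opened one.

    #include <cstdio>

    enum log_state_t { LOG_OPENED, LOG_CLOSED, LOG_TO_BE_OPENED };

    struct BinLog {
      log_state_t log_state= LOG_CLOSED;
      unsigned long next_number= 0;

      // RESET MASTER TO # leaves the log "to be opened" with the requested number.
      void reset(unsigned long number) {
        log_state= LOG_TO_BE_OPENED;
        next_number= number;
      }

      bool open() {
        // Generating the new log name fails for numbers above the maximum.
        if (next_number > 2147483647UL) {
          log_state= LOG_CLOSED;        // the fix: do not stay "to be opened"
          return false;
        }
        log_state= LOG_OPENED;
        return true;
      }

      void write_event(const char* what) {
        if (log_state == LOG_CLOSED ||
            (log_state == LOG_TO_BE_OPENED && !open())) {
          std::printf("binlog closed, skipping: %s\n", what);
          return;
        }
        std::printf("written to binlog: %s\n", what);
      }
    };

    int main() {
      BinLog log;
      log.reset(2147483648UL);          // RESET MASTER TO 2147483648
      log.write_event("CREATE TABLE t1 (a INT)");
      return 0;
    }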
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Thirunarayanan Balathandayuthapani authored
prepare_inplace_alter_table_dict(): In the error handling, relax a debug assertion for the case that we did not execute dict_stats_wait_bg_to_stop_using_table() yet.
-
Alexander Barkov authored
Problem:

When handling a query like this:
  VALUES ('') UNION SELECT _utf16 0x0020 COLLATE utf16_bin;
Type_handler_string_result::Item_hybrid_func_fix_attributes() tried to apply a character set conversion to Item_type_holder, which causes a crash on DBUG_ASSERT(0) inside Item_type_holder::val_str().

Fix:

Overriding Item_type_holder's methods to avoid this, as follows:
  bool const_item() const { return false; }
  bool is_expensive() { return true; }
-
Marko Mäkelä authored
For no good reason, innodb_encryption_threads was limited to 4,294,967,295. Expectedly, the server would crash if such an insane value was specified. Let us limit the maximum to 255. The encryption threads are not doing much useful work. They are basically only dirtying pages by performing dummy writes via the redo log. The encryption key rotation or the in-place addition or removal of encryption will take place in the page cleaner. In a quick test on a 20-core CPU (40 threads in total), the sweet spot on an otherwise idle server seemed to be innodb_encryption_threads=16 for the test encryption.encrypt_and_grep. The new limit 255 should be more than enough for even bigger servers.
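
For illustration only, the behavioural change amounts to giving the tunable a sane upper bound; the real limit lives in the InnoDB system variable declaration, not in code like the following clamp.

    #include <algorithm>
    #include <cstdio>

    static unsigned set_encryption_threads(unsigned long requested)
    {
      const unsigned long max_threads= 255;   // new upper bound
      unsigned long value= std::min(requested, max_threads);
      if (value != requested)
        std::printf("clamping requested value %lu to %lu\n", requested, value);
      return static_cast<unsigned>(value);
    }

    int main()
    {
      std::printf("threads=%u\n", set_encryption_threads(16));           // test sweet spot
      std::printf("threads=%u\n", set_encryption_threads(4294967295UL)); // old absurd maximum
      return 0;
    }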
-
Yury Kurlykov authored
The fts_indexes field in fts_update_t is never used, so fts_update_t is replaced with doc_id_t in the code.
-
Marko Mäkelä authored
-
Jan Lindström authored
MDEV-18838 : galera.galera_toi_truncate: Test failure: mysqltest: query 'reap' succeeded - should have failed with errno 1213

Test cleanup.
-
- 19 May, 2020 12 commits
-
-
Andrei Elkin authored
This is a new test from upstream that did not expect the correct value of the command slot of the Dump thread when the latter gets killed. The test is changed to expect the "Killed" string as the command in SHOW PROCESSLIST, as it is supposed to be when a thread gets killed.
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_index_remove_from_cache_low(): Add a missing #ifdef around dict_index_t::freed().
-
Marko Mäkelä authored
This is a regression due to MDEV-16376 commit 8dc70c86.

To make dict_index_t::detach_columns() idempotent, we cleared dict_index_t::n_fields. But this could cause trouble with purge after a secondary index creation failed (not even involving virtual columns).

A better way is to clear the dict_field_t::col pointers that point to virtual columns that are being freed due to aborting index creation on an index that depends on a virtual column.

Note: the v_cols[] of an existing dict_table_t object will never be modified. If any virtual columns are added or removed, ha_innobase::commit_inplace_alter_table() would invoke dict_table_remove_from_cache() and reload the table to dict_sys. Index creation is a special case where the dict_index_t points to virtual columns that do not yet exist in dict_table_t.
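
An abstract sketch of the contrast between the two approaches, using simplified stand-ins for dict_index_t and dict_field_t (this is not InnoDB code): instead of zeroing the field count, which hides the fields from everyone including purge, only the pointers into the virtual columns that are about to be freed are cleared.

    #include <cstddef>
    #include <vector>

    struct Column { bool is_virtual= false; };

    struct Field { const Column* col= nullptr; };

    struct Index {
      std::vector<Field> fields;

      // Old approach (problematic): fields.clear() made detach idempotent but
      // removed the fields for every user of the index.
      // New approach: only clear pointers to virtual columns being freed when
      // index creation is aborted; the fields themselves stay visible.
      void detach_virtual_columns()
      {
        for (Field& f : fields)
          if (f.col && f.col->is_virtual)
            f.col= nullptr;
      }
    };

    int main()
    {
      Column regular, virt;
      virt.is_virtual= true;

      Index index;
      index.fields.push_back(Field{&regular});
      index.fields.push_back(Field{&virt});

      index.detach_virtual_columns();   // the virtual column is being freed

      std::size_t usable= 0;
      for (const Field& f : index.fields)
        if (f.col)                      // callers must now check for nullptr
          ++usable;
      return static_cast<int>(index.fields.size() - usable);   // one detached field
    }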
-
Marko Mäkelä authored
This is a regression due to the cleanup commit 12f804ac.

row_sel_open_pcur(): Remove the unnecessary parameter. It suffices for us to acquire the adaptive hash index latch only when btr_search_guess_on_hash() is called by btr_cur_search_to_nth_level_func(), in btr_pcur_open_with_no_init().

This code seems to be a relic from the times when there was only one btr_search_latch, which was held in shared mode for longer periods of time. Another relic of that era was removed in commit e5980bf1. This clean-up was missed when the btr_search_latch was split in mysql/mysql-server/commit@ab17ab91ce18a47bb6c5c49e4dc0505ad488a448 (MySQL 5.7.8).
-
Rasmus Johansson authored
-
Monty authored
-
Alexander Barkov authored
Removing a wrong DBUG_ASSERT: when Item_param gets "unfixed" in cleanup(), its "fixed" flag is assigned to false, while item_type keeps its value. So the assert was wrong. Perhaps, instead of removing the assert, it would have been possible to reset item_type to NO_VALUE in cleanup(). But this is not very important: it is implemented in 10.4 in a better way: Item_param::is_fixed() always returns true and it does not need to be "unfixed".
-
Vlad Lesin authored
error is logged

The fix is to set a flag in ib::error::~error() and check it in mariabackup.

ib::error::error() is replaced with ib::warn::warn() in AIO::linux_create_io_ctx() for two reasons:
1) if we leave it as is, then mariabackup MTR tests will fail with the --mem option, because Linux AIO can not be used on tmpfs;
2) when Linux AIO can not be initialized, InnoDB falls back to simulated AIO, so such a situation is not a fatal error and should be treated as a warning.
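
A compact sketch of the destructor-sets-a-flag pattern this fix relies on; the classes below are illustrative stand-ins, not the actual ib::error/ib::warn or mariabackup code.

    #include <atomic>
    #include <iostream>
    #include <sstream>

    static std::atomic<bool> error_was_logged{false};

    namespace ib {

    class error {
    public:
      template <typename T> error& operator<<(const T& value) {
        m_msg << value;
        return *this;
      }
      ~error() {                         // the fix: remember an error was logged
        error_was_logged.store(true);
        std::cerr << "[ERROR] " << m_msg.str() << '\n';
      }
    private:
      std::ostringstream m_msg;
    };

    class warn {                         // non-fatal conditions are only warnings
    public:
      template <typename T> warn& operator<<(const T& value) {
        m_msg << value;
        return *this;
      }
      ~warn() { std::cerr << "[Warning] " << m_msg.str() << '\n'; }
    private:
      std::ostringstream m_msg;
    };

    } // namespace ib

    int main() {
      // Falling back to simulated AIO is not fatal, so it is reported as a
      // warning and does not make the backup fail.
      ib::warn() << "Linux AIO unavailable, falling back to simulated AIO";

      // Any real error would flip the flag via ~error():
      // ib::error() << "redo log corruption detected";

      return error_was_logged.load() ? 1 : 0;   // backup-style exit decision
    }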
-