- 16 May, 2018 11 commits
-
-
Thirunarayanan Balathandayuthapani authored
Imported the following test cases from MySQL to MariaDB:
1) innodb.alter_kill
2) innodb.alter_foreign_crash
3) innodb.alter_rename_files
4) innodb.analyze_table
5) Appended the case to innodb-online-alter-gis
-
Shaohua Wang authored
Problem: We keep pinning pages in dict_stats_analyze_index_below_cur() but don't release them. With a relatively small buffer pool and a big innodb_stats_persistent_sample_pages, there will be no free pages left for use. Solution: Use a separate mtr in dict_stats_analyze_index_below_cur(), and commit it before returning. Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com> RB: 11362
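A minimal sketch of the pinning pattern behind this fix, using toy stand-ins for buf_block_t and mtr_t (the real InnoDB types and calls differ): pages latched through a mini-transaction stay pinned in the buffer pool until that mtr commits, so the dive below the cursor gets its own mtr that is committed before returning.

```cpp
#include <vector>

// Toy stand-ins for buf_block_t and mtr_t, only to show the latching pattern.
struct Block { int page_no; };

struct Mtr {                        // models a mini-transaction
  std::vector<Block*> pinned;       // pages stay pinned until commit
  Block* get_page(int page_no) {
    Block* b = new Block{page_no};  // buf_page_get() pins the block
    pinned.push_back(b);
    return b;
  }
  void commit() {                   // mtr_commit() releases all pins
    for (Block* b : pinned) delete b;
    pinned.clear();
  }
};

// Before the fix, the dive reused the caller's mtr, so every sampled leaf
// page stayed pinned until the caller committed; with a small buffer pool
// and a big innodb_stats_persistent_sample_pages the free list ran dry.
long analyze_below_cur(int leaf_page_no)
{
  Mtr mtr;                          // separate mtr, local to this function
  Block* block = mtr.get_page(leaf_page_no);
  long n_recs = block->page_no;     // stand-in for the record counting logic
  mtr.commit();                     // release the pins before returning
  return n_recs;
}
```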
-
Thirunarayanan Balathandayuthapani authored
-
Marko Mäkelä authored
In debug builds of MySQL, there is a configuration variable that allows an InnoDB log checkpoint to be initiated: SET GLOBAL innodb_log_checkpoint_now=ON; Setting this variable while a table-rebuilding ALTER TABLE is executing may result in an infinite loop. checkpoint_now_set(): Account for log_sys->append_on_checkpoint->size(). Note that this function contains race conditions, because it accesses fields of log_sys without holding log_sys->mutex. We think that this is acceptable, because this variable only exists for debugging purposes, in debug builds of MySQL. RB: 9947 Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
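A self-contained toy model of why the loop never terminated, with made-up numbers (the real fields live in log_sys and the real loop is in checkpoint_now_set()): each checkpoint re-appends the pending records and advances the LSN, so the wait condition must account for their size.

```cpp
#include <cstddef>
#include <cstdint>

typedef uint64_t lsn_t;

// Toy model: each checkpoint re-appends the pending MLOG_FILE_RENAME2 bytes,
// so a loop that waits for "checkpoint LSN catches current LSN" never
// terminates while an ALTER TABLE keeps append_on_checkpoint non-empty.
struct Log {
  lsn_t lsn = 1000;
  lsn_t last_checkpoint_lsn = 0;
  size_t append_on_checkpoint_size = 64;   // re-emitted at every checkpoint

  void make_checkpoint() {
    last_checkpoint_lsn = lsn;
    lsn += append_on_checkpoint_size;      // re-emitted records advance lsn
  }
};

int main() {
  Log log;
  // The fixed condition accounts for the bytes that each checkpoint itself
  // appends; without the added term the loop would spin forever.
  while (log.last_checkpoint_lsn + log.append_on_checkpoint_size < log.lsn)
    log.make_checkpoint();
}
```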
-
Thirunarayanan Balathandayuthapani authored
to innodb.innodb-online-alter-gis
-
Annamalai Gurusami authored
Problem: The function row_build_index_entry_low() takes a dtuple_t object ('row') and dict_index_t object ('index') as input and returns a new dtuple_t object ('entry') as output. The dtuple_t object 'row' that is given as input might have been constructed from a different dict_index_t object (!= index). So when accessing the externally stored data of the given 'row' we need to make use of the correct index object. Solution: Store the page size information in the associated row_ext_t object. rb#6086 approved by Vasil and Jimmy.
-
Thirunarayanan Balathandayuthapani authored
-
Annamalai Gurusami authored
Problem: Suppose there are two tables in the database related through a foreign key constraint - the parent table and the child table. Suppose there is an ALTER TABLE on the child table which gets interrupted in such a way that the child table is not available any more. After crash recovery, an ALTER TABLE on the parent table identifies that its foreign keys cannot be loaded. This results in an error and a debug assert. Solution: Remove the debug assert and change error to a warning. rb#8658 approved by Marko.
-
Thirunarayanan Balathandayuthapani authored
Newly added test case: alter_kill in the innodb suite.
-
Marko Mäkelä authored
The server crashes on a SELECT because of a space id mismatch. The mismatch happens if the server crashes during an ALTER TABLE. There are actually two cases of inconsistency, and three fixes needed for the InnoDB problems.

We have dictionary data (tablespace or table name) in 3 places:
(a) The *.frm file is for the old table definition.
(b) The InnoDB data dictionary is for the new table definition.
(c) The file system did not rename the tablespace files yet.

In this fix, we will not care whether the *.frm file is in sync with the InnoDB data dictionary and the file system. We will concentrate on the mismatch between (b) and (c).

Two scenarios have been mentioned in this bug report. The simpler one first:
1. The changes to SYS_TABLES were committed, and MLOG_FILE_RENAME2 records were written in a single mini-transaction commit. The files were not yet renamed in the file system.
2a. The server is killed, without making a log checkpoint.
3a. The server refuses to start up, because replaying MLOG_FILE_RENAME2 fails.

I failed to repeat this myself. I repeated step 3a with a saved dataset. The problem seems to be that MLOG_FILE_RENAME2 replay is incorrectly being skipped when there is no page-redo log or MLOG_FILE_NAME record for the old name of the tablespace.

FIX#1: Recover the id-to-name mapping also from MLOG_FILE_RENAME2 records when scanning the redo log. It is not necessary to write MLOG_FILE_NAME records in addition to MLOG_FILE_RENAME2 records for renaming tablespace files.

The scenario in the original Description involves a log checkpoint:
1. The changes to SYS_TABLES were committed, and MLOG_FILE_RENAME2 records were written in a single mini-transaction commit.
2. A log checkpoint and a server kill were injected.
3. Crash recovery will see no records (other than the MLOG_CHECKPOINT).
4. dict_check_tablespaces_and_store_max_id() will emit a message about a non-found table #sql-ib22*.
5. A mismatch triggers the assertion failure.

In my test, at step 4 the SYS_TABLES root page (0:8) contains these 3 records right before the page supremum:
* a delete-marked (committed) name=#sql-ib21* record, with space=10
* name=#sql-ib22*, space=9
* name=t1, space=10

space=10 is the rebuilt table (#sql-ib21*.ibd in the file system). space=9 is the old table (t1.ibd in the file system). The function dict_check_tablespaces_and_store_max_id() will enter t1.ibd with space_id=10 into the fil_system cache without noticing that t1.ibd contains space_id=9, because it invokes fil_open_single_table_tablespace() with validate=false.

In MySQL 5.6, the space_id of all *.ibd files is read when the redo log checkpoint LSN disagrees with the FIL_PAGE_FILE_FLUSH_LSN in the system tablespace. This field is only updated during a clean shutdown, after performing the final log checkpoint.

FIX#2: dict_check_tablespaces_and_store_max_id() should pass validate=true to fil_open_single_table_tablespace() when a non-clean shutdown is detected, forcing the first page of each *.ibd file to be read. (We do not want to slow down startup after a normal shutdown.)

With FIX#2, the SELECT would fail to find the table. This would introduce a regression, because before WL#7142, a copy of the table was accessible after recovery.

FIX#3: Maintain a list of MLOG_FILE_RENAME2 records that have been written to the redo log, but not yet performed in the file system. When performing a checkpoint, re-emit these records to the redo log. In this way, a mismatch between (b) and (c) should be impossible.
fil_name_process(): Refactored from fil_name_parse(). Adds an item to the id-to-filename mapping.
fil_name_parse(): Parses and applies a MLOG_FILE_NAME, MLOG_FILE_DELETE or MLOG_FILE_RENAME2 record. This implements FIX#1.
fil_name_write_rename(): A wrapper function for writing MLOG_FILE_RENAME2 records.
fil_op_replay_rename(): Apply MLOG_FILE_RENAME2 records. Replaces fil_op_log_parse_or_replay(), whose logic was moved to fil_name_parse().
fil_tablespace_exists_in_mem(): Return fil_space_t* instead of bool.
dict_check_tablespaces_and_store_max_id(): Add the parameter "validate" to implement FIX#2.
log_sys->append_on_checkpoint: Extra log records to append in case of a checkpoint. Needed for FIX#3.
log_append_on_checkpoint(): New function, to update log_sys->append_on_checkpoint.
mtr_write_log(): New function, to append mtr_buf_t to the redo log.
fil_names_clear(): Append the data from log_sys->append_on_checkpoint if needed.
ha_innobase::commit_inplace_alter_table(): Add any MLOG_FILE_RENAME2 records to log_sys->append_on_checkpoint, and remove them once the files have been renamed in the file system.
mtr_buf_copy_t: A helper functor for copying a mini-transaction log.
rb#6282 approved by Jimmy Yang
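A toy model of FIX#3, with std::string records standing in for mtr_buf_t contents; the real mechanism is log_append_on_checkpoint() plus the re-emission in fil_names_clear(), and the container here is an illustrative simplification.

```cpp
#include <string>
#include <vector>

// Toy model of FIX#3: rename records already written to the redo log but not
// yet applied to the file system are kept aside and re-emitted at every
// checkpoint, so recovery always sees them even if the checkpoint would
// otherwise let the original records be discarded.
struct RedoLog {
  std::vector<std::string> records;
  std::vector<std::string> append_on_checkpoint;  // pending RENAME2 records

  void write(const std::string& rec) { records.push_back(rec); }

  void checkpoint() {
    records.clear();                   // pre-checkpoint records may go away
    for (const std::string& r : append_on_checkpoint)
      write(r);                        // re-emit, as fil_names_clear() does
  }
};

int main() {
  RedoLog log;
  std::string rename = "MLOG_FILE_RENAME2 #sql-ib21 -> t1";
  log.write(rename);
  log.append_on_checkpoint.push_back(rename);  // commit_inplace_alter_table()
  log.checkpoint();                            // the rename survives
  log.append_on_checkpoint.clear();            // after the file rename succeeds
}
```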
-
Varun Gupta authored
In this issue we hit the assert because we are adding additional fields to the JOIN::all_fields list. This is done because HEAP tables can't index BIT fields, so we need an additional hidden field for grouping; it will later be converted to a LONG field, while the original field remains of the BIT type and is what gets returned. This happens when we convert DISTINCT to GROUP BY. The solution is to take into account the number of such hidden fields added to the JOIN::all_fields list when calculating the size of the ref_pointer_array.
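A sketch of the sizing change; the counter names are illustrative stand-ins, not the exact fields used in the server.

```cpp
#include <cstddef>

// Illustrative only: the array must leave room for the hidden fields added
// when DISTINCT over BIT columns is converted to GROUP BY. Before the fix
// the last term was effectively missing, so the extra hidden fields
// overflowed the array and tripped the assertion.
size_t ref_pointer_array_size(size_t n_select_items, size_t n_having_items,
                              size_t n_order_group, size_t n_hidden_bit_fields)
{
  return n_select_items + n_having_items + n_order_group + n_hidden_bit_fields;
}
```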
-
- 15 May, 2018 12 commits
-
-
Elena Stepanova authored
-
Monty authored
The problem was that verify_constraints() didn't check whether an error occurred while evaluating the constraints (which can happen in strict mode). In a one-row insert the error was ignored when binary logging was enabled, because binary logging clears errors if the insert succeeded. In a multi-row insert the error was only noticed for the second row. After this fix one gets an error for both one-row and multi-row inserts if a constraint generates a warning in strict mode.
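A toy model of the corrected check (the real code is in the server's constraint verification path and consults THD error state; the types here are stand-ins): the boolean result of a CHECK expression is not enough, because evaluating it can itself raise an error in strict mode.

```cpp
#include <vector>

// Toy model: evaluating a constraint can fail (e.g. a conversion error in
// strict mode). The old code only looked at the boolean result, so a pending
// error could be lost for one-row inserts under binary logging.
struct Thd { bool is_error = false; };
struct Constraint { bool (*eval)(Thd&); };

bool verify_constraints(Thd& thd, const std::vector<Constraint>& checks)
{
  for (const Constraint& c : checks) {
    bool ok = c.eval(thd);
    if (!ok || thd.is_error)  // the "|| thd.is_error" check is the fix
      return false;
  }
  return true;
}
```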
-
Sergei Golubchik authored
follow-up for 21bcfeb9
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Part one, non-temporary tables. Renaming a column can make destructive changes to the TABLE. This TABLE cannot be used anymore and needs to be reopened even if ALTER TABLE was aborted with an error.
-
Sergei Golubchik authored
MDEV-14750 Valgrind Invalid read, ASAN heap-use-after-free in Item_ident::print upon SHOW CREATE on partitioned table

Items in the partitioning function were taking the table name from the table's field (in set_field(from_field) in Item_field::fix_fields), and a field's table_name is TABLE::alias. But the alias is changed for every statement and can be reallocated if the next statement uses a longer alias, while partitioning items are fixed once and live as long as the TABLE does. So if the alias is reallocated, pointers to the old alias string become invalid. Fix the partitioning items' table_name to point to the actual table name instead.
-
Sergei Golubchik authored
rename to post_fix_fields_part_expr_processor() because it's only used after fix_fields in fix_fields_part_func() and can be used for various post-fix_fields fixups
-
Sergei Golubchik authored
set the pointer to NULL to avoid double-free when the item is cleaned up many times (once in JOIN_TAB::cleanup(): tmp->jtbm_subselect->cleanup() and once at the end of the query, with all other items)
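The pattern, as a self-contained toy (the struct and buffer are illustrative, not the actual Item code): once the owned resource is freed, the pointer is reset so a second cleanup pass is a harmless no-op.

```cpp
#include <cstdlib>

// Toy illustration: cleanup() can run more than once (once via
// JOIN_TAB::cleanup() -> tmp->jtbm_subselect->cleanup(), once at the end of
// the query with all other items), so the freed pointer must be reset.
struct ItemLike {
  char* buf = static_cast<char*>(std::malloc(16));
  void cleanup() {
    std::free(buf);
    buf = nullptr;     // the fix: the second cleanup() frees NULL, a no-op
  }
};

int main() {
  ItemLike item;
  item.cleanup();      // first cleanup, during JOIN_TAB::cleanup()
  item.cleanup();      // second cleanup at end of query; safe after the fix
}
```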
-
Sergei Golubchik authored
in `ulonglong=ulong*uint` the multiplication is done in ulong, wrapping around on 32-bit. This became visible after C/C changed the default charset to utf8, thus changing mbmaxlen from 1 to 3.
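A self-contained illustration of the wrap-around (the variable names are made up): on a platform where unsigned long is 32 bits, the product is computed in 32 bits before being widened, unless one operand is cast first.

```cpp
#include <cstdio>

int main() {
  unsigned long max_length = 2000000000UL;  // 32 bits on a 32-bit platform
  unsigned int mbmaxlen = 3;                // utf8 raised this from 1 to 3
  // Buggy: the multiplication happens in unsigned long, so on 32-bit it
  // wraps before the result is widened to unsigned long long.
  unsigned long long buggy = max_length * mbmaxlen;
  // Fixed: widen one operand first so the product is computed in 64 bits.
  unsigned long long fixed =
      static_cast<unsigned long long>(max_length) * mbmaxlen;
  // On a typical 64-bit platform unsigned long is 64 bits and both agree;
  // the bug only shows where unsigned long is 32 bits.
  std::printf("buggy=%llu fixed=%llu\n", buggy, fixed);
}
```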
-
Sergei Golubchik authored
-
Sergei Petrunia authored
RocksDB now supports "iterator bounds", which are min and max keys that an iterator is interested in. The iterator initialization function doesn't copy the keys, though; it keeps pointers to them. So if the buffer space for the keys is reused for another iterator (the one checking for UNIQUE constraint violation in ha_rocksdb::ha_update_row), one can get incorrect query results. Fixed by using a separate buffer for iterator bounds in the unique constraint violation check.
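A sketch of the mechanics, assuming the rocksdb::ReadOptions bound fields (iterate_lower_bound / iterate_upper_bound) available in RocksDB releases of that era; the key point is that RocksDB stores pointers to the bound Slices, so the caller-owned buffers behind them must stay untouched for the iterator's lifetime.

```cpp
#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/slice.h>
#include <string>

// The bounds are not copied by RocksDB: ReadOptions holds Slice pointers,
// and each Slice points into a caller-owned buffer. If two live iterators
// share one bounds buffer, refilling it for the second iterator silently
// changes the first iterator's bounds -- the bug described above. The fix
// is a dedicated buffer (and Slice) per iterator.
struct IterBounds {
  std::string lower_buf, upper_buf;      // must outlive the iterator
  rocksdb::Slice lower, upper;           // so must the Slices themselves
};

rocksdb::Iterator* make_bounded_iter(rocksdb::DB* db, IterBounds& b,
                                     const std::string& lo,
                                     const std::string& hi)
{
  b.lower_buf = lo;
  b.upper_buf = hi;
  b.lower = rocksdb::Slice(b.lower_buf);
  b.upper = rocksdb::Slice(b.upper_buf);

  rocksdb::ReadOptions ro;
  ro.iterate_lower_bound = &b.lower;     // pointer, not a copy
  ro.iterate_upper_bound = &b.upper;     // pointer, not a copy
  return db->NewIterator(ro);
}
```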
-
Marko Mäkelä authored
-
- 14 May, 2018 9 commits
-
-
Jacob Mathew authored
This problem occurred because the reorganization of the list of values when the number of elements exceeds 32 was not handled correctly. I have fixed the way the list values are reorganized when their number exceeds 32. Author: Jacob Mathew. Reviewer: Alexey Botchkov. Merged From: Commit 8e015986 on branch bb-10.3-MDEV-16101
-
Sergei Petrunia authored
-
Marko Mäkelä authored
This fix was initially missed, because there were multiple commits with an empty message.
-
Marko Mäkelä authored
Merge a test case and a code change from MySQL 5.7.22. There was no commit message, but a test case was included. https://github.com/mysql/mysql-server/commit/d3ec326bcd8352fcd969dbfd54e5c604db9cd30b There is no Bug 25899959 mentioned in the MySQL 8.0.11 history. Based on the number, it should have been filed before August 2017. Maybe it was initially fixed in a not-yet-public MySQL 9.0 branch? The code change differs from MySQL 5.7, because the mbminmaxlen were split in MariaDB in MDEV-7049.
-
Sergei Petrunia authored
MariaDB has a scaled-down version of the test, so we need to set @@rocksdb_max_row_locks lower to trigger the desired error. (This wasn't caught on the buildbot test run because the test is marked as "big".)
-
Sergei Petrunia authored
The bloom filter is only used when reading data from disk. If the data happens to still be in the memtable, the bloom filter won't be used. Stabilize the test case by making sure the data is on disk before we read it.
-
Marko Mäkelä authored
trx_purge_stop(): Release purge_sys->latch before attempting to wake up the purge threads, so that they can actually wake up. This is a regression of commit a13a636c which attempted to fix MDEV-11802 by ensuring that srv_purge_wakeup() will actually wait until all purge threads wake up. Due to the purge_sys->latch, the threads might never wake up, because some purge threads could end up waiting for purge_sys->latch in trx_undo_prev_version_build() while holding dict_operation_lock in shared mode. This in turn would block any DDL operations, the InnoDB master thread, and InnoDB shutdown.
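The locking shape, reduced to a self-contained toy with std::mutex and std::condition_variable (the real latch is a read-write lock and the real wakeup is srv_purge_wakeup()): the stopping side must not hold the latch the sleeping threads need while trying to wake them.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Toy reduction of the fix: purge threads need the latch to wake up, so the
// stopping thread must release it before signalling, mirroring
// trx_purge_stop() releasing purge_sys->latch before the wakeup.
std::mutex latch;                   // stands in for purge_sys->latch
std::condition_variable cv;
bool stop_requested = false;

void purge_thread() {
  std::unique_lock<std::mutex> lk(latch);  // waking requires the latch
  cv.wait(lk, [] { return stop_requested; });
}

void stop_purge() {
  {
    std::lock_guard<std::mutex> lk(latch);
    stop_requested = true;
  }                                 // release the latch first...
  cv.notify_all();                  // ...then wake the purge threads
}

int main() {
  std::thread t(purge_thread);
  stop_purge();
  t.join();
}
```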
-
Sergei Petrunia authored
Down-scale the test by a factor of 2.
-
Sergei Petrunia authored
Backport the fix from upstream and add our test case. Backported cset:

commit 997a979bf5e2f75ab88781d9d3fd22dddc1fc21f
Author: Manuel Ung <mung@fb.com>
Date: Thu Feb 15 08:38:12 2018 -0800

Fix crashes in autoincrement code paths

Summary: There are two issues related to autoincrement that can lead to crashes:

1. The max value for double/float type for autoincrement was not implemented in MyRocks, and can lead to assertions. The fix is to add them in.

2. If we try to set auto_increment via alter table on a table without an auto_increment column defined, we segfault because there is no index from which to read the last value. The fix is to perform a check to see if autoincrement exists before reading from the index (similar to the code in ha_rocksdb::open).

Fixes https://github.com/facebook/mysql-5.6/issues/792
Closes https://github.com/facebook/mysql-5.6/pull/794

Differential Revision: D6995096
Pulled By: lth
fbshipit-source-id: 1130ce1
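A toy sketch of issue 2 from the backported patch; the field name mirrors the server's TABLE::found_next_number_field, but the struct is a stand-in, not the real handler code.

```cpp
#include <algorithm>
#include <cstdint>

// Stand-in for the relevant bit of table metadata; in the server,
// TABLE::found_next_number_field is NULL when the table has no
// AUTO_INCREMENT column.
struct TableInfo {
  const void* found_next_number_field = nullptr;
  uint64_t auto_inc_value = 0;
};

// ALTER TABLE ... AUTO_INCREMENT=N on a table without an auto-increment
// column used to read the "last value" from a nonexistent index and
// segfault; the added existence check makes it a no-op instead.
void set_auto_increment(TableInfo* tbl, uint64_t requested)
{
  if (tbl->found_next_number_field == nullptr)
    return;  // no AUTO_INCREMENT column: nothing to read from the index
  tbl->auto_inc_value = std::max(tbl->auto_inc_value, requested);
}
```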
-
- 12 May, 2018 4 commits
-
-
Galina Shalygina authored
failure upon SELECT with impossible condition

The problem appears because of a wrong implementation of the Item_func_in::build_clone() method: it didn't clone the 'array' and 'cmp_fields' fields for the cloned IN predicate, and this could cause crashes. The Item_func_in::fix_length_and_dec() method was refactored, and a new method named Item_func_in::create_array() was created; it allows 'array' to be created properly for cloned IN predicates.
-
Galina Shalygina authored
work in the IN subqueries

The pushdown into the materialized derived table/view wasn't done because optimize() for the derived table was called before any conditions that could be pushed down were extracted. So the optimize() call in the convert_join_subqueries_to_semijoins() method comes too early and is unnecessary; the second optimize() call, in mysql_handle_single_derived(), is enough.
-
Marko Mäkelä authored
In InnoDB, CREATE TEMPORARY TABLE does not allow FULLTEXT INDEX. Replace a condition with a debug assertion, and add a test.
-
Marko Mäkelä authored
-
- 11 May, 2018 4 commits
-
-
Marko Mäkelä authored
-
Sachin Agarwal authored
Problem: The fix for Bug #21348684 (#Rb9581) introduced a conditional debug execute 'buf_pool_resize_chunk_null', which causes the new chunk memory for the 2nd buffer pool instance to be freed. The buffer pool resize function removes all old chunk entries from 'buf_chunk_map_reg' and adds the new chunk entries into it. But when 'buf_pool_resize_chunk_null' is set to true, the 2nd buffer pool instance's chunk entries are not added to 'buf_chunk_map_reg'. When the purge thread tries to access such a buffer chunk, it hits a debug assertion. Fix: Add the old chunk entries into 'buf_chunk_map_reg' for the 2nd buffer pool instance when the 'buf_pool_resize_chunk_null' debug condition is set to true. Reviewed by: Jimmy <Jimmy.Yang@oracle.com> RB: 18664
-
Aakanksha Verma authored
PROBLEM
The issue found during a test run is a regression of Bug #27141613. When an index is being freed due to an error during its creation and the index hasn't been added to the dictionary cache, its field columns are not set; dereferencing the null column pointer while removing the index from the virtual column's index list leads to a crash. NOTE: The test i_innodb.virtual_debug was also failing on 32k page size and above for the newly added scenario. Fixed that.
FIX
Added a check so that the freeing of the virtual index from the virtual columns' index list is performed only if the index is cached.
Reviewed by: Satya Bodapati <satya.bodapati@oracle.com> RB: 18670
-
Aakanksha Verma authored
PROBLEM
=======
When adding a virtual index fails with DB_TOO_BIG_RECORD, the virtual index being freed isn't removed from the list of indexes of the virtual columns that are part of the index. When the undo log is read, this could fetch a wrong value during rollback and cause the assertion reported in the bug.
FIX
===
Added a function, called when a virtual index is being freed, that removes the index from the index list of every virtual column that was a field of that index.
Reviewed by: Jimmy Yang <Jimmy.Yang@oracle.com> RB: 18528
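A toy model of the added cleanup (container types and names are illustrative, not InnoDB's dict structures): the index being freed is detached from each virtual column's index list so nothing dangles.

```cpp
#include <algorithm>
#include <vector>

// Toy model: each virtual column keeps a list of the indexes it belongs to.
// When an index is freed after DB_TOO_BIG_RECORD, it must be removed from
// those lists, or a later undo-log read during rollback follows a dangling
// pointer.
struct Index;
struct VirtualCol { std::vector<Index*> v_indexes; };
struct Index { std::vector<VirtualCol*> v_cols; };

void detach_index_from_virtual_cols(Index* index)
{
  for (VirtualCol* col : index->v_cols) {
    auto& list = col->v_indexes;
    list.erase(std::remove(list.begin(), list.end(), index), list.end());
  }
}
```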
-