- 11 Feb, 2019 1 commit
-
-
Marko Mäkelä authored
When importing a tablespace, we must initialize dummy DEFAULT NULL values for any instantly added columns in order to avoid a debug assertion failure when PageConverter::update_records() invokes rec_get_offsets(). Finally, when the operation completes, we must evict and reload the table definition, so that the correct default values for instantly added columns will be loaded.

ha_innobase::discard_or_import_tablespace(): On successful IMPORT TABLESPACE, evict and reload the table definition, so that btr_cur_instant_init() will load the correct metadata.

PageConverter::update_index_page(): Fill in dummy DEFAULT NULL values for instantly added columns. These will be replaced upon the completion of the operation by evicting and reloading the metadata.

row_discard_tablespace(): Invoke dict_table_t::remove_instant(). After DISCARD TABLESPACE, the table is no longer in "instant ALTER" format, because there is no data file attached.
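A minimal sketch of the user-visible flow this fix addresses, assuming a hypothetical table t1 with an instantly added column whose data file is transplanted via DISCARD/IMPORT TABLESPACE:

  -- On the source server: create the table, add a column instantly, export it.
  CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1), (2);
  ALTER TABLE t1 ADD COLUMN b INT DEFAULT 5, ALGORITHM=INSTANT;
  FLUSH TABLES t1 FOR EXPORT;   -- copy t1.ibd and t1.cfg away, then:
  UNLOCK TABLES;

  -- On the destination server: same table definition, then swap in the file.
  ALTER TABLE t1 DISCARD TABLESPACE;
  -- (copy the exported t1.ibd / t1.cfg into the destination datadir here)
  ALTER TABLE t1 IMPORT TABLESPACE;
  -- After IMPORT the table definition is evicted and reloaded, so old rows
  -- show the correct instant default for column b instead of dummy NULLs.
  SELECT * FROM t1;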
-
- 06 Feb, 2019 5 commits
-
-
Monty authored
- Fixes building with galera and tokudb
- Added support for --without-wsrep BUILD script option
-
Monty authored
- Backport from 10.4
-
Monty authored
-
Monty authored
-
Daniel Black authored
According to close(2), "Retrying the close() after a failure return is the wrong thing to do." Corrects 5c81cb88 in MDEV-15635.
-
- 04 Feb, 2019 2 commits
-
-
Eugene Kosov authored
Make ALGORITHM=INSTANT explicit.
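The message is terse; a hypothetical illustration of what requesting the algorithm explicitly means in a test: the statement must complete instantly or fail, rather than silently falling back to a slower algorithm.

  CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB;
  -- Supported instantly: succeeds.
  ALTER TABLE t ADD COLUMN b INT, ALGORITHM=INSTANT;
  -- Not supported instantly: refused instead of falling back to a
  -- table-copying or in-place rebuild algorithm.
  ALTER TABLE t DROP PRIMARY KEY, ALGORITHM=INSTANT;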
-
Marko Mäkelä authored
-
- 02 Feb, 2019 1 commit
-
-
Vladislav Vaintroub authored
Store original charset during client authentication, and restore it for COM_RESET_CONNECTION
-
- 01 Feb, 2019 4 commits
-
-
Alexey Botchkov authored
No need to lowercase table names on case-sensitive file systems, as the cache won't contain the lowercased table anyway. Moreover, lowercasing prevents the UPPERCASE.frm file from being deleted.
-
Thirunarayanan Balathandayuthapani authored
Problem:
=======
Mariabackup incremental prepare creates a new tablespace when it encounters a new tablespace. It sets the initial size to FIL_IBD_FILE_INITIAL_SIZE (4), but while applying the redo log it tries to access the 5th page, which leads to an out-of-tablespace error.

Fix:
===
While parsing the redo log records, track FSP_SIZE in recv_spaces for the respective space id. Assign recv_size to the tablespace when it is loaded, and extend the tablespace to recv_size while applying the redo log records.
-
Vladislav Vaintroub authored
-
Thirunarayanan Balathandayuthapani authored
- Added retry logic for the case where validation of the first page fails with a checksum mismatch.
-
- 31 Jan, 2019 10 commits
-
-
Vladislav Vaintroub authored
Do not try to write the ER_SHUTDOWN error message to the socket when it has been forcefully closed by the shutdown. This avoids a race condition (an attempt to write to a closed socket when the connection shuts down by itself).
-
Jan Lindström authored
MDEV-18426: Most of the mtr tests in the galera_3nodes suite fail
-
Kentoku authored
-
Kentoku authored
-
Kentoku authored
add simplified slave_trx_isolation.test
-
Kentoku authored
add simplified quick_mode.test
-
Kentoku authored
Add a system variable spider_slave_trx_isolation.

- spider_slave_trx_isolation
  The transaction isolation level used when a Spider table is accessed by the slave SQL thread.
    -1 : OFF
     0 : READ UNCOMMITTED
     1 : READ COMMITTED
     2 : REPEATABLE READ
     3 : SERIALIZABLE
  The default value is -1.

Also fix miscellaneous Spider typos.
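A minimal usage sketch for the new variable (assuming it is set as a global variable on the replication slave; numeric values follow the list above):

  -- Make Spider use READ COMMITTED when its tables are written by the
  -- slave SQL thread; -1 (the default) leaves the behaviour unchanged.
  SET GLOBAL spider_slave_trx_isolation = 1;
  SELECT @@GLOBAL.spider_slave_trx_isolation;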
-
Kentoku authored
Change the default values of the following parameters:
- quick_mode: 0 -> 3
- quick_page_size: 100 -> 1024

Add the following parameter for limiting the result page size in bytes:
- quick_page_byte (qpb)
  Number of bytes in a page when fetching rows one by one. When quick_mode is 1 or 2, Spider stores at least 1 record even if quick_page_byte is smaller than 1 record. When quick_mode is 3, quick_page_byte is used for judging whether to use a temporary table. When the server parameter spider_quick_page_byte is set, it takes priority. The default value is 10485760.

Fix an "out of sync" issue when using quick_mode = 1 or 2.
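A hedged sketch of how these parameters can be set, both as server variables and as per-table Spider parameters in the COMMENT string (the table, server name, and values are hypothetical):

  SET GLOBAL spider_quick_mode = 3;              -- new default
  SET GLOBAL spider_quick_page_byte = 10485760;  -- new default, 10 MiB

  CREATE TABLE t_remote (
    id INT PRIMARY KEY,
    val VARCHAR(100)
  ) ENGINE=Spider
    COMMENT='table "t", srv "backend1", quick_mode "2", quick_page_byte "1048576"';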
-
Kentoku authored
The fields of the temporary table were not created in the create_tmp_table function, because item->const_item() was true. But the temporary tables created by Spider always use all columns, so Spider should call create_tmp_table with the TMP_TABLE_ALL_COLUMNS flag.
-
Kentoku authored
-
- 30 Jan, 2019 2 commits
-
-
Julius Goryavsky authored
Most of the mtr tests in the galera_3nodes suite fail for a variety of reasons with a variety of errors. Some tests simply need to add the missing "connection" lines to the result files, but many of them fail due to substantial errors that require reworking test files. This patch adds the missing "connection" lines to the result files and fixes several substantial flaws in the galera_3nodes suite tests and in the mtr framework service files, adapting the tests from galera_3nodes for the current version of MariaDB. https://jira.mariadb.org/browse/MDEV-18426
-
Thirunarayanan Balathandayuthapani authored
Analysis:
========
Increasing the length of an indexed varchar column is not an instant operation for InnoDB.

Fix:
===
- Introduce the new handler flag Alter_inplace_info::ALTER_COLUMN_INDEX_LENGTH to indicate that the index length differs due to a change of the column length.
- InnoDB treats the ALTER_COLUMN_INDEX_LENGTH flag as an instant operation.

This is a port of the following MySQL fix:

commit 913071c0b16cc03e703308250d795bc381627e37
Author: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
Date:   Wed May 30 14:54:46 2018 +0530

    BUG#26848813: INDEXED COLUMN CAN'T BE CHANGED FROM VARCHAR(15) TO VARCHAR(40) INSTANTANEOUSLY
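A sketch of the operation that now avoids rebuilding the index (hypothetical table; whether the original test requests ALGORITHM=INSTANT or ALGORITHM=INPLACE may differ):

  CREATE TABLE t (c1 VARCHAR(15), KEY(c1)) ENGINE=InnoDB;
  -- Enlarging the indexed VARCHAR within the same length-byte format is
  -- treated as an instant (metadata-only) change per the description above.
  ALTER TABLE t MODIFY c1 VARCHAR(40), ALGORITHM=INSTANT;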
-
- 28 Jan, 2019 2 commits
-
-
Marko Mäkelä authored
The parameters innodb_file_format and innodb_large_prefix were overridden in the Debian-distributed configuration files, because the default values of these parameters between MariaDB 5.5 and MariaDB 10.2 did not make any sense. To allow a more seamless upgrade from MariaDB 10.1 to later versions, let InnoDB recognize the parameters innodb_file_format and innodb_large_prefix and issue deprecation warnings for them if they are specified. A deprecation period of only one major release (one year between the MariaDB 10.2 and 10.3 releases) is insufficient for these widely used parameters.
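A hedged illustration, assuming the reintroduced parameters remain settable at runtime and merely emit warnings (the exact warning text and accepted values may differ):

  SET GLOBAL innodb_file_format = 'Barracuda';
  SHOW WARNINGS;   -- deprecation warning; the value is otherwise ignored
  SET GLOBAL innodb_large_prefix = ON;
  SHOW WARNINGS;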
-
Jan Lindström authored
MDEV-15740 Fixes to Galera transaction recovery
-
- 27 Jan, 2019 3 commits
-
-
Teemu Ollakka authored
-
Teemu Ollakka authored
-
Teemu Ollakka authored
If the TC log did not provide a list of XIDs to recover, commit by XID was skipped during wsrep recovery when binlog emulation was on. However, with wsrep we want to commit every prepared transaction that has an assigned wsrep XID, since the transaction has already been committed in the cluster. Added a special condition to always proceed to commit by XID in xarecover_handlerton() if binlog is off and the recovered transaction has a wsrep XID.
-
- 25 Jan, 2019 10 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
and gitignore myrocks_hotbackup (as it's now generated). Closes #1081
-
Honza Horak authored
Closes #1080
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
It's too late. Might work or not (and on buster it doesn't).
-
Sergei Golubchik authored
-
Eugene Kosov authored
MDEV-18057 Assertion `(node->state == 5) || (node->state == 6)' failed in row_upd_sec_step upon DELETE after UPDATE failed due to FK violation

The idea of the fix: reset state from previous query.

row_upd_clust_step(): reset cached index before updating a clustered index

Closes #1133
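A hypothetical repro sketch matching the symptom in the title (the original test case may differ): an UPDATE that fails with a foreign key violation, followed by a DELETE on the same row in the same session.

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT PRIMARY KEY, pid INT,
                      FOREIGN KEY (pid) REFERENCES parent(id)) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1, 1);
  UPDATE child SET pid = 2 WHERE id = 1;  -- fails: no matching row in parent
  DELETE FROM child WHERE id = 1;         -- previously could hit the debug assertion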
-
Marko Mäkelä authored
-
Teemu Ollakka authored
Clear the wsrep XID in innobase_rollback_by_xid() for recovered wsrep transactions, in order to avoid resetting the XID storage when rolling back a wsrep transaction during recovery. Sort wsrep XIDs read from the storage engine in ascending order and verify that the range is continuous during crash recovery. If binlog is off, commit all recovered transactions in the continuous seqno range. This is safe because all transactions with a wsrep XID have been certified and must be committed in the cluster. On the other hand, if binlog is on, respect the binlog as the transaction coordinator in order to avoid missing transactions in the binlog that have been committed into the storage engine.
-