- 06 Sep, 2019 2 commits
-
-
Marko Mäkelä authored
With cmake -DWITH_INNODB_AHI=OFF -DCMAKE_BUILD_TYPE=RelWithDebInfo the variable 'i' in fseg_free_extent() was declared but not used.
-
Vlad Lesin authored
The test fails because it reuses mysqltest perl code to copy a directory tree, and this code contains a Windows-specific piece which outputs some diagnostic information. The patch introduces a new parameter for that Windows-specific perl code so that the diagnostic output can be suppressed when the corresponding mysqltest perl module is initialized.
-
- 05 Sep, 2019 1 commit
-
-
Sergei Petrunia authored
Make the test stable: after DROP TABLE, make sure the compaction is run and finishes. If we don't do this, the post-drop compaction may run during the next testcase. It will cause a record from the next testcase to be compacted away when the test logic doesn't expect it, and the test will fail.
-
- 04 Sep, 2019 10 commits
-
-
Sergei Golubchik authored
This is not ideal and needs to be fixed eventually, but it's consistent across all forms of INSERT.
-
Sergei Golubchik authored
MDEV-20403 Assertion `0' or Assertion `btr_validate_index(index, 0)' failed in row_upd_sec_index_entry or error code 126: Index is corrupted upon UPDATE with TIMESTAMP..ON UPDATE
Remove the special treatment of a bare DEFAULT keyword that made it behave inconsistently and differently from DEFAULT(column). Now all forms of the explicit assignment of a default column value behave identically, and all count as an explicitly assigned value (for the purpose of ON UPDATE NOW).
Follow-up for c7c481f4.
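A small sketch of what now behaves identically (the table is illustrative, not from the original patch):
  CREATE TABLE t1 (
    a INT DEFAULT 42,
    ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
  );
  INSERT INTO t1 (a) VALUES (1);
  -- Before this change, a bare DEFAULT was treated specially; now both forms of
  -- explicitly assigning the default count as an explicit assignment of ts,
  -- so the ON UPDATE NOW handling is the same for both statements.
  UPDATE t1 SET ts = DEFAULT,     a = a + 1;
  UPDATE t1 SET ts = DEFAULT(ts), a = a + 1;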
-
Sachin authored
Fix the test case.
-
Monty authored
MDEV-5817 query cache bug (returning inconsistent/old result set) with Aria table parallel inserts, row_format=PAGE
The problem was that for transactional Aria tables (row_type=PAGE and transactional=1), maria_lock_database() didn't flush the state or the query cache. Not flushing the state is correct for transactional tables as this is done by checkpoint, but not flushing the query cache was wrong and could cause concurrent SELECT queries to not be deleted from the cache. Fixed by flushing the query cache as part of commit, if the table has changed.
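A minimal sketch of the affected setup (table and option values are illustrative, not the original test):
  -- transactional Aria table of the kind covered by MDEV-5817
  CREATE TABLE t1 (a INT) ENGINE=Aria ROW_FORMAT=PAGE TRANSACTIONAL=1;
  SET GLOBAL query_cache_size=1048576;
  SET GLOBAL query_cache_type=ON;
  -- connection 1: puts the result into the query cache
  SELECT * FROM t1;
  -- connection 2: concurrent insert; before the fix the cached result for t1
  -- was not invalidated on commit, so connection 1 could keep reading stale data
  INSERT INTO t1 VALUES (1);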
-
Marko Mäkelä authored
Backport the applicable part of Sergey Vojtovich's commit 0ca2ea1a from MariaDB Server 10.3. The trx reference counter was updated under a mutex and read without any protection. This is both slow and unsafe. Use atomic operations for reference counter accesses.
-
Marko Mäkelä authored
MySQL 5.7.9 (and MariaDB 10.2.2) introduced a race condition between InnoDB transaction commit and the conversion of implicit locks into explicit ones.
The assertion failure can be triggered with a test that runs 3 concurrent single-statement transactions in a loop on a simple table:
CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB;
thread1: INSERT INTO t SET a=1;
thread2: DELETE FROM t;
thread3: SELECT * FROM t FOR UPDATE; -- or DELETE FROM t;
The failure scenarios are like the following:
(1) The INSERT statement is being committed, waiting for lock_sys->mutex.
(2) At the time of the failure, both the DELETE and SELECT transactions are active but have not logged any changes yet.
(3) The transaction where the !other_lock assertion fails started lock_rec_convert_impl_to_expl().
(4) After this point, the commit of the INSERT removed the transaction from trx_sys->rw_trx_set, in trx_erase_lists().
(5) The other transaction consulted trx_sys->rw_trx_set and determined that there is no implicit lock. Hence, it grabbed the lock.
(6) The !other_lock assertion fails in lock_rec_add_to_queue() for the lock_rec_convert_impl_to_expl(), because the lock was 'stolen'.
This assertion failure looks genuine, because the INSERT transaction is still active (trx->state=TRX_STATE_ACTIVE).
The problematic step (4) was introduced in mysql/mysql-server@e27e0e0bb75b4d35e87059816f1cc370c09890ad which fixed something related to MVCC (covered by the test innodb.innodb-read-view). Basically, it reintroduced an error that had been mentioned in an earlier commit mysql/mysql-server@a17be6963fc0d9210fa0642d3985b7219cdaf0c5: "The active transaction was removed from trx_sys->rw_trx_set prematurely."
Our fix goes along the following lines:
(a) Implicit locks will be released by assigning trx->state=TRX_STATE_COMMITTED_IN_MEMORY as the first step. This transition will no longer be protected by lock_sys_t::mutex, only by trx->mutex. This idea is by Sergey Vojtovich.
(b) We detach the transaction from trx_sys before starting to release explicit locks.
(c) All callers of trx_rw_is_active() and trx_rw_is_active_low() must recheck trx->state after acquiring trx->mutex.
(d) Before releasing any explicit locks, we will ensure that any activity by other threads to convert implicit locks into explicit ones will have ceased, by checking !trx_is_referenced(trx). There was a glitch in this check when it was part of lock_trx_release_locks(); at the end we would release trx->mutex and acquire lock_sys->mutex and trx->mutex, and fail to recheck (trx_is_referenced() is protected by trx_t::mutex).
(e) Explicit locks can be released in batches (LOCK_RELEASE_INTERVAL=1000) just like we did before.
trx_t::state: Document that the transition to COMMITTED is only protected by trx_t::mutex, no longer by lock_sys_t::mutex.
trx_rw_is_active_low(), trx_rw_is_active(): Document that the transaction state should be rechecked after acquiring trx_t::mutex.
trx_t::commit_state(): New function to change a transaction to committed state, to release implicit locks.
trx_t::release_locks(): New function to release the explicit locks after commit_state().
lock_trx_release_locks(): Move much of the logic to the caller (which must invoke trx_t::commit_state() and trx_t::release_locks() as needed), and assert that the transaction will have locks.
trx_get_trx_by_xid(): Make the parameter a pointer to const.
lock_rec_other_trx_holds_expl(): Recheck trx->state after acquiring trx->mutex, and avoid a redundant lookup of the transaction.
lock_rec_queue_validate(): Recheck impl_trx->state while holding impl_trx->mutex.
row_vers_impl_x_locked(), row_vers_impl_x_locked_low(): Document that the transaction state must be rechecked after trx_mutex_enter().
trx_free_prepared(): Adjust for the changes to lock_trx_release_locks().
-
Marko Mäkelä authored
This is a backport of 900b0790 from MariaDB Server 10.3.
-
Marko Mäkelä authored
We were missing a test that would exercise trx_free_prepared() with innodb_fast_shutdown=0. Add a test. Note: if shutdown hangs due to the XA PREPARE transactions, in MariaDB 10.2 the test would unfortunately pass, but take 2*60 seconds longer, because of two shutdown_server statements timing out after 60 seconds. Starting with MariaDB 10.3, the hung server would be killed with SIGABRT, and the test could fail thanks to a backtrace message.
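A rough sketch of the scenario such a test covers (the statement sequence is illustrative, not the actual test file):
  CREATE TABLE t (a INT PRIMARY KEY) ENGINE=InnoDB;
  XA START 'x1';
  INSERT INTO t VALUES (1);
  XA END 'x1';
  XA PREPARE 'x1';
  -- request a slow shutdown while the prepared transaction still exists;
  -- this is the code path that ends up in trx_free_prepared()
  SET GLOBAL innodb_fast_shutdown=0;
  -- restart the server (e.g. --source include/restart_mysqld.inc in mysqltest),
  -- then the prepared transaction can be recovered and resolved:
  XA RECOVER;
  XA ROLLBACK 'x1';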
-
Marko Mäkelä authored
-
Jan Lindström authored
-
- 03 Sep, 2019 6 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
MDEV-20403 Assertion `0' or Assertion `btr_validate_index(index, 0)' failed in row_upd_sec_index_entry or error code 126: Index is corrupted upon UPDATE with TIMESTAMP..ON UPDATE
Three issues here:
* ON UPDATE DEFAULT NOW columns were updated after generated columns were computed - this broke indexed virtual columns
* ON UPDATE DEFAULT NOW columns were updated after BEFORE triggers, so triggers didn't see the correct NEW value
* in case of a multi-update generated columns were also updated after BEFORE triggers
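A minimal sketch of the second issue, a BEFORE trigger not seeing the auto-updated value (table and trigger names are illustrative):
  CREATE TABLE t1 (
    a INT,
    ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
  ) ENGINE=InnoDB;
  CREATE TABLE log (msg VARCHAR(64));
  CREATE TRIGGER t1_bu BEFORE UPDATE ON t1 FOR EACH ROW
    INSERT INTO log VALUES (CONCAT('new ts = ', NEW.ts));
  INSERT INTO t1 (a) VALUES (1);
  -- before the fix the BEFORE trigger saw the old ts value, because the
  -- ON UPDATE assignment only happened after the trigger had already run
  UPDATE t1 SET a = 2;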
-
Sergei Golubchik authored
On UPDATE, compare_record() was comparing all columns that are marked for writing. But generated columns that are written to the table are always deterministic and cannot change unless normal, non-generated columns were changed. So it is enough to compare only the non-generated columns that were explicitly assigned values in the SET clause.
-
Sergei Golubchik authored
* remove one level of virtual functions
* remove redundant checks
* remove an if() as the value is always known at compilation time
Don't pretend that "DEFAULT expr" and "ON UPDATE DEFAULT NOW" are "basically the same thing".
-
Alexander Barkov authored
Part2: MDEV-18156 Assertion `0' failed or `btr_validate_index(index, 0, false)' in row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE with PAD_CHAR_TO_FULL_LENGTH
This patch allows the server to open old tables that have "bad" generated columns (i.e. indexed virtual generated columns, persistent generated columns) that depend on sql_mode, for general things like SELECT, INSERT, DROP, etc. Warnings are issued in such cases.
Only these commands are now disallowed and return an error:
- CREATE TABLE introducing a "bad" generated column
- ALTER TABLE introducing a "bad" generated column
- CREATE INDEX introducing a "bad" generated column (i.e. adding an index on a virtual generated column that depends on sql_mode)
Note, these commands are allowed:
- ALTER TABLE removing a "bad" generated column
- ALTER TABLE removing an index from a "bad" virtual generated column
- DROP INDEX removing an index from a "bad" virtual generated column
but only if the table does not have any "bad" columns as a result.
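An illustrative sketch of the case that is now rejected (column and index names are made up):
  CREATE TABLE t1 (
    a CHAR(5),
    v VARCHAR(5) AS (a) VIRTUAL  -- value depends on PAD_CHAR_TO_FULL_LENGTH
  );
  -- an unindexed, non-persistent virtual column is not "bad", so the CREATE
  -- succeeds; adding an index on it would introduce a "bad" column, so:
  CREATE INDEX idx_v ON t1 (v);  -- now returns an error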
-
Alexander Barkov authored
MDEV-18156 Assertion `0' failed or `btr_validate_index(index, 0, false)' in row_upd_sec_index_entry or error code 126: Index is corrupted upon DELETE with PAD_CHAR_TO_FULL_LENGTH
This change takes into account a column's GENERATED ALWAYS AS expression dependency on sql_mode's PAD_CHAR_TO_FULL_LENGTH and NO_UNSIGNED_SUBTRACTION flags. Indexed virtual columns as well as persistent generated columns are now not allowed to have such dependencies, to avoid inconsistent data or index files on sql_mode changes.
So an error is now returned in cases like this:
CREATE OR REPLACE TABLE t1 (
  a CHAR(5),
  v VARCHAR(5) AS (a) PERSISTENT -- CHAR->VARCHAR or CHAR->TEXT = ERROR
);
Functions RPAD() and RTRIM() can now remove the dependency on PAD_CHAR_TO_FULL_LENGTH. So this can be used instead:
CREATE OR REPLACE TABLE t1 (
  a CHAR(5),
  v VARCHAR(5) AS (RTRIM(a)) PERSISTENT
);
Note, unlike CHAR->VARCHAR and CHAR->TEXT, this still works; no RPAD(a) is needed:
CREATE OR REPLACE TABLE t1 (
  a CHAR(5),
  v CHAR(5) AS (a) PERSISTENT -- CHAR->CHAR is OK
);
More sql_mode flags may affect values of generated columns. They will be addressed separately. See comments in sql_mode.h for implementation details.
-
- 02 Sep, 2019 1 commit
-
-
Monty authored
-
- 01 Sep, 2019 6 commits
-
-
Monty authored
This allows one to run the test suite even if any of the following options are changed:
- character-set-server
- collation-server
- join-cache-level
- log-basename
- max-allowed-packet
- optimizer-switch
- query-cache-size and query-cache-type
- skip-name-resolve
- table-definition-cache
- table-open-cache
- some InnoDB options
etc.
Changes:
- Don't print out the value of system variables as one can't depend on them being constants.
- Don't set global variables to 'default' as the default may not be the same as the test was started with if there was an additional option file. Instead, save the original value and reset it at the end of the test.
- Tests that depend on the latin1 character set should include default_charset.inc or set the character set to latin1.
- Tests that depend on the original optimizer switch should include default_optimizer_switch.inc.
- Tests that depend on the value of a specific system variable should set it in the test (like optimizer_use_condition_selectivity).
- Split subselect3.test into subselect3.test and subselect3.inc to make it easier to set and reset system variables.
- Added .opt files for tests that required specific options that could be changed by external configuration files.
- Fixed result files in rocksdb & tokudb that had not been updated for a while.
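A sketch of the test-side convention this introduces (hypothetical snippet; the two .inc files are the ones named above):
  --source include/default_charset.inc
  --source include/default_optimizer_switch.inc
  # a test that depends on a specific system variable sets it itself
  # and restores the original value at the end of the test
  SET @save_selectivity= @@optimizer_use_condition_selectivity;
  SET optimizer_use_condition_selectivity=4;
  # ... test body ...
  SET optimizer_use_condition_selectivity=@save_selectivity;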
-
Monty authored
-
Monty authored
This was done to be able to get information about why the embedded server doesn't start.
-
Monty authored
These were part of an incomplete old merge from MySQL 5.6.
-
Monty authored
-
Sergei Golubchik authored
-
- 30 Aug, 2019 2 commits
-
-
Oleksandr Byelkin authored
-
Sergei Petrunia authored
Merge the changes to include/index_merge*inc from the upstream. The changes add this command in many places:
+if ($engine_type == RocksDB)
+{
+  set global rocksdb_force_flush_memtable_now=1;
+}
Also add it in one more place to make the test truly stable.
-
- 29 Aug, 2019 1 commit
-
-
Marko Mäkelä authored
-
- 28 Aug, 2019 5 commits
-
-
Marko Mäkelä authored
Changes of PAGE_MAX_TRX_ID must be redo-logged for correctness. That was fixed in the InnoDB Plugin for MySQL 5.1 already.
-
Marko Mäkelä authored
-
Julius Goryavsky authored
Improved handling of subdirectories in the xtrabackup-v2 SST scripts (similar to MDEV-18863) for more predictable test results (related to xtrabackup-v2 SST)
-
Oleksandr Byelkin authored
MDEV-16932: ASAN heap-use-after-free in my_charlen_utf8 / my_well_formed_char_length_utf8 on 2nd execution of SP with ALTER trying to add bad CHECK
Make automatic name generation happen during execution (not prepare). Check the result of the memory allocation operation.
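A rough sketch of the statement sequence the MDEV title describes (names and the failing CHECK are illustrative, not the original test):
  CREATE TABLE t1 (a INT);
  -- an unnamed CHECK constraint gets its name generated automatically;
  -- the constraint is deliberately "bad" (it references a nonexistent column)
  CREATE PROCEDURE p1()
    ALTER TABLE t1 ADD CONSTRAINT CHECK (b > 0);
  CALL p1();  -- first execution fails with an error
  CALL p1();  -- before the fix, the second execution could touch freed memory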
-
Eugene Kosov authored
-
- 27 Aug, 2019 5 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Revert part of fa2a74e0.
trx_reference(): Remove, and merge the relevant part to the only caller trx_rw_is_active(). If the statements trx = NULL; were ever executed, the function would have dereferenced a NULL pointer and crashed in trx_mutex_exit(trx). Hence, those statements must have been unreachable, and they can be replaced with debug assertions.
trx_rw_is_active(): Avoid unnecessary acquisition and release of trx->mutex when do_ref_count=false.
lock_trx_release_locks(): Do not reset trx->id=0. Had the statement been necessary, we would have experienced crashes in trx_reference().
-
Sujatha authored
Cherry picking: Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95
Description:
============
In row based replication, when replicating from a table with a field with character set UTF8mb3 to the same table with the same field set to character set UTF8mb4, a confusing error message is produced.
For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4':
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to type 'varchar(1)'"
Similar issue with the CHAR type as well.
Issues with respect to BLOB types:
For BLOB: LONGBLOB to TINYBLOB - the error message displays an incorrect blob type:
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type 'tinyblob'"
For BINARY to BINARY - the error message displays an incorrect type for the master side field:
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type 'binary(10)'"
A similar issue exists for the VARBINARY type. It is displayed as 'VARCHAR'.
Analysis:
=========
In row based replication, charset information is not sent as part of the metadata from master to slave. For a VARCHAR field its character length is converted into equivalent octets/bytes and stored internally. At the time of displaying the data to the user it is converted back to the original character length.
For example: VARCHAR(2) in utf8mb3 is stored as 2*3 = VARCHAR(6); at display time it becomes VARCHAR(6) / 3 = VARCHAR(2).
At present the internally converted octet length is sent from master to slave without the charset information. On the slave side, if the type conversion fails, the 'show_sql_type' function is used to get the type-specific information from the metadata. Since no charset information is available, the field type is displayed as VARCHAR(6). This results in a confusing error message.
For CHAR fields: CHAR(1) in utf8mb3 becomes CHAR(3), CHAR(1) in utf8mb4 becomes CHAR(4). The 'show_sql_type' function, which retrieves type information from the metadata, uses (bytes / local charset length) to get the actual character length. If the slave's charset is 'utf8mb4' then CHAR(3/4) --> CHAR(0) and CHAR(4/4) --> CHAR(1). This results in a confusing error message.
Analysis for the BLOB type issue:
A BLOB's length is represented in two forms:
1. Actual length, i.e. (length < 256) type= MYSQL_TYPE_TINY_BLOB; (length < 65536) type= MYSQL_TYPE_BLOB; ...
2. packlength - the number of bytes used to represent the length of the blob: 1 - tinyblob, 2 - blob, ...
In row based replication only the packlength is written in the binary log. On the slave side this packlength is interpreted as the actual length of the blob. Hence the length is always < 256 and the type is displayed as tiny blob.
Analysis for the BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char or binary. Since the master side character set information is not available on the slave side, both binary and char fields are displayed as char.
Fix:
===
For CHAR and VARCHAR fields display their length in octets for both source and target fields. For the target field display the charset information if it is relevant.
For blob types, change the code to use the packlength and display the appropriate blob type in the error message.
For binary and varbinary fields use the slave side character set as reference to map them to binary or varbinary fields.
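For context, a sketch of the replication setup the description talks about (table and column names are illustrative):
  -- on the master: utf8 (a.k.a. utf8mb3)
  CREATE TABLE t1 (c VARCHAR(1) CHARACTER SET utf8) ENGINE=InnoDB;
  -- on the slave: the same table, but with a wider character set
  CREATE TABLE t1 (c VARCHAR(1) CHARACTER SET utf8mb4) ENGINE=InnoDB;
  -- with row-based replication, applying an insert to this table can produce
  -- the conversion error whose reported field lengths this fix makes consistent
  INSERT INTO t1 VALUES ('a');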
-
Jan Lindström authored
-
Alexander Barkov authored
Also fixes: MDEV-20431 GREATEST(int_col,date_col) returns wrong results in a view
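A minimal sketch of the MDEV-20431 symptom mentioned here (hypothetical table and data):
  CREATE TABLE t1 (i INT, d DATE);
  INSERT INTO t1 VALUES (10, '2019-08-27');
  CREATE VIEW v1 AS SELECT GREATEST(i, d) AS g FROM t1;
  -- the expression should evaluate the same whether or not it goes through the view
  SELECT GREATEST(i, d) FROM t1;
  SELECT g FROM v1;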
-
- 26 Aug, 2019 1 commit
-
-
Julius Goryavsky authored
After applying MDEV-18863, in some test configurations SST may fail due to the duplication of some parameters (in particular "--port") in the main part of the command line and after "--mysqld-args", as well as due to incorrect interpretation of the "--port" parameter passed after "--mysqld-args" when the SST script is invoked without explicitly specifying a port for SST. In addition, it is necessary to correctly handle spaces, quotation marks and special characters when copying the original arguments from the argv[] array to a new command line (after "--mysqld-args"). This patch resolves these shortcomings.
-