- 28 Aug, 2019 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Some bugs are detected only after a table definition has been evicted and then reloaded to the InnoDB data dictionary cache. For debug builds, introduce the settable Boolean configuration parameter innodb_evict_tables_on_commit_debug that can be set to request InnoDB to attempt to evict table definitions from the data dictionary cache whenever a transaction is committed.

This has been tested on 10.3 and 10.4 with the following:

  ./mysql-test-run.pl --mysqld=--loose-innodb-evict-tables-on-commit-debug

You can also use the following:

  SET GLOBAL innodb_evict_tables_on_commit_debug=ON;
  SET GLOBAL innodb_evict_tables_on_commit_debug=OFF;

The parameter affects the commit (or rollback or abort) of transactions that have modified persistent InnoDB tables.
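A schematic sketch of this kind of debug-only toggle checked at commit time is shown below; the variable and function names are invented for illustration and are not the actual InnoDB symbols.

  #include <atomic>
  #include <cstdio>

  #ifndef NDEBUG
  // Debug-build-only switch, analogous to a settable Boolean server variable.
  std::atomic<bool> evict_tables_on_commit_debug{false};
  #endif

  void commit_transaction(bool modified_persistent_tables)
  {
    // ... the normal commit work would happen here ...
  #ifndef NDEBUG
    if (modified_persistent_tables &&
        evict_tables_on_commit_debug.load(std::memory_order_relaxed))
    {
      // Attempt to evict the table definitions used by this transaction
      // from the dictionary cache, so later statements must reload them.
      std::puts("debug: evicting table definitions after commit");
    }
  #endif
  }

  int main()
  {
  #ifndef NDEBUG
    evict_tables_on_commit_debug= true;  // the SET GLOBAL ...=ON; equivalent
  #endif
    commit_transaction(true);
    return 0;
  }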
-
- 27 Aug, 2019 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Revert part of fa2a74e0.

trx_reference(): Remove, and merge the relevant part to the only caller trx_rw_is_active(). If the statements trx = NULL; were ever executed, the function would have dereferenced a NULL pointer and crashed in trx_mutex_exit(trx). Hence, those statements must have been unreachable, and they can be replaced with debug assertions.

trx_rw_is_active(): Avoid unnecessary acquisition and release of trx->mutex when do_ref_count=false.

lock_trx_release_locks(): Do not reset trx->id=0. Had the statement been necessary, we would have experienced crashes in trx_reference().
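The reasoning about unreachable code can be illustrated with a generic sketch (hypothetical simplified types, not the InnoDB definitions): a branch whose execution could only end in a null-pointer dereference is demoted to a debug assertion.

  #include <cassert>

  struct txn { bool committed= false; int refs= 0; };

  // Schematically: the old code had a branch that set the pointer to NULL
  // and then still dereferenced it to release a mutex, so that branch could
  // never have executed without crashing. Instead of "handling" the
  // impossible case, assert that it cannot happen.
  int acquire_reference(txn *t)
  {
    assert(t != nullptr);
    assert(!t->committed);
    return ++t->refs;
  }

  int main()
  {
    txn t;
    return acquire_reference(&t) == 1 ? 0 : 1;
  }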
-
Sujatha authored
Cherry picking: Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95

Description:
============
In row based replication, when replicating from a table with a field with character set UTF8mb3 to the same table with the same field set to character set UTF8mb4, a confusing error message is reported.

For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4':
"Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)' to type 'varchar(1)'"
A similar issue exists for the CHAR type as well.

Issues with respect to BLOB types:
For BLOB: LONGBLOB to TINYBLOB - the error message displays an incorrect blob type.
"Column 0 of table 'test.t1' cannot be converted from type 'tinyblob' to type 'tinyblob'"

For BINARY to BINARY - the error message displays an incorrect type for the master side field.
"Column 0 of table 'test.t' cannot be converted from type 'char(1)' to type 'binary(10)'"
A similar issue exists for the VARBINARY type; it is displayed as 'VARCHAR'.

Analysis:
=========
In row based replication, charset information is not sent as part of the metadata from master to slave. For a VARCHAR field, its character length is converted into the equivalent number of octets/bytes and stored internally. At the time of displaying the data to the user, it is converted back to the original character length.
For example: VARCHAR(2) with utf8mb3 is stored as 2*3 = VARCHAR(6). At display time, VARCHAR(6) with charset utf8mb3 becomes 6/3 = VARCHAR(2).
At present the internally converted octet length is sent from master to slave without the charset information. On the slave side, if the type conversion fails, the 'show_sql_type' function is used to get the type-specific information from the metadata. Since no charset information is available, the field type is displayed as VARCHAR(6). This results in a confusing error message.

For CHAR fields:
CHAR(1) utf8mb3 - CHAR(3)
CHAR(1) utf8mb4 - CHAR(4)
The 'show_sql_type' function, which retrieves type information from the metadata, uses (bytes / local charset length) to get the actual character length. If the slave's charset is 'utf8mb4', then CHAR(3/4) --> CHAR(0) and CHAR(4/4) --> CHAR(1). This results in a confusing error message.

Analysis for the BLOB type issue:
A BLOB's length is represented in two forms.
1. Actual length, i.e.
   (length < 256) type= MYSQL_TYPE_TINY_BLOB;
   (length < 65536) type= MYSQL_TYPE_BLOB;
   ...
2. packlength - the number of bytes used to represent the length of the blob:
   1 - tinyblob
   2 - blob
   ...
In row based replication only the packlength is written to the binary log. On the slave side this packlength is interpreted as the actual length of the blob. Hence the length is always < 256 and the type is displayed as tiny blob.

Analysis for the BINARY to BINARY type issue:
The character set information is needed to identify a field's type as char or binary. Since the master side character set information is not available on the slave side, both binary and char fields are displayed as char.

Fix:
===
For CHAR and VARCHAR fields, display their length in octets for both source and target fields. For the target field, display the charset information if it is relevant.
For blob types, change the code to use the packlength and display the appropriate blob type in the error message.
For binary and varbinary fields, use the slave side character set as reference to map them to binary or varbinary fields.
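The byte-versus-character arithmetic above can be reproduced with a small standalone C++ sketch; the mbmaxlen constants and helper names below are illustrative and are not server code.

  #include <cstdio>

  // Maximum bytes per character for the two character sets discussed above.
  const unsigned MBMAXLEN_UTF8MB3= 3;
  const unsigned MBMAXLEN_UTF8MB4= 4;

  // What the master stores internally and sends in the metadata:
  // the column length in octets, with no charset attached.
  unsigned octet_length(unsigned char_length, unsigned mbmaxlen)
  {
    return char_length * mbmaxlen;
  }

  // What a reporting function can only do without charset information:
  // divide the octet length by the local charset's mbmaxlen.
  unsigned reported_char_length(unsigned octets, unsigned local_mbmaxlen)
  {
    return octets / local_mbmaxlen;
  }

  int main()
  {
    // VARCHAR(2) in utf8mb3 is stored as 6 octets; without charset
    // information the slave can only report it as varchar(6).
    std::printf("varchar reported as: varchar(%u)\n",
                octet_length(2, MBMAXLEN_UTF8MB3));
    // CHAR(1) in utf8mb3 is 3 octets; dividing by the slave's utf8mb4
    // mbmaxlen yields CHAR(0), matching the confusing message above.
    std::printf("char reported as: char(%u)\n",
                reported_char_length(octet_length(1, MBMAXLEN_UTF8MB3),
                                     MBMAXLEN_UTF8MB4));
    return 0;
  }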
-
- 26 Aug, 2019 2 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Depending on version, "handle.exe -?" can output either "Handle v4.0" or "Nthandle v4.1"
-
- 21 Aug, 2019 8 commits
-
-
Aleksey Midenkov authored
The current easy fix is not possible, because SELECT clones ha_partition and then closes the clone, which leads to an unclosed transaction in partitions we forcibly prune out. We could solve this by closing these partitions (and releasing them from the transaction) in change_partitions_to_open() at the versioning conditions stage, but this is problematic because a table lock is acquired for each partition at the open stage and therefore must be released when we close the partition handler in change_partitions_to_open(). More details in MDEV-20376. This should change after MDEV-20250, where the mechanism of opening partitions will be improved. This reverts commit cdbac54d.
-
Marko Mäkelä authored
-
Aleksey Midenkov authored
Fix debug build failing with error: extended initializer lists only available with -std=c++11 or -std=gnu++11
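For context, a minimal example (not the actual MariaDB code) of the kind of construct that triggers this error in pre-C++11 mode:

  #include <cstdio>

  struct Range { int lo; int hi; };

  struct Scanner
  {
    Range r;
    // Brace-initializing an aggregate member in the initializer list is an
    // "extended initializer list"; pre-C++11 compilers reject it, hence the
    // -std=c++11 requirement (or a rewrite using member-wise assignment).
    Scanner() : r{0, 10} {}
  };

  int main()
  {
    Scanner s;
    std::printf("%d..%d\n", s.r.lo, s.r.hi);
    return 0;
  }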
-
Marko Mäkelä authored
ha_innobase::open(): Always ignore problems with FOREIGN KEY constraints (pass DICT_ERR_IGNORE_FK_NOKEY), no matter whether foreign_key_checks is enabled. Instead, we must report errors when enforcing the FOREIGN KEY constraints. As a result of ignoring these errors, the tables will be loaded with dict_foreign_t objects whose foreign_index or referenced_index will be NULL.

Also, pass DICT_ERR_IGNORE_FK_NOKEY instead of DICT_ERR_IGNORE_NONE to dict_table_open_on_id_low() in many other cases. Notably, on CREATE TABLE and ALTER TABLE, we will keep validating the FOREIGN KEY constraints as before.

dict_table_open_on_name(): If no flags other than DICT_ERR_IGNORE_FK_NOKEY are set, refuse access to unreadable tables. Some encryption tests rely on this code path.

For the DML code path, we used to have the problem that when one of the indexes was missing in dict_foreign_t, we would ignore the FOREIGN KEY constraint altogether. The following changes address that.

row_ins_check_foreign_constraints(): Add the parameter pk. For the primary key, consider also foreign key constraints for which foreign->foreign_index=NULL (no underlying index is available).

row_ins_check_foreign_constraint(): Report errors also for !check_ref. Remove a redundant check for srv_read_only_mode.

row_ins_foreign_report_add_err(): Tolerate foreign->foreign_index=NULL.
-
Marko Mäkelä authored
fkerr_t: Errors for the foreign key checks. Replaces ulint, which used #define constants that looked like dberr_t literals.

wsrep_dict_foreign_find_index(): Remove. Use dict_foreign_find_index() instead, with default parameters.

dict_foreign_push_index_error(): Do not add redundant quotes around quoted table names.
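As a general illustration of the technique (simplified, and not the actual fkerr_t definition), replacing #define'd integer codes with a dedicated enumeration gives the compiler a distinct type to check:

  #include <cstdio>

  // Before: plain macros that look like values of an unrelated error type
  // and are carried around as an untyped integer (ulint in InnoDB's case):
  //   #define FK_OK             0
  //   #define FK_INDEX_MISSING  1

  // After: a dedicated enumeration for foreign key check results.
  enum class fk_result { ok, index_missing, no_parent_row };

  const char *describe(fk_result e)
  {
    switch (e) {
    case fk_result::ok:            return "ok";
    case fk_result::index_missing: return "no usable index";
    case fk_result::no_parent_row: return "referenced row not found";
    }
    return "unknown";
  }

  int main()
  {
    // describe(0) would no longer compile: the enum cannot be confused
    // with plain integers or with codes of another error enumeration.
    std::printf("%s\n", describe(fk_result::index_missing));
    return 0;
  }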
-
Marko Mäkelä authored
-
Anel Husakovic authored
-
Jan Lindström authored
Add wait conditions, compare cardinality and other information between nodes, and print something only if they differ.
-
- 20 Aug, 2019 11 commits
-
-
Sergei Golubchik authored
heap_scan() sets info->next_block to either an integer number of share->block.records_in_block's or the total number of records in the table. So when this total number of records changes, info->next_block needs to be recalculated to take it into account.

This is a different fix for "Fixes a problem with heap when scanning and insert rows at the same time"
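A simplified sketch of the invariant described above follows; the names and the exact formula are illustrative, not the actual heap engine code.

  #include <algorithm>
  #include <cstdio>

  // next_block must be either a whole multiple of records_in_block or the
  // current total record count, so it has to be recomputed from the current
  // total whenever concurrent inserts change that total during a scan.
  unsigned long recompute_next_block(unsigned long scanned,
                                     unsigned long records_in_block,
                                     unsigned long total_records)
  {
    unsigned long next= (scanned / records_in_block + 1) * records_in_block;
    return std::min(next, total_records);
  }

  int main()
  {
    // 70 records scanned, 32 records per block, table grown to 100 records:
    // the next stop is the block boundary at 96, not a stale end-of-table.
    std::printf("%lu\n", recompute_next_block(70, 32, 100));
    return 0;
  }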
-
Sergei Golubchik authored
This reverts commit 262927a9.
-
Marko Mäkelä authored
Let us invoke wait_all_purged.inc right before the workload. Starting with MDEV-12288 in MariaDB Server 10.3, INSERT also generates purge workload. If we do not ensure that purge has run to completion, the results on 10.3 and later could be nondeterministic.
-
Aleksey Midenkov authored
* Ensure no background purge is done;
* Better result showing the actual number in case of failure;
* Higher and safer difference threshold.
-
Monty authored
-
Monty authored
This causes failures in versioning.update-big in 10.3 and above
-
Marko Mäkelä authored
main.selectivity_innodb: Re-record the result.

MDEV-18863: Make wsrep_sst_mariabackup use Bash due to the += syntax.
-
Marko Mäkelä authored
For MDEV-15955, the fix in create_tmp_field_from_item() would cause a compilation error. After a discussion with Alexander Barkov, the fix was omitted and only the test case was kept. In 10.3 and later, MDEV-15955 is fixed properly by overriding create_tmp_field() in Item_func_user_var.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Alexander Barkov authored
-
- 19 Aug, 2019 10 commits
-
-
Julius Goryavsky authored
The execution of mtr in the Windows environment fails because the new code from MDEV-18565 does not take into account the need to add the ".exe" extension to the names of executable files when searching for prerequisites that are needed to run SST scripts (especially when using mariabackup), and when searching for the paths to some other Galera utilities. This patch fixes this flaw. Also, adding paths to the PATH environment variable is now done with the correct delimiter character.
-
Julius Goryavsky authored
The execution of mtr in the Windows environment fails because the new code from MDEV-18565 does not take into account the need to add the ".exe" extension to the names of executable files when searching for prerequisites that are needed to run SST scripts (especially when using mariabackup), and when searching for the paths to some other Galera utilities. This patch fixes this flaw. Also, adding paths to the PATH environment variable is now done with the correct delimiter character.
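The two portability points are easy to illustrate in isolation; the sketch below is not the mtr Perl code, and the helper names are made up.

  #include <iostream>
  #include <string>

  #ifdef _WIN32
  static const char PATH_DELIM= ';';
  static const char *EXE_SUFFIX= ".exe";
  #else
  static const char PATH_DELIM= ':';
  static const char *EXE_SUFFIX= "";
  #endif

  // Build the platform-specific file name of a tool such as mariabackup.
  std::string executable_name(const std::string &base)
  {
    return base + EXE_SUFFIX;
  }

  // Append a directory to a PATH-style value with the correct delimiter.
  std::string extend_path(const std::string &path, const std::string &dir)
  {
    return path.empty() ? dir : path + PATH_DELIM + dir;
  }

  int main()
  {
    std::cout << executable_name("mariabackup") << "\n";
    std::cout << extend_path("/usr/bin", "/opt/galera/bin") << "\n";
    return 0;
  }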
-
Julius Goryavsky authored
Some users and some scripts (for example, mysqld_multi.sh) use special option groups with names like [mysqld1], [mysqld2], ..., [mysqldN]. But SST scripts currently cannot fully support these option groups. The only option group-related value a script gets from the server is --defaults-group-suffix, if that option was set for mysqld when the server was started. However, the SST scripts are not told by the server to read these option groups, which means that an SST script will fail to read options like innodb-data-home-dir when they are in an option group like [mysqld1]...[mysqldN].

Moreover, SST scripts ignore many parameters that can be passed to them explicitly and cannot transfer them further, for example, to the input of the mariabackup utility. Ideally, we want to transfer all the parameters of the original mysqld call to utilities such as mariabackup; however, the SST script does not receive these parameters from the server and therefore cannot transfer them to mariabackup.

To correct these shortcomings, we need to transfer to the scripts all of the parameters of the original mysqld call, and in the SST scripts themselves provide for the transfer of all of these parameters to utilities such as mariabackup. To prevent these parameters from mixing with the script's own parameters, they should be transferred to the SST script after the special option "--mysqld-args", followed by the string argument with the original parameters as received by the mysqld call at the time of launch (all of these parameters will then be passed on to mariabackup, for example).

In addition, the SST scripts themselves must be refined so that they can read parameters from the user-selected group, not just from the global mysqld configuration group, and so that they can receive the parameters that are important for their work as command-line arguments.
-
Julius Goryavsky authored
Some users and some scripts (for example, mysqld_multi.sh) use special option groups with names like [mysqld1], [mysqld2], ..., [mysqldN]. But SST scripts currently cannot fully support these option groups. The only option group-related value a script gets from the server is --defaults-group-suffix, if that option was set for mysqld when the server was started. However, the SST scripts are not told by the server to read these option groups, which means that an SST script will fail to read options like innodb-data-home-dir when they are in an option group like [mysqld1]...[mysqldN].

Moreover, SST scripts ignore many parameters that can be passed to them explicitly and cannot transfer them further, for example, to the input of the mariabackup utility. Ideally, we want to transfer all the parameters of the original mysqld call to utilities such as mariabackup; however, the SST script does not receive these parameters from the server and therefore cannot transfer them to mariabackup.

To correct these shortcomings, we need to transfer to the scripts all of the parameters of the original mysqld call, and in the SST scripts themselves provide for the transfer of all of these parameters to utilities such as mariabackup. To prevent these parameters from mixing with the script's own parameters, they should be transferred to the SST script after the special option "--mysqld-args", followed by the string argument with the original parameters as received by the mysqld call at the time of launch (all of these parameters will then be passed on to mariabackup, for example).

In addition, the SST scripts themselves must be refined so that they can read parameters from the user-selected group, not just from the global mysqld configuration group, and so that they can receive the parameters that are important for their work as command-line arguments.
-
Julius Goryavsky authored
Some users and some scripts (for example, mysqld_multi.sh) use special option groups with names like [mysqld1], [mysqld2], ..., [mysqldN]. But SST scripts currently cannot fully support these option groups. The only option group-related value a script gets from the server is --defaults-group-suffix, if that option was set for mysqld when the server was started. However, the SST scripts are not told by the server to read these option groups, which means that an SST script will fail to read options like innodb-data-home-dir when they are in an option group like [mysqld1]...[mysqldN].

Moreover, SST scripts ignore many parameters that can be passed to them explicitly and cannot transfer them further, for example, to the input of the mariabackup utility. Ideally, we want to transfer all the parameters of the original mysqld call to utilities such as mariabackup; however, the SST script does not receive these parameters from the server and therefore cannot transfer them to mariabackup.

To correct these shortcomings, we need to transfer to the scripts all of the parameters of the original mysqld call, and in the SST scripts themselves provide for the transfer of all of these parameters to utilities such as mariabackup. To prevent these parameters from mixing with the script's own parameters, they should be transferred to the SST script after the special option "--mysqld-args", followed by the string argument with the original parameters as received by the mysqld call at the time of launch (all of these parameters will then be passed on to mariabackup, for example).

In addition, the SST scripts themselves must be refined so that they can read parameters from the user-selected group, not just from the global mysqld configuration group, and so that they can receive the parameters that are important for their work as command-line arguments.
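A minimal sketch of the argument-forwarding convention described above (written in C++ purely for illustration; the real SST scripts are shell scripts, and only the option name --mysqld-args is taken from the commit message):

  #include <iostream>
  #include <string>
  #include <vector>

  // Split a command line into the script's own options and everything that
  // follows the "--mysqld-args" marker, which is meant to be forwarded
  // verbatim to a tool such as mariabackup.
  void split_args(const std::vector<std::string> &args,
                  std::vector<std::string> &own,
                  std::vector<std::string> &forwarded)
  {
    bool forwarding= false;
    for (const std::string &a : args)
    {
      if (!forwarding && a == "--mysqld-args")
      {
        forwarding= true;            // everything after this is not ours
        continue;
      }
      (forwarding ? forwarded : own).push_back(a);
    }
  }

  int main(int argc, char **argv)
  {
    std::vector<std::string> args(argv + 1, argv + argc);
    std::vector<std::string> own, forwarded;
    split_args(args, own, forwarded);
    std::cout << "script options: " << own.size()
              << ", forwarded to backup tool: " << forwarded.size() << "\n";
    return 0;
  }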
-
Igor Babaev authored
This patch corrects the fix for MDEV-19421, which resolved the problem of parsing some embedded join expressions such as t1 join t2 left join t3 on t2.a=t3.a on t1.a=t2.a. That patch contained a bug that prevented proper context analysis of queries where such expressions were used together with comma-separated table references in FROM clauses.
-
Marko Mäkelä authored
MemorySanitizer is a compile-time instrumentation layer in clang and GCC. Together with AddressSanitizer, it mostly makes the run-time instrumentation of Valgrind redundant. It is a little more tricky to set up, because running with uninstrumented libraries will lead to false positives. You will need an instrumented libc++, and you should use -stdlib=libc++ instead of the default libstdc++. To build the instrumented library, you can refer to https://github.com/google/sanitizers/wiki/MemorySanitizerLibcxxHowTo or you can adapt these steps that worked for me, for clang-8 version 8.0.1:

  cd /mariadb
  sudo apt source libc++-8-dev
  cd llvm-toolchain-8-8.0.1
  mkdir libc++msan; cd libc++msan
  cmake ../libcxx -DCMAKE_BUILD_TYPE=Release -DLLVM_USE_SANITIZER=Memory \
    -DCMAKE_C_COMPILER=clang-8 -DCMAKE_CXX_COMPILER=clang++-8

Then, in your MariaDB build directory, you have to compile with libc++ and bundled libraries, such as WITH_SSL=bundled, WITH_ZLIB=bundled. For uninstrumented system libraries, you will get false positives for uninitialized values. Like this:

  cmake -DWITH_MSAN=ON -DWITH_SSL=bundled -DWITH_ZLIB=bundled \
    -DCMAKE_CXX_FLAGS='-stdlib=libc++' ..

Note: you should also add -O2 to the compiler options, or you may get crashes due to stack overflow.

Finally, to run tests, you must replace libc++ with the instrumented one:

  LD_LIBRARY_PATH=/mariadb/llvm-toolchain-8-8.0.1/libc++msan/lib \
  MSAN_OPTIONS=abort_on_error=1 \
  ./mtr --big-test --parallel=auto --force --retry=0

Failure to do so will report numerous false positives related to operations on std::string and the like.

This is work in progress. Some issues will still have to be fixed for WITH_MSAN to be usable. See MDEV-20377 for details.
-
Monty authored
The bug was that storage_engine::info() was called with a table that was not open in ha_partition::info(). Fixed by ensuring that we are using an opened table.
-
Marko Mäkelä authored
-
Aleksey Midenkov authored
* Replace LINT_INIT for non-struct types with ctor initializers;
* Check that the BUILD_DEPS list is not empty so REMOVE_DUPLICATES won't throw an error.
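An illustration of the first point (simplified; LINT_INIT is a MariaDB macro used only to silence "may be used uninitialized" warnings, and the class and member names here are made up):

  // Before: something like LINT_INIT(m_count) inside the constructor body,
  // used merely to silence "may be used uninitialized" compiler warnings.
  // After: initialize the member directly in the constructor initializer
  // list, which needs no macro and documents the intent.
  class Tracker
  {
  public:
    Tracker() : m_count(0), m_last_error(0) {}
    void add() { ++m_count; }
    unsigned count() const { return m_count; }
  private:
    unsigned m_count;
    int m_last_error;
  };

  int main()
  {
    Tracker t;
    t.add();
    return t.count() == 1 ? 0 : 1;
  }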
-
- 18 Aug, 2019 1 commit
-
-
Monty authored
-
- 16 Aug, 2019 3 commits
-
-
Alexander Barkov authored
-
Kentoku SHIBA authored
-
Thirunarayanan Balathandayuthapani authored
Problem:
========
During .ibd file creation, InnoDB flushes page0 without the crypt information. During recovery, InnoDB encounters an encrypted page read before initialising the crypt data of the tablespace, so it leads to corruption of the page and does not allow InnoDB to start.

Solution:
=========
Write the crypt_data information in page0 during .ibd file creation. During recovery, crypt_data will be initialised while processing the MLOG_FILE_NAME redo log record.
-