- 21 Aug, 2023 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
buf_LRU_block_remove_hashed(): Remove a comment that had been added in mysql/mysql-server@aad1c7d0dd8a152ef6bb685356c68ad9978d686a and that apparently referred to buf_LRU_invalidate_tablespace(), which was later replaced with buf_LRU_flush_or_remove_pages() and ultimately with buf_flush_remove_pages() and buf_flush_list_space(). All that code is covered by buf_pool.mutex. The note about releasing the hash_lock for the buf_pool.page_hash slice would actually apply to the last reference to hash_lock in buf_LRU_free_page(), for the case zip=false (retaining a ROW_FORMAT=COMPRESSED page while discarding the uncompressed one).
-
- 18 Aug, 2023 1 commit
-
-
Monty authored
MDEV-29693 ANALYZE TABLE still flushes table definition cache when engine-independent statistics is used

This commit enables reloading of engine-independent statistics without flushing the table from the table definition cache. This is achieved by allowing multiple versions of the TABLE_STATISTICS_CB object and having independent pointers to it in TABLE and TABLE_SHARE. The TABLE_STATISTICS_CB object is reference counted and freed when no one points to it anymore. The TABLE's TABLE_STATISTICS_CB pointer is updated to use the TABLE_SHARE's pointer when read_statistics_for_tables() is called at the beginning of a query.

Main changes:
- read_statistics_for_table() will allocate a new TABLE_STATISTICS_CB object.
- All get_stat_values() functions have a new parameter that tells where collected data should be stored. get_stat_values() no longer uses the table_field object to store data.
- All get_stat_values() functions return 1 if they found any data in the statistics tables.

Other things:
- Fixed INSERT DELAYED to not read statistics tables.
- Removed Statistics_state from TABLE_STATISTICS_CB as it is no longer needed: we are not changing TABLE_SHARE->stats_cb while calculating or loading statistics.
- Store values used with store_from_statistical_minmax_field() in TABLE_STATISTICS_CB::mem_root. This allowed removing the function delete_stat_values_for_table_share().
- Field_blob::store_from_statistical_minmax_field() is implemented but is not normally used, as we do not yet support EIS statistics for blobs. For example, Field_blob::update_min() and Field_blob::update_max() are not implemented. Note that the function can be called if there is a concurrent "ALTER TABLE MODIFY field BLOB" running, because of a bug in ALTER TABLE where it deletes entries from column_stats before it has an exclusive lock on the table.
- Use the result of field->val_str(&val) as a pointer to the result instead of val (safety fix).
- Allocate memory for collected statistics in THD::mem_root, not in TABLE::mem_root. The latter could cause the TABLE object to grow if ANALYZE TABLE was run many times on the same table. This was done in allocate_statistics_for_table(), create_min_max_statistical_fields_for_table() and create_min_max_statistical_fields_for_table_share().
- Store in TABLE_STATISTICS_CB::stats_available which statistics were found in the statistics tables.
- Removed index_table from class Index_prefix_calc as it was not used.
- Added TABLE_SHARE::LOCK_statistics to ensure we don't load EITS in parallel. The first thread will load it; others will reuse the loaded data.
- Eliminated read_histograms_for_table(). The loading happens within read_statistics_for_tables() if histograms are needed. One downside is that if we have read statistics without histograms before and someone requires histograms, we have to read all statistics again (once) from the statistics tables. A smaller downside is the need to call alloc_root() for each individual histogram; before, we could allocate all the space for histograms with a single alloc_root() call.
- Fixed a bug in MyISAM and Aria where they did not properly notice that the table had changed after ANALYZE TABLE. This was not a problem before this patch, as the MyISAM and Aria tables were flushed as part of ANALYZE TABLE, which hid the issue.
- Fixed a bug in ANALYZE TABLE where table->records could be seen as 0 in collect_statistics_for_table(). The effect of this unlikely bug was that a full table scan could be done even if analyze_sample_percentage was not set to 1.
- Changed multiple mallocs in a row to use multi_alloc_root().
- Added mutex protection in update_statistics_for_table() to ensure that several tables are not updating the statistics at the same time.

Some of the changes in sql_statistics.cc are based on a patch from Oleg Smirnov <olernov@gmail.com>

Co-authored-by: Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Vicentiu Ciorbaru <cvicentiu@gmail.com>
Reviewer: Sergei Petrunia <sergey@mariadb.com>
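A minimal sketch of the reference-counting idea described above, not MariaDB code: std::shared_ptr stands in for the reference-counted TABLE_STATISTICS_CB, and the Share/Table structs are illustrative stand-ins for TABLE_SHARE and TABLE.

    // Sketch only: a new statistics version can be published in the share
    // while open tables keep their old version alive until they re-read it.
    #include <memory>
    #include <mutex>

    struct StatsVersion {           // stands in for TABLE_STATISTICS_CB
        double records_estimate = 0;
    };

    struct Share {                  // stands in for TABLE_SHARE
        std::mutex lock_statistics; // like TABLE_SHARE::LOCK_statistics
        std::shared_ptr<const StatsVersion> stats;
    };

    struct Table {                  // stands in for TABLE
        Share* share;
        std::shared_ptr<const StatsVersion> stats;  // independent pointer
    };

    // ANALYZE TABLE publishes a fresh version without flushing the share.
    void publish_new_stats(Share& share, double records) {
        auto fresh = std::make_shared<StatsVersion>();
        fresh->records_estimate = records;
        std::lock_guard<std::mutex> g(share.lock_statistics);
        share.stats = fresh;        // old version freed once its last reader drops it
    }

    // Start of a query: the table picks up whatever version the share holds now,
    // mirroring how read_statistics_for_tables() refreshes TABLE's pointer.
    void refresh_table_stats(Table& t) {
        std::lock_guard<std::mutex> g(t.share->lock_statistics);
        t.stats = t.share->stats;
    }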
-
- 17 Aug, 2023 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_undo_write_xid(): Correct an off-by-one error in a debug assertion.
-
Marko Mäkelä authored
buf_read_page_low(): Remove an error message that could be triggered by buf_read_ahead_linear() or buf_read_ahead_random(). This is a backport of commit c9eff1a1 from MariaDB Server 10.5.
-
Marko Mäkelä authored
buf_read_ahead_random(), buf_read_ahead_linear(): Avoid read-ahead of the last page(s) of ROW_FORMAT=COMPRESSED tablespaces that use a page size of 1024 or 2048 bytes. We invoke os_file_set_size() on integer multiples of 4096 bytes in order to be compatible with the requirements of innodb_flush_method=O_DIRECT regardless of the physical block size of the underlying storage. This change must be null-merged to MariaDB Server 10.5 and later. There, out-of-bounds read-ahead should be handled gracefully by simply discarding the buffer page that had been allocated. Tested by: Matthias Leich
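A hedged sketch of the kind of bounds check this implies; the names (READ_AHEAD_AREA, read_ahead_end) are illustrative, not InnoDB's actual identifiers. The point is simply that the read-ahead window must be clamped so it never requests pages past the last page that exists in the tablespace.

    #include <algorithm>
    #include <cstdint>

    constexpr uint32_t READ_AHEAD_AREA = 64;   // pages per read-ahead window (illustrative)

    // Returns the first page number *not* to read, given the page that triggered
    // read-ahead and the number of pages that exist in the tablespace.
    uint32_t read_ahead_end(uint32_t page_no, uint32_t pages_in_space) {
        uint32_t area_start = page_no - (page_no % READ_AHEAD_AREA);
        return std::min<uint32_t>(area_start + READ_AHEAD_AREA, pages_in_space);
    }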
-
- 16 Aug, 2023 2 commits
-
-
Kristian Nielsen authored
When the SQL driver thread goes to wait for room in the parallel slave worker queue, there was a race where a kill arriving at the right moment could be ignored and the wait would proceed uninterrupted by the kill. Fix by moving the THD::check_killed() call to occur _after_ doing ENTER_COND(). This bug was seen as sporadic failure of the testcase rpl.rpl_parallel (rpl.rpl_parallel_gco_wait_kill since 10.5), with "Slave stopped with wrong error code". Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
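A generic illustration of this race class using standard C++ primitives rather than server code: if the killed flag is checked before the thread registers on the condition (the ENTER_COND() analogue), a kill signalled in between can be lost.

    #include <condition_variable>
    #include <mutex>

    struct WaitQueue {
        std::mutex mtx;
        std::condition_variable cond;
        bool has_room = false;
        bool killed = false;

        // Correct order: take the lock first, then check 'killed', then wait.
        // A kill that flips the flag and signals the condition can no longer
        // slip between the check and the wait.
        bool wait_for_room() {
            std::unique_lock<std::mutex> lk(mtx);
            while (!has_room) {
                if (killed)
                    return false;      // interrupted by kill
                cond.wait(lk);
            }
            return true;
        }

        void kill() {
            std::lock_guard<std::mutex> lk(mtx);
            killed = true;
            cond.notify_all();
        }
    };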
-
Alexander Barkov authored
Something went wrong during a merge (from 10.5 to 10.6) of 68403eed (fixing bugs MDEV-27207 and MDEV-31719). Originally (in 10.5) the fix was done in in_inet6::set() in plugin/type_inet/sql_type_inet.cc. In 10.6 this code resides in a different place: in the method in_fbt::set() of a template class in sql/sql_type_fixedbin.h. During the merge:
- the fix did not properly migrate to in_fbt::set()
- the related MTR tests disappeared
This patch fixes in_fbt::set() properly and restores the MTR tests.
-
- 15 Aug, 2023 15 commits
-
-
Monty authored
The problem is that the first execution of the prepared statement makes a permanent optimization of converting the LEFT JOIN to an INNER JOIN. This is based on the assumption that all user parameters (?) are always constants and that arguments of Item_cond() will not change value between true and false across different executions. (The example was using IS NULL, which changes value depending on whether the parameter is NULL or not.)

The fix is to change Item_cond::fix_fields() and Item_cond::eval_not_null_tables() to not treat user parameters as constants. This ensures that we don't do the LEFT JOIN -> INNER JOIN conversion that causes problems. There are also some things that need to be improved regarding the calculation of not_null_tables_cache, as we get a different value for WHERE 1 or t1.a=1 compared to WHERE t1.a=1 or 1.

Changes done:
- Mark Item_param with the PARAM flag to be able to quickly check in Item_cond::eval_not_null_tables() whether an item contains a prepared statement parameter (just like we check for stored procedure parameters).
- Fixed that Item_cond::not_null_tables_cache does not depend on the order of arguments.
- Don't call item->eval_const_cond() for items that are NOT on the top level of the WHERE clause. This removed a lot of unnecessary warnings in the test suite!
- Do not reset not_null_tables_cache for non-top-level items.
- Simplified Item_cond::fix_fields() by calling eval_not_null_tables() instead of duplicating all of its code.
- Return an error if Item_cond::fix_fields() generates an error. The old code generated an error in some cases, but not in all.
- Fixed all handling of the above error in make_cond_for_tables(). The error handling by the callers did not exist before, which could lead to asserts in many different places in the old code.
- All changes in sql_select.cc are just checking the return value of fix_fields() and make_cond_for_tables() and returning an error value if fix_fields() returns true or make_cond_for_tables() returns NULL and is_error() is set.
- Mark Item_cond as const_item if all arguments return true for can_eval_in_optimize().

Reviewer: Sergei Petrunia <sergey@mariadb.com>
-
Kristian Nielsen authored
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Kristian Nielsen authored
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Kristian Nielsen authored
Remove the exception that InnoDB does not report auto-increment lock waits to parallel replication. There was an assumption that these waits could not cause conflicts with in-order parallel replication and thus need not be reported. However, this assumption is wrong and it is possible to get conflicts that lead to hangs for the duration of --innodb-lock-wait-timeout.

This can be seen with three transactions:
1. T1 is waiting for T3 on an autoinc lock
2. T2 is waiting for T1 to commit
3. T3 is waiting on a normal row lock held by T2

Here, T3 needs to be deadlock killed on the wait by T1.

Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
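An illustrative sketch, not InnoDB or replication code, of why reporting the autoinc wait matters: once the T1 -> T3 edge is visible, the wait-for graph of the example above contains a cycle, so a victim can be chosen instead of everyone waiting out the lock-wait timeout.

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    using Graph = std::map<std::string, std::string>;  // waiter -> blocker

    static bool in_cycle(const Graph& g, const std::string& start) {
        std::set<std::string> seen;
        std::string cur = start;
        while (g.count(cur)) {
            cur = g.at(cur);
            if (cur == start) return true;
            if (!seen.insert(cur).second) return false;  // loop not through start
        }
        return false;
    }

    int main() {
        Graph waits = {
            {"T1", "T3"},   // T1 waits for T3 on the auto-increment lock
            {"T2", "T1"},   // T2 waits for T1 to commit (in-order replication)
            {"T3", "T2"},   // T3 waits on a row lock held by T2
        };
        // With the autoinc wait reported, the T1 -> T3 -> T2 -> T1 cycle is
        // detectable and T3 can be picked as the deadlock victim.
        std::cout << (in_cycle(waits, "T3") ? "deadlock" : "no deadlock") << "\n";
    }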
-
Kristian Nielsen authored
Restore code to make InnoDB choose the second transaction as the deadlock victim if two transactions that need to commit in order for parallel replication deadlock with each other. This code was erroneously removed when VATS was implemented in InnoDB. Also add a test case for InnoDB choosing the right deadlock victim.

Also fixes this bug, with a test case that reliably reproduces it:
MDEV-28776: rpl.rpl_mark_optimize_tbl_ddl fails with timeout on sync_with_master

Reviewed-by: Marko Mäkelä <marko.makela@mariadb.com>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Kristian Nielsen authored
Restore code to make InnoDB choose the second transaction as the deadlock victim if two transactions that need to commit in order for parallel replication deadlock with each other. This code was erroneously removed when VATS was implemented in InnoDB. Also add a test case for InnoDB choosing the right deadlock victim.

Also fixes this bug, with a test case that reliably reproduces it:
MDEV-28776: rpl.rpl_mark_optimize_tbl_ddl fails with timeout on sync_with_master

Note: This should be null-merged to 10.6, as a different fix is needed there due to InnoDB locking code changes.

Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Kristian Nielsen authored
Remove the exception that InnoDB does not report auto-increment lock waits to parallel replication. There was an assumption that these waits could not cause conflicts with in-order parallel replication and thus need not be reported. However, this assumption is wrong and it is possible to get conflicts that lead to hangs for the duration of --innodb-lock-wait-timeout.

This can be seen with three transactions:
1. T1 is waiting for T3 on an autoinc lock
2. T2 is waiting for T1 to commit
3. T3 is waiting on a normal row lock held by T2

Here, T3 needs to be deadlock killed on the wait by T1.

Note: This should be null-merged to 10.6, as a different fix is needed there due to InnoDB lock code changes.

Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Marko Mäkelä authored
The test innodb.alter_rename_files rather frequently hangs in checkpoint_set_now. The test was removed in MariaDB Server 10.5 commit 37e7bde1 when the code that it aimed to cover was simplified. Starting with MariaDB Server 10.5, page flushing and log checkpointing are much simpler, handled by the single buf_flush_page_cleaner() thread. Let us remove the test to avoid occasional failures. We are not going to fix the cause of the failure in MariaDB Server 10.4.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Alexander Barkov authored
This issue was earlier fixed by MDEV-31724; this commit only adds MTR tests.
-
Alexander Barkov authored
Field_varstring::get_copy_func() did not take into account that the functions do_varstring1[_mb] and do_varstring2[_mb] do not support compressed data.

Changing the return value of Field_varstring::get_copy_func() to `do_field_string` if there is compression and truncation at the same time. This fixes the problem, so now it works as follows:
- val_str() uncompresses the data
- The prefix is then calculated on the uncompressed data

Additionally, introducing two new copying functions:
- do_varstring1_no_truncation()
- do_varstring2_no_truncation()

Using the new copying functions in cases when:
- a Field_varstring with length_bytes==1 is changing to a longer Field_varstring with length_bytes==1
- a Field_varstring with length_bytes==2 is changing to a longer Field_varstring with length_bytes==2

In these cases we care about neither compression nor multi-byte prefixes: the entire data gets copied from the source column to the target column as is. This is a kind of new optimization, but it was also needed to preserve existing MTR test results.
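A hypothetical sketch of the verbatim-copy case described in the last paragraph (the function name and buffer layout are illustrative, not the server's): with matching 1-byte length prefixes and a target column at least as long as the source, the value can be copied as-is, with no decompression and no character-boundary handling.

    #include <cstdint>
    #include <cstring>

    // Copy a VARCHAR value stored with a 1-byte length prefix verbatim.
    // The caller guarantees the target column is at least as long as the
    // source column, so the payload always fits.
    void copy_varstring1_verbatim(const uint8_t* from, uint8_t* to) {
        size_t len = from[0];              // 1-byte length prefix
        std::memcpy(to, from, len + 1);    // prefix plus payload, byte for byte
    }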
-
- 14 Aug, 2023 4 commits
-
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Marko Mäkelä authored
MYSQL_SYSVAR_ENUM(checksum_algorithm): Correct the documentation string. Fixes up commit 7a4fbb55 (MDEV-25105).
-
- 11 Aug, 2023 1 commit
-
-
Julius Goryavsky authored
-
- 10 Aug, 2023 6 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Monty authored
This is also related to MDEV-31348 Assertion `last_key_entry >= end_pos' failed in virtual bool JOIN_CACHE_HASHED::put_record()

Valgrind exposed a problem with the join_cache for hash joins:
==25636== Conditional jump or move depends on uninitialised value(s)
==25636==    at 0xA8FF4E: JOIN_CACHE_HASHED::init_hash_table() (sql_join_cache.cc:2901)

The reason for this was that avg_record_length contained a random value if one had used SET optimizer_switch='optimize_join_buffer_size=off'. This causes either 'random size' memory to be allocated (up to join_buffer_size), which can increase memory usage, or, if avg_record_length is less than the row size, memory overwrites in thd->mem_root, which is bad. Fixed by setting avg_record_length in JOIN_CACHE_HASHED::init() before it is used.

There is no test case for MDEV-31893, as the Valgrind run of join_cache_notasan checks that. I added a test case for MDEV-31348.
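A generic illustration, not the JOIN_CACHE_HASHED code, of the bug class and the shape of the fix: a size member that used to be set on only one code path is now assigned in init() before the hash table is sized from it.

    #include <cstddef>
    #include <vector>

    class HashedCache {
        size_t avg_record_length = 0;   // previously left unset on one code path
        std::vector<char> buffer;

    public:
        void init(size_t avg_len) {
            // The fix amounts to computing the average record length on every
            // path before it is consumed, not only when the buffer size is
            // being optimized.
            avg_record_length = avg_len;
        }

        void init_hash_table(size_t expected_rows) {
            // Sizing from an unset member would allocate a "random" amount of
            // memory, or too little for the actual rows; here it is defined.
            buffer.resize(avg_record_length * expected_rows);
        }
    };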
-
Kristian Nielsen authored
The test case accessed slave-relay-bin.000003 without waiting for the IO thread to write it first. If the IO thread was slow, this could fail. Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Kristian Nielsen authored
Revert the old work-around for buggy fdatasync() on Linux ext3. This bug was fixed in Linux more than 10 years ago, at least as far back as kernel version 3.0. Reviewed-by: Marko Mäkelä <marko.makela@mariadb.com> Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
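A small POSIX sketch (the file name is made up) of the call the work-around was avoiding: once fdatasync() returns, the written data is durable, without forcing a full metadata flush the way fsync() does.

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("logfile.bin", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }
        const char buf[] = "log record";
        if (write(fd, buf, sizeof buf) < 0) { perror("write"); close(fd); return 1; }
        if (fdatasync(fd) != 0) {          // data is durable once this returns
            perror("fdatasync");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }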
-
Monty authored
This is also related to MDEV-31348 Assertion `last_key_entry >= end_pos' failed in virtual bool JOIN_CACHE_HASHED::put_record()

Valgrind exposed a problem with the join_cache for hash joins:
==25636== Conditional jump or move depends on uninitialised value(s)
==25636==    at 0xA8FF4E: JOIN_CACHE_HASHED::init_hash_table() (sql_join_cache.cc:2901)

The reason for this was that avg_record_length contained a random value if one had used SET optimizer_switch='optimize_join_buffer_size=off'. This causes either 'random size' memory to be allocated (up to join_buffer_size), which can increase memory usage, or, if avg_record_length is less than the row size, memory overwrites in thd->mem_root, which is bad. Fixed by setting avg_record_length in JOIN_CACHE_HASHED::init() before it is used.

There is no test case for MDEV-31893, as the Valgrind run of join_cache_notasan checks that. I added a test case for MDEV-31348.
-
- 09 Aug, 2023 2 commits
-
-
Sergei Petrunia authored
Remove redundant delete_explain_query() calls in sp_instr_set::exec_core(), sp_instr_set_row_field::exec_core() and sp_instr_set_row_field_by_name::exec_core(). These calls are made before the SP instruction's tables are "closed" by the close_thread_tables() call. When we call close_thread_tables() after that, we can no longer collect the engine's counter variables, as those live in the Explain data structures that have already been deleted. Also, these delete_explain_query() calls are redundant, as sp_lex_keeper::reset_lex_and_exec_core() has another delete_explain_query() call, which is located in the right place, after the close_thread_tables() call.
-
Jan Lindström authored
MDEV-31737: Node never returns from Donor/Desynced to Synced when wsrep_mode = BF_ABORT_MARIABACKUP. The problem was an incorrect condition for when the node should resume and resync at backup_end. Simplified the condition to fix the problem and added a missing test case for wsrep_mode = BF_ABORT_MARIABACKUP. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
- 08 Aug, 2023 3 commits
-
-
Andrew Hutchings authored
This reverts commit 6c405904.
-
Andrew Hutchings authored
This reverts commit b54e4bf0.
-
Oleksandr Byelkin authored
-