- 10 Mar, 2023 2 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
update columnstore
-
- 07 Mar, 2023 2 commits
-
-
Monty authored
Removed an old '* 2' from the HASH join cost. This was made obsolete by a later
patch that added cost for copying the data out from the join buffer to
table->record.

I also added some 'echo' to some test cases to make it easier to debug test
case changes.

Test case changes:
- subselect3_jcl6 and subselect_sj2_jcl6 result changes as materialized tables
  changed to hash join + first_match
-
Monty authored
Firstmatch_picker::check_qep() has an optimization that allows firstmatch to
be used together with join buffer under some conditions. In this case the
cost was assumed to be the same as what best_access_path() had calculated.

However, if HASH+join_buffer was used, then
fix_semijoin_strategies_for_picked_join_order() would remove the join_buffer
(which would cause a full join to be used) and the cost assumption by
Firstmatch_picker::check_qep() would be wrong. Later, check_join_cache_usage()
sees that it's a full scan and decides it can use join buffering (but not the
hash join).

Fixed by also allowing HASH joins with firstmatch. This removes the need to
disable and re-enable the join buffer.

Test case changes:
- HASH join used with firstmatch (Using join buffer (flat, BNLH join))
- Filtered could change with firstmatch as the conversion with and without
  join_buffered lost the filtering information.
- The absence of "re-enabling join buffer" is shown in main.optimizer_trace

Original code by Sergei, optimized by Monty.
Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
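For illustration only, a query shape affected by this change: a semi-join via
an IN subquery, where EXPLAIN can now report FirstMatch together with a hash
join buffer. The tables t1/t2 and their columns are hypothetical.

```
-- Illustrative only: a semi-join (IN subquery) where the optimizer may now
-- pick FirstMatch together with a hash join buffer instead of dropping the
-- join buffer. Tables t1/t2 are hypothetical.
EXPLAIN
SELECT t1.a
FROM t1
WHERE t1.b IN (SELECT t2.b FROM t2 WHERE t2.c > 10);
-- With this patch the Extra column can show both "FirstMatch(t1)" and
-- "Using join buffer (flat, BNLH join)".
```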
-
- 06 Mar, 2023 9 commits
-
-
Marko Mäkelä authored
-
Hugo Wen authored
The AWS KMS plugin saves all key files under the root folder of the data
directory. Increasing key IDs and key rotations will generate a lot of key
files under the root folder, which looks messy and makes folder permissions
etc. hard to maintain.

Now introduce a new plugin parameter `aws_key_management_keyfile_dir` to
define the directory for saving the key files, for better maintenance.
Detailed parameter information is as follows:

```
VARIABLE_NAME: AWS_KEY_MANAGEMENT_KEYFILE_DIR
SESSION_VALUE: NULL
GLOBAL_VALUE: <Directory path>
GLOBAL_VALUE_ORIGIN: COMMAND-LINE
DEFAULT_VALUE:
VARIABLE_SCOPE: GLOBAL
VARIABLE_TYPE: VARCHAR
VARIABLE_COMMENT: Define the directory in which to save key files for the
                  AWS key management plugin. If not set, the root datadir
                  will be used
READ_ONLY: YES
COMMAND_LINE_ARGUMENT: REQUIRED
GLOBAL_VALUE_PATH: NULL
```

All new code of the whole pull request, including one or several files that
are either new files or modified ones, are contributed under the BSD-new
license. I am contributing on behalf of my employer Amazon Web Services, Inc.
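As a hedged sketch, the new read-only variable could be inspected like this
once it has been set on the command line or in a config file; this assumes the
aws_key_management plugin is loaded, and the query only reads metadata:

```
-- Hypothetical check of the new read-only variable; the directory value is
-- whatever was given on the command line, and the plugin must be installed.
SELECT VARIABLE_NAME, GLOBAL_VALUE, READ_ONLY
FROM information_schema.SYSTEM_VARIABLES
WHERE VARIABLE_NAME = 'AWS_KEY_MANAGEMENT_KEYFILE_DIR';
```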
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This fixes up commit 57c526ff
-
Andrew Hutchings authored
If mariadb-install-db was run from a source tree instead of an installation,
it executed `mysqld` instead of `mariadb`, which showed the deprecation
warning. This patch fixes that, as well as fixing messages and links to other
things that have been renamed.
-
- 03 Mar, 2023 3 commits
-
-
Monty authored
This stabilizes main.order_by_optimizer_innodb, where the result varies
depending on the rec_per_key status from the engine.

The logic to prefer a range over a const ref:
- If the range has only one part and it uses more key parts than the ref,
  then use the range.

Example:
  WHERE key_part1=1 and key_part2 > #
Here we will prefer a range over (key_part1,key_part2) instead of a ref over
key_part1.
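A minimal sketch of the query shape described above, using a hypothetical
table t1 with a two-part index; whether the range is actually chosen still
depends on the statistics the engine reports:

```
-- Hypothetical table: the range over (key_part1, key_part2) uses more key
-- parts than a ref over key_part1 alone, so range access is now preferred.
CREATE TABLE t1 (
  key_part1 INT,
  key_part2 INT,
  filler VARCHAR(100),
  KEY k1 (key_part1, key_part2)
) ENGINE=InnoDB;

EXPLAIN SELECT * FROM t1 WHERE key_part1 = 1 AND key_part2 > 100;
```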
-
Marko Mäkelä authored
This fixes a regression due to MDEV-19229. InnoDB would fail to maintain the
maximum transaction ID when it changes and reinitializes the number of undo
tablespaces. InnoDB should maintain the maximum transaction ID in
TRX_RSEG_MAX_TRX_ID of the system rollback segment header.

srv_undo_tablespaces_reinit(): Preserve the system-wide maximum transaction
identifier in the TRX_RSEG_MAX_TRX_ID field of the first rollback segment.
If needed, upgrade the page to the MariaDB 10.3 format first. All this must
be done in the same atomic mini-transaction that will reinitialize the
TRX_SYS page.

Before MariaDB Server 10.3, InnoDB persisted the maximum transaction
identifier only in the TRX_SYS page. MariaDB 10.3 started to treat that page
as a read-only directory of rollback segments, and the maximum transaction
identifier will be recovered from TRX_RSEG_MAX_TRX_ID or from undo logs.
Since a change of innodb_undo_tablespaces is only allowed when no undo log
records exist, the only place to store the persistent maximum transaction
identifier is in TRX_RSEG_MAX_TRX_ID of one of the rollback segment header
pages.

The bug was observed when the database was upgraded directly from MySQL 5.7
or earlier, or from MariaDB Server 10.2 or earlier, to multiple
innodb_undo_tablespaces. On a restart of MariaDB after the upgrade, the
transaction identifier would be reported to be smaller than during the
upgrade:

2023-03-03 10:43:57 0 [Note] InnoDB: log sequence number 2762352; transaction id 1794
2023-03-03 10:44:17 0 [Note] InnoDB: log sequence number 2786076; transaction id 770
-
Alexander Barkov authored
Adding "const" qualifiers to casefold_info_st::page
-
- 02 Mar, 2023 9 commits
-
-
Monty authored
This error was discovered while working on
MDEV-30540 Wrong result with IN list length reaching IN_PREDICATE_CONVERSION_THRESHOLD

If there is a read error from handler::ha_rnd_next() during a recursive
query, st_select_lex_unit::exec_recursive() will crash as it will try to get
the error code from a structure that was deleted by the callee. The code was
using the construct:

  sl->join->exec();
  saved_error=sl->join->error;

This does not work as sl->join was freed by the exec() and sl->join would be
set to 0.

Fixed by having JOIN::exec() return the error code. The included test case
simulates the error in ha_rnd_next(), which causes a crash without the patch.
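For illustration, a minimal recursive query of the shape handled by
st_select_lex_unit::exec_recursive(); the real test case additionally injects
a read error in ha_rnd_next(), which is not reproduced here:

```
-- Minimal recursive CTE of the kind executed by exec_recursive().
-- The crash required an injected read error from ha_rnd_next(); this query
-- alone merely shows the shape of the statement involved.
WITH RECURSIVE seq(n) AS (
  SELECT 1
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 10
)
SELECT * FROM seq;
```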
-
Monty authored
This error was discovered while working on
MDEV-30540 Wrong result with IN list length reaching IN_PREDICATE_CONVERSION_THRESHOLD
Failing test: cte_recursive.test

If one writes to a file, truncates it and then calls mmap() over
file_size + 7, the file size changes to 7. (On Linux mmap() does not change
the file size.)

This caused _ma_read_rnd_dynamic_record() to believe that there are more
records in the data file, which is not the case, and the table will be
marked as corrupted.

Fixed by disabling mmap() in Aria on Windows.
-
Monty authored
The problem was that mysql_derived_prepare() did not correctly set
'distinct' when creating a temporary derived table.

Fixed by separating the check for distinct for queries with and without
UNION.

Other things:
- Fixed a bug in generate_derived_keys_for_table() where we set the wrong
  bit for join_tab->keys
- Cleaned up JOIN::drop_unused_derived_keys()
- Changed TABLE::use_index() to keep unique keys and update share->key_parts

Author: Sergei Petrunia <sergey@mariadb.com>, monty@mariadb.org
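A hedged illustration of the two derived-table shapes the fix distinguishes,
with and without UNION; the tables t1/t2 and column a are hypothetical:

```
-- Hypothetical examples of the two cases the fix separates: a derived table
-- with UNION (implicitly distinct) versus one without UNION, where the
-- 'distinct' property of the temporary table must be set correctly.
SELECT * FROM (SELECT a FROM t1 UNION SELECT a FROM t2) AS dt;  -- with UNION
SELECT * FROM (SELECT DISTINCT a FROM t1) AS dt;                -- without UNION
```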
-
Monty authored
-
Monty authored
- This was just a small performance issue, not a crashing bug.
-
Monty authored
- Remove DBUG calls from my_winfile.c where the call and parameters are
  already printed by mysys.
- Remove DBUG from my_get_osfhandle() and my_get_open_flags() to remove
  DBUG noise.
- Updated convert-debug-for-diff to take Windows into account.
- Changed some DBUG_RETURN(function()) to tmp=function(); DBUG_RETURN(tmp);
  This is needed as Visual C++ prints for DBUG binaries a trace for
  func_a() { DBUG_ENTER("func_a"); DBUG_RETURN(func_b()) }
  as
    >func_a
    <func_a
    >func_b
    <func_b
  instead of, when using gcc:
    >func_a
    | >func_b
    | <func_b
    <func_a
-
Thirunarayanan Balathandayuthapani authored
- InnoDB fails to reset check_foreigns and check_unique_secondary in
  trx_t::free() and trx_t::commit_cleanup(). This led to bulk insert being
  used in internal InnoDB FTS table operations.
-
Thirunarayanan Balathandayuthapani authored
cmp_dtuple_rec_with_match_bytes - InnoDB shouldn't use the adaptive hash index for change buffer indexes.
-
Daniel Black authored
Use MariaDB named executables. Also remove unnecessary slave references.

rename
  50-mysql-clients.cnf -> 50-mariadb-clients.cnf
  50-mysqld_safe.cnf   -> 50-mariadb_safe.cnf
-
- 28 Feb, 2023 9 commits
-
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Starting with commit 0de3be8c (MDEV-30671), the field TRX_UNDO_NEEDS_PURGE
lost its previous meaning. The following scenario is possible:

(1) InnoDB is killed at a point of time corresponding to the durable
    execution of some fseg_free_step_not_header() but not
    trx_purge_remove_log_hdr().
(2) After restart, the affected pages are allocated for something else.
(3) Purge will attempt to access the newly reallocated pages when looking
    for some old undo log records.

trx_purge_free_segment(): Invoke trx_purge_remove_log_hdr() as the first
thing, to be safe. If the server is killed, some pages will never be freed.
That is the lesser evil. Also, before each mtr.start(), invoke
log_free_check() to prevent ib_logfile0 overrun.
-
Marko Mäkelä authored
Because downgrades from 11.0 to older MariaDB server are not possible due to the removal of the InnoDB change buffer, there is no need to access the field TRX_UNDO_NEEDS_PURGE anymore.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 27 Feb, 2023 3 commits
-
-
Monty authored
This patch also fixes some bugs detected by valgrind after this patch:

- Not enough copy_func elements were allocated by Create_tmp_table(), which
  caused a memory overwrite in Create_tmp_table::add_fields().
  I added an ASSERT() to be able to detect this also without valgrind.
  The bug was that TMP_TABLE_PARAM::copy_fields was not correctly set when
  calling create_tmp_table().
- Aria::empty_bits is not allocated if there are no varchar/char/blob fields
  in the table. Fixed the code to take this into account. This cannot cause
  any issues as this is just a memory access into other Aria memory and the
  content of the memory would not be used.
- Aria::last_key_buff was not allocated big enough. This may have caused
  issues with rtrees and ma_extra(HA_EXTRA_REMEMBER_POS) as they would use
  the same memory area.
- Aria and MyISAM didn't take extended key parts into account, which caused
  problems when copying rec_per_key from the engine to the SQL level.
- Mark asan builds with 'asan' in the version string to detect these in
  not_valgrind_build.inc. This is needed to not have main.sp-no-valgrind
  fail with asan.
-
Monty authored
The test failed if one had disabled some engines during compilation
-
Marko Mäkelä authored
-
- 26 Feb, 2023 1 commit
-
-
Monty authored
This was discovered as part of adding a protected memory area between each
area allocated by multi_alloc(). The patch that adds the protection will be
pushed in 10.5. This patch adds fixes that are unique to 10.10.
-
- 24 Feb, 2023 1 commit
-
-
Marko Mäkelä authored
It is not safe to invoke trx_purge_free_segment() or execute
innodb_undo_log_truncate=ON before all undo log records in the rollback
segment have been processed.

A prominent failure that would occur due to premature freeing of undo log
pages is that trx_undo_get_undo_rec() would crash when trying to copy an
undo log record to fetch the previous version of a record.

If trx_undo_get_undo_rec() was not invoked in the unlucky time frame, then
the symptom would be that some committed transaction history is never
removed. This would be detected by CHECK TABLE...EXTENDED that was
implemented in commit ab019010. Such a garbage collection leak should be
possible even when using innodb_undo_log_truncate=OFF, just involving
trx_purge_free_segment().

trx_rseg_t::needs_purge: Change the type from Boolean to a transaction
identifier, noting the most recent non-purged transaction, or 0 if
everything has been purged. On transaction start, we initialize this to 1
more than the transaction start ID. On recovery, the field may be adjusted
to the transaction end ID (TRX_UNDO_TRX_NO) if it is larger.

The field TRX_UNDO_NEEDS_PURGE becomes write-only; only some debug
assertions would validate its value. The field reflects the old inaccurate
Boolean field trx_rseg_t::needs_purge.

trx_undo_mem_create_at_db_start(), trx_undo_lists_init(),
trx_rseg_mem_restore(): Remove the parameter max_trx_id. Instead, store the
maximum in trx_rseg_t::needs_purge, where trx_rseg_array_init() will find it.

trx_purge_free_segment(): Contiguously hold a lock on trx_rseg_t to prevent
any concurrent allocation of undo log.

trx_purge_truncate_rseg_history(): Only invoke trx_purge_free_segment() if
the rollback segment is empty and there are no pending transactions
associated with it.

trx_purge_truncate_history(): Only proceed with innodb_undo_log_truncate=ON
if trx_rseg_t::needs_purge indicates that all history has been purged.

Tested by: Matthias Leich
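The two user-visible operations mentioned above, sketched as SQL; the table
name t is a placeholder:

```
-- Undo tablespace truncation now only proceeds once all history in the
-- rollback segment has been purged:
SET GLOBAL innodb_undo_log_truncate = ON;
-- The garbage-collection leak (committed history never removed) would be
-- detected by the extended check added in commit ab019010:
CHECK TABLE t EXTENDED;
```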
-
- 23 Feb, 2023 1 commit
-
-
Alexander Barkov authored
The array my_unicase_pages_unicode520[7] erroneously mapped to plane06 instead of plane07.
-