- 08 Feb, 2024 5 commits
-
-
Daniel Black authored
Recording both is useful on a replication relay when the backup can be used to replace the server, or act as a new replica to the server.

If an option =2 (commented) is selected, allow the alternate option to exist. This still disables --dump-slave=1 --master-data=1, as having a CHANGE MASTER TO and START SLAVE on different positions would be confusing and dangerous to try to execute the output. The previous behaviour of silently disabling --master-data occurs in this case.

The commented code related to --dump-slave/--master-data is greatly expanded for human consumption.

A redundant opt_slave_data= 0 was removed from get_opts. If --dump-slave=1 or 2, then the only possible value of --master-data is a valid one.

Re-ordered to prefer GTID-based replication.

Based on code from Elena Stepanova.

Review by: Brandon Nesterenko and Anel Husakovic
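As a rough, hedged illustration only (file names, positions and GTIDs below are invented placeholders, not output captured from this change), a dump taken on a relay with both options in commented form could carry two commented positions, one for the relay's master and one for the relay server itself:

  -- Sketch of a combined --dump-slave=2 --master-data=2 dump header; all values are placeholders
  -- Position of the relay's master (--dump-slave):
  -- SET GLOBAL gtid_slave_pos='0-1-100';
  -- CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000002', MASTER_LOG_POS=1234;
  -- Position of the relay server itself (--master-data):
  -- CHANGE MASTER TO MASTER_LOG_FILE='relayhost-bin.000007', MASTER_LOG_POS=5678;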
-
Daniel Black authored
This will make it easier to show changes.
-
mariadb-DebarunBanerjee authored
If we fail to open a tablespace while looking for FILE_CHECKPOINT, we set the corruption flag. Specifically, if the encryption key is missing, we would not be able to open an encrypted tablespace and the flag could be set. We missed checking for this flag and reported "Missing FILE_CHECKPOINT".

Addressed a review comment to improve the test: flush pages before starting the no-checkpoint block. It should reduce the number of cases where the test is skipped because some intermediate checkpoint is triggered.
-
Daniel Lenski authored
Fix inconsistent definition of PERFORMANCE_SCHEMA.REPLICATION_APPLIER_STATUS.COUNT_TRANSACTIONS_RETRIES column.

This column (`COUNT_TRANSACTIONS_RETRIES`) is defined as `BIGINT UNSIGNED` (64-bit unsigned integer) in the user-visible SQL definition:
https://github.com/MariaDB/server/blob/182ff21ace34ea4f00fb5b66689b172323d91f99/storage/perfschema/table_replication_applier_status.cc#L66

  "COUNT_TRANSACTIONS_RETRIES BIGINT unsigned not null comment 'The number of retries that were made because the replication SQL thread failed to apply a transaction.',"

And its value is internally set/updated using the `set_field_ulonglong` function:
https://github.com/MariaDB/server/blob/182ff21ace34ea4f00fb5b66689b172323d91f99/storage/perfschema/table_replication_applier_status.cc#L231-L233

  case 3: /* total number of times transactions were retried */
    set_field_ulonglong(f, m_row.count_transactions_retries);
    break;

… but the structure where it is stored allocates only `ulong` for it:
https://github.com/MariaDB/server/blob/182ff21ace34ea4f00fb5b66689b172323d91f99/storage/perfschema/table_replication_applier_status.h#L62

  ulong count_transactions_retries;

As a result of this inconsistency:
1. On any platform where `ulong` is `uint32_t` and `ulonglong` is `uint64_t`, setting this value would corrupt the 4 bytes of memory *following* the 4 bytes actually allocated for it. Likely this problem was never noticed because this is the final element in the structure, and the structure is padded by the compiler to prevent memory corruption errors.
2. On any BIG-ENDIAN platform where `ulong` is `uint32_t` and `ulonglong` is `uint64_t`, reading back the value of this column will result in total garbage. Likely this problem was never noticed because MariaDB has not been tested on 32-bit big-endian platforms.

In order not to affect the user-visible/SQL definition of this column, the correct way to fix this issue is to change it to `ulonglong` in the structure definition.

See https://github.com/MariaDB/server/pull/2763/files#r1329110832 for the original identification and discussion of this issue.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services.
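A quick way to look at the user-visible side of the column (a hedged sketch; the table is only populated meaningfully on a replica with a running SQL thread):

  SELECT COUNT_TRANSACTIONS_RETRIES
    FROM performance_schema.replication_applier_status;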
-
Robin Newhouse authored
As was done in dc771113 for `support-files/CMakeLists.txt`, do not rely on the existence of the `CMakeFiles/${target}.dir` directory. It is not there for custom targets in the Ninja build. This regression was introduced in #1131, which likely copied the pattern from e79e8406 before that regression was addressed in dc771113.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services.
-
- 07 Feb, 2024 5 commits
-
-
mariadb-DebarunBanerjee authored
mariadb-backup with the --prepare option could result in an empty redo log file. When --prepare is followed by --prepare --export, we exit early in the srv_start function without opening the ibdata1 tablespace. Later, while trying to read the rollback segment header page, we hit the debug assert which claims that the system space should already have been opened.

There are two assert cases here:
Issue-1: The system tablespace object is not there in the fil space hash, i.e. srv_sys_space.open_or_create() is not called.
Issue-2: The system tablespace data file ibdata1 is not opened, i.e. fil_system.sys_space->open() is not called.

Fix: For empty redo log and restore operation, open the system tablespace before returning.
-
Vlad Lesin authored
THE FIX MUST NOT BE MERGED TO 10.6+, BECAUSE 10.6+ IS NOT AFFECTED!

The test is waiting for delete-marked record purging. But this does not happen under the following conditions:
1. "START TRANSACTION WITH CONSISTENT SNAPSHOT" - is active, has not been rolled back yet
2. "DELETE FROM t WHERE b = 20 # trx_1" - is committed
3. "INSERT INTO t VALUES(10, 20) # trx_2" - hanging on the "ib_after_row_insert" sync point, waiting for the "first_ins_cont" signal
4. "DELETE FROM t WHERE b = 20 # trx_3" - blocked on the record delete-marked by trx_1, waiting for trx_2
5. connection "default" is waiting on 'now WAIT_FOR row_purge_del_mark_finished'

purge_coordinator_callback_low() sets
  purge_state.m_history_length= srv_do_purge(&n_total_purged);
even if nothing was purged, as in our case. Nothing was purged because the transaction with a consistent snapshot was still alive during the purging procedure.

Then purge_coordinator_timer_callback() does not wake the purge thread if the following condition is true:
  purge_state.m_history_length == trx_sys.rseg_history_len

The above condition is true for our case, because we are waiting for delete-marked record purging, and trx_sys.rseg_history_len does not grow.

Only 10.5 is affected, because there is no such condition in 10.6, i.e. the purge thread is woken up even if the history size was not changed while the purge coordinator thread was suspended. The easiest way to fix it is just to remove the test from 10.5.
-
Thirunarayanan Balathandayuthapani authored
- Failed to reset the innodb_fil_make_page_dirty_debug variable in the innodb_saved_page_number_debug_basic test case.
-
Yuchen Pei authored
-
Daniel Black authored
Without an OS error it could be one of the many errors from write.
-
- 06 Feb, 2024 3 commits
-
-
Oleksandr Byelkin authored
-
Daniel Bartholomew authored
-
mariadb-DebarunBanerjee authored
MDEV-18288 Transportable Tablespaces leave AUTO_INCREMENT in mismatched state, causing INSERT errors in newly imported tables when .cfg is not used.

During import, if a cfg file is not specified, we don't update the autoinc field in the InnoDB dictionary object dict_table_t. The next insert tries to insert from the starting position of auto increment and fails. It can be observed that the issue is resolved once the server is restarted, as the persistent value is read correctly from PAGE_ROOT_AUTO_INC in the index root page. The patch fixes the issue by reading the auto increment value directly from PAGE_ROOT_AUTO_INC during import if a cfg file is not specified.

Test Fix:
1. import_bugs.test: Embedded mode warning has an absolute path. Regular expression replacement in the test.
2. full_crc32_import.test: Table level auto increment mismatch after import. It was using the auto increment data from the table prior to discard and import, which is not right. This cached auto increment value is higher than the actual inserted value and the value stored in PAGE_ROOT_AUTO_INC. Updated the result file and added validation for checking the maximum value of the auto increment column.
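A minimal sketch of the scenario (table name and file-copy steps are illustrative only, assuming the .ibd is copied without its .cfg):

  CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
  INSERT INTO t1 (v) VALUES (1),(2),(3);
  FLUSH TABLES t1 FOR EXPORT;   -- copy t1.ibd away, deliberately skipping t1.cfg
  UNLOCK TABLES;
  ALTER TABLE t1 DISCARD TABLESPACE;
  -- copy the saved t1.ibd back into the datadir
  ALTER TABLE t1 IMPORT TABLESPACE;
  INSERT INTO t1 (v) VALUES (4);  -- before the fix this could hit a duplicate-key error on id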
-
- 04 Feb, 2024 1 commit
-
-
Igor Babaev authored
This bug led to wrong result sets returned by the second execution of prepared statements from selects using mergeable derived tables pushed into an external engine. Such derived tables are always materialized. The decision that they have to be materialized is taken late, in the function mysql_derived_optimized(). For regular derived tables this decision is usually taken at the prepare phase. However, in some cases for some derived tables this decision is made in mysql_derived_optimized() too.

It can be seen in the code of mysql_derived_fill() that for such a derived table it's critical to change its translation table to tune it to the fields of the temporary table used for materialization of the derived table, and this must be done after each refill of the derived table. The same actions are needed for derived tables pushed into external engines.

Approved by Oleksandr Byelkin <sanja@mariadb.com>
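An illustrative-only sketch of the statement shape involved (table and data are invented; the actual bug additionally requires the derived table to be pushed into an external engine, which plain SQL here does not capture):

  CREATE TABLE t1 (a INT);
  INSERT INTO t1 VALUES (1),(2),(3);
  PREPARE stmt FROM 'SELECT * FROM (SELECT a FROM t1) AS dt WHERE dt.a > 1';
  EXECUTE stmt;   -- first execution: correct result
  EXECUTE stmt;   -- second execution could return a wrong result set before the fix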
-
- 02 Feb, 2024 3 commits
-
-
Sergei Petrunia authored
@@ -314,7 +314,7 @@ select straight_join * from t0, part ignore index (primary)
 where p_partkey=t0.a and p_size=1;
 id select_type table type possible_keys key key_len ref rows Extra
-1 SIMPLE t0 ALL NULL NULL NULL NULL 5 Using where
+1 SIMPLE t0 ALL NULL NULL NULL NULL 6 Using where
 1 SIMPLE part eq_ref i_p_size i_p_size 9 const,dbt3_s001.t0.a 1
-
Sergei Petrunia authored
Variant#3: moved the logic out of create_key_parts_for_pseudo_indexes.

Range Analyzer (the get_mm_tree functions) can only process up to MAX_KEY=64 indexes. The problem was that calculate_cond_selectivity_for_table used it to estimate selectivities for columns, and since a table can have > MAX_KEY columns, it would invoke Range Analyzer with more than MAX_KEY "pseudo-indexes".

Fixed by making calculate_cond_selectivity_for_table() run Range Analyzer with at most MAX_KEY pseudo-indexes. If there are more columns to process, Range Analyzer will be invoked multiple times.

Also made this change:
-    param.real_keynr[0]= 0;
+    MEM_UNDEFINED(&param.real_keynr, sizeof(param.real_keynr));
Range Analyzer should have no use of real_keynr when it is run with pseudo-indexes.
-
Alexander Barkov authored
mariadb-backup: Adding a function get_os_user() to detect the OS user name if the user name is not specified, to make mariadb-backup:
- work like MariaDB client tools work
- match its --help page, which says:
    -u, --user=name  This option specifies the username used when connecting to the server, if that's not the current user.
-
- 01 Feb, 2024 2 commits
-
-
Alexander Barkov authored
MDEV-33355 Add a Galera-2-node-to-MariaDB replication MTR test cloning the slave with mariadb-backup

Replication from a 2-node Galera cluster to a regular MariaDB server. Cloning the slave using mariadb-backup.
-
Alexander Barkov authored
-
- 31 Jan, 2024 6 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Nikita Malyavin authored
According to the standard, the autoincrement column (i.e. *identity column*) should be advanced by each insert implicitly made by UPDATE/DELETE ... FOR PORTION.

This is very inconvenient in several notable cases. Consider a WITHOUT OVERLAPS key with an autoinc column:
  id int auto_increment, unique(id, p without overlaps)
An update or delete with FOR PORTION creates an expectation that id will remain unchanged in such a case.

The standard's IDENTITY resembles MariaDB's AUTO_INCREMENT, however the generation rules differ in many ways. For example, there's also a notion of an autoincrement index, which is bound to the autoincrement field.

We will define our own generation rule for the PORTION OF operations involving AUTO_INCREMENT:
* If an autoincrement index contains a WITHOUT OVERLAPS specification, then a new value should not be generated, otherwise it should.

Apart from WITHOUT OVERLAPS there is also another notable case, referred to by the reporter - a unique key that has an autoincrement column and a field from the period specification:
  id int auto_increment, unique(id, s), period for p(s, e)
For this case, no exception is made, and the autoincrement rules will proceed according to the standard (i.e. the value will be advanced on implicit inserts).
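A hedged sketch of the WITHOUT OVERLAPS case described above (column names follow the commit's own fragments; the extra v column and all data values are invented):

  CREATE TABLE t (
    id INT AUTO_INCREMENT,
    v  INT,
    s  DATE,
    e  DATE,
    PERIOD FOR p(s, e),
    UNIQUE(id, p WITHOUT OVERLAPS)
  ) ENGINE=InnoDB;
  INSERT INTO t (v, s, e) VALUES (1, '2024-01-01', '2025-01-01');
  -- Under the rule above, the rows produced by the implicit inserts keep id=1
  -- instead of drawing new AUTO_INCREMENT values.
  UPDATE t FOR PORTION OF p FROM '2024-03-01' TO '2024-06-01' SET v = 2;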
-
Sergei Golubchik authored
-
Sergei Golubchik authored
test disabled, until fixed
-
Thirunarayanan Balathandayuthapani authored
Reason:
======
The undo_space_dblwr test case fails if the first page of the undo tablespace is not flushed before restarting the server. While restarting the server, InnoDB fails to detect the first page of the undo tablespace from the doublewrite buffer.

Fix:
===
Use the "ib_log_checkpoint_avoid_hard" debug sync point to avoid a checkpoint, and make sure to flush the dirtied page before killing the server.

innodb_make_page_dirty(): Fails to set the srv_fil_make_page_dirty_debug variable.
-
- 30 Jan, 2024 3 commits
-
-
Oleksandr Byelkin authored
-
Brandon Nesterenko authored
rpl.rpl_seconds_behind_master_spike uses the DEBUG_SYNC mechanism to count how many format descriptor events (FDEs) have been executed, in order to pause on a specific relay log FDE after executing transactions. However, depending on when the IO thread is stopped, it can send an extra FDE before sending the transactions, forcing the test to pause before executing any transactions, so the table that the test then tries to read for a COUNT does not yet exist.

This patch fixes this by no longer counting FDEs, but rather by programmatically waiting until the SQL thread has executed the transaction and then automatically activating the DEBUG_SYNC point to trigger at the next relay log FDE.
-
Tuukka Pasanen authored
There is no need for the CMAKE_SYSTEM_PROCESSOR parameter in the Debian build, as dh_auto_configure should handle things better and more reliably.
-
- 26 Jan, 2024 3 commits
-
-
Brandon Nesterenko authored
A debug_sync signal could remain for the SQL thread that should have begun a wait_for upon seeing a GTID event, but would instead see the old signal and continue on without waiting. This broke an "idle" condition in SHOW SLAVE STATUS which should have automatically zeroed Seconds_Behind_Master. Instead, because the SQL thread had already processed the GTID event, it set sql_thread_caught_up to false, and thereby calculated the value of Seconds_Behind_Master, when the test expected 0.

This patch fixes this by resetting the debug_sync state before creating a new transaction which sends a GTID event to the replica.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
- 25 Jan, 2024 3 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
With this patch, a 4-component MSI version can be used, e.g. by setting the TINY_VERSION variable in CMake, or by adding a string, e.g. MYSQL_VERSION_EXTRA=-2, which sets TINY_VERSION to 2 and also changes the package name.

The 4-component MSI versions do not support MSI major upgrades, only minor ones, i.e. they do not reinstall components, just update existing ones based on versioning rules.

To support these rules, add DefaultVersion for the files that won't otherwise be versioned - headers, static and import libraries, PDBs, and text files (XML, Python and Perl scripts).

Also silence the WiX warning that MSI won't store hashes for those files anymore.
-
Yuchen Pei authored
There are two array fields in spider_share with similar names:

- share->use_sql_dbton_ids, which maps from i to the i-th dbton used by the share. Thus it should be used only when i iterates over all distinct dbtons used by the share.
- share->sql_dbton_ids, which maps from i to the dbton used by the i-th link of the share. Thus it should be used only when i iterates over all links of a share.

We correct instances where share->sql_dbton_ids should be used instead of share->use_sql_dbton_ids.
-
- 24 Jan, 2024 2 commits
-
-
Alexander Barkov authored
write_record() when performing REPLACE has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT

This patch changes the way of detecting whether this optimization can be applied if the table has long (hash based) unique (i.e. UNIQUE..USING HASH) constraints.

Problem:
The old condition did not take into account that TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization is possible when there are still some unique hash constraints in the table.

Fix:
- If the current key is a long unique, it now works as follows: UPDATE can be done if the current long unique is the last long unique, and there are no in-engine (normal) uniques.
- For in-engine uniques nothing changes, it still works as before: if the current key is an in-engine (normal) unique, UPDATE can be done if it is the last normal unique.
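A hedged sketch of the kind of table where the distinction matters (a long hash-based unique alongside an in-engine unique; names and data are invented):

  CREATE TABLE t (
    a INT,
    b VARCHAR(2000),
    UNIQUE KEY (a),              -- in-engine (normal) unique
    UNIQUE KEY (b) USING HASH    -- long unique
  ) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 'x');
  -- Conflict on the hash unique while an in-engine unique also exists:
  -- per the fix, REPLACE must take the DELETE+INSERT path here, not the
  -- UPDATE shortcut the old condition could wrongly allow.
  REPLACE INTO t VALUES (2, 'x');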
-
Alexander Barkov authored
Turning REGEXP_REPLACE into two schema-qualified functions:
- mariadb_schema.regexp_replace()
- oracle_schema.regexp_replace()

Fixing oracle_schema.regexp_replace(subj,pattern,replacement) to treat NULL in "replacement" as an empty string.

Adding new classes implementing oracle_schema.regexp_replace():
- Item_func_regexp_replace_oracle
- Create_func_regexp_replace_oracle

Adding helper methods:
- String *Item::val_str_null_to_empty(String *to)
- String *Item::val_str_null_to_empty(String *to, bool null_to_empty)
and reusing these methods in both Item_func_replace and Item_func_regexp_replace.
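A hedged sketch of the NULL-replacement difference (the result comments reflect the intended semantics described above, not captured output):

  SELECT mariadb_schema.regexp_replace('abc', 'b', NULL);  -- NULL: default MariaDB semantics
  SELECT oracle_schema.regexp_replace('abc', 'b', NULL);   -- 'ac': NULL replacement treated as ''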
-
- 23 Jan, 2024 4 commits
-
-
Daniel Black authored
When CMAKE_OSX_ARCHITECTURES isn't set we end up with "Packaging as: mariadb-10.4.33-osx10.19-x86_64" on arm64 builders. Instead of implying 64bit is x86, use CMAKE_SYSTEM_PROCESSOR to form the filename.
-
Vicențiu Ciorbaru authored
A more in-depth check to cover all used readline functions.
-
Sergei Golubchik authored
Use the original, not the truncated, field in the long unique prefix, that is, in the hash(left(field, length)) expression, because MyISAM CHECK/REPAIR in compute_vcols() moves table->field but not the prefix fields from keyparts. Also, implement Field_string::cmp_prefix() for prefix comparison of CHAR columns to work.
-
Sergei Golubchik authored
no need to use it when both arguments have the same length
-