- 14 Mar, 2024 4 commits
-
-
Sergei Petrunia authored
eliminate_item_equal() uses quick_fix_field() for the Item objects it creates. It computes some of their attributes on its own (see the update_used_tables() call) but it doesn't update not_null_tables_cache. Recompute not_null_tables_cache as well. Not computing it is currently harmless, except for producing an MSAN error when some other code propagates the wrong value of not_null_tables_cache to another item.
-
Sergei Golubchik authored
cmake bug #14362
-
Sergei Golubchik authored
-
Thirunarayanan Balathandayuthapani authored
- Suppress the "Difficult to find free blocks" warning globally to avoid many different test case failing. - Demote the error information in validate_first_page() to note. So first page can recovered from doublewrite buffer and can throw error in case the page wasn't found in doublewrite buffer.
-
- 13 Mar, 2024 4 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
when changing charset from latin1 to utf8, adjust max_length accordingly
-
Dmitry Shulga authored
Follow-up to fix compiler warnings caused by the presence of the override clause in the declaration of the method Item_param::cleanup.
-
- 12 Mar, 2024 1 commit
-
-
Dmitry Shulga authored
An UPDATE statement that is run in PS mode and uses positional parameters handles columns declared with the clause DEFAULT NULL incorrectly in case the clause DEFAULT is passed as the actual value for a positional parameter of the prepared statement. A similar issue happens in case an expression is specified in the DEFAULT clause of a table column's definition. The reason for the incorrect processing of columns declared as DEFAULT NULL is that setting the null flag for the field being updated was missed in the implementation of the method Item_param::assign_default(). The reason for the incorrect handling of an expression in the DEFAULT clause is likewise the missed saving of the field inside the implementation of the method Item_param::assign_default().
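A minimal sketch of the affected scenario (table and column names are illustrative; it relies on DEFAULT being accepted as an actual parameter value in EXECUTE ... USING, as described above):

    CREATE TABLE t1 (a INT, b INT DEFAULT NULL, c INT DEFAULT (1+1));
    INSERT INTO t1 VALUES (1, 10, 10);
    PREPARE stmt FROM 'UPDATE t1 SET b = ?, c = ? WHERE a = 1';
    -- DEFAULT is passed as the actual value for both positional parameters
    EXECUTE stmt USING DEFAULT, DEFAULT;
    -- expected result: b = NULL (DEFAULT NULL), c = 2 (DEFAULT expression)
    SELECT * FROM t1;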
-
- 11 Mar, 2024 5 commits
-
-
Marko Mäkelä authored
signal_hand(): Remove the cmake -DWITH_DBUG_TRACE=ON instrumentation. It can cause a crash on shutdown when the only other thread is waiting in wait_for_signal_thread_to_end().
-
mariadb-DebarunBanerjee authored
innodb.autoinc_debug: Correct the test case for predictable deadlock.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
In the JSON functions, the debug injection for stack overflows is inaccurate and may cause actual stack overflows. Let us simply inject stack overflow errors without actually relying on the ability of check_stack_overrun() to do so. Reviewed by: Rucha Deodhar
-
Marko Mäkelä authored
-
- 08 Mar, 2024 1 commit
-
-
Daniele Sciascia authored
Fix a scenario where `mariabackup --prepare` fails with the assertion `!m_modifications || !recv_no_log_write' in `mtr_t::commit()`. This happens if the prepare step of the backup encounters a data directory which happens to store the wsrep xid position in the TRX SYS page (this is no longer the case since 10.3.5). Since MDEV-17458, `trx_rseg_array_init()` handles this case by copying the xid position to the rollback segments before clearing the xid from the TRX SYS page. However, this step should be avoided when `trx_rseg_array_init()` is invoked from mariabackup, so the relevant code is now surrounded by the condition `srv_operation == SRV_OPERATION_NORMAL`. An additional check ensures that we are not trying to copy an xid position which has already been zeroed.
-
- 07 Mar, 2024 1 commit
-
-
mariadb-DebarunBanerjee authored
The issue here is that ha_innobase::get_auto_increment() could cause a deadlock involving the auto-increment lock and roll back the transaction implicitly. For such cases, storage engines usually call thd_mark_transaction_to_rollback() to inform the SQL engine about it, which in turn takes appropriate actions and closes the transaction. In InnoDB, we call it while converting an InnoDB error code to a MySQL one. However, since ::innobase_get_autoinc() returns void, we skip the call for error code conversion and also miss marking the transaction for rollback on a deadlock error. We eventually assert while releasing a savepoint because the transaction state is not active. Since convert_error_code_to_mysql() handles some of the generic error handling, like invoking the callback when needed, we should call that function in ha_innobase::get_auto_increment() even if we don't return the resulting MySQL error code back.
-
- 06 Mar, 2024 3 commits
-
-
Thirunarayanan Balathandayuthapani authored
Problem:
========
During upgrade, InnoDB writes the redo log for adjusting the tablespace size or tablespace flags even before the log has been upgraded to the configured format. This could lead to data inconsistency if a crash happened during the upgrade process.

Fix:
===
srv_start(): Write the redo log for the tablespace flags adjustment and the increased tablespace size only after the redo log upgrade.

log_write_low(), log_reserve_and_write_fast(): Check whether the redo log is in the physical format.
-
Thirunarayanan Balathandayuthapani authored
- During an update operation, InnoDB should avoid initializing the FTS_DOC_ID of a foreign table if the foreign table is discarded.
-
Thirunarayanan Balathandayuthapani authored
- Adjust the test case to check whether all tablespaces are encrypted by comparing the encrypted-tablespace count with the existing table count.
-
- 04 Mar, 2024 1 commit
-
-
Yuchen Pei authored
-
- 03 Mar, 2024 1 commit
-
-
Yuchen Pei authored
Like the fixes for MDEV-32753 and MDEV-33242: spider init queries create new connections that use the global sql_mode of the existing connection.
-
- 02 Mar, 2024 1 commit
-
-
Alexey Botchkov authored
A few Item_func_json_xxx::fix_length_and_dec() functions were fixed.
-
- 01 Mar, 2024 2 commits
-
-
Monty authored
The problem was that SHOW PROCESSLIST was done before the command of the default connection was cleared. Reviewer: Sergei Golubchik <serg@mariadb.org>
-
Tony Chen authored
Previously, the behavior was to error out on the first invalid option encountered. With this change, a best-effort approach is made so that all invalid options processed will be printed before exiting.

There is a caveat. The options are processed many times at varying stages of server startup because the server is not aware of all valid options immediately (e.g. plugins have to be loaded first before the server knows what the available plugin options are). So, there are some options that the server can determine are invalid "early" on, and there are some options that the server cannot determine are invalid until "later" on. For example, the server can determine that an option such as `--a` is an ambiguous option very early on, but an option such as `--this-does-not-match-any-option` cannot be labelled as invalid until the server is aware of all available options. Thus, it is possible that the server will still fail before printing out all "invalid" options. You can see this by passing `--a --obvious-invalid-option`.

Test cases were added to `mysqld_option_err.test` to validate that multiple invalid options will be displayed in the error message.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services.
-
- 27 Feb, 2024 4 commits
-
-
mariadb-DebarunBanerjee authored
The root cause is the WAL logging of a file operation when the actual operation fails afterwards. It creates a situation with a log entry for an operation that would always fail. I could simulate both the backup scenario error and the InnoDB recovery failure by exploiting this weakness. We follow WAL for the file rename operation, and once it is logged the operation must eventually complete successfully, or it is a major catastrophe. Right now, we fail for the rename and handle it as a normal error, and that is the problem. I created a patch to address a RENAME operation to a non-existing schema, where the destination schema directory is missing. The patch checks for the missing schema before logging, in an attempt to avoid the failure after the WAL log is written/flushed. I also checked that the schema cannot be dropped and that there cannot be any race with another rename to the same file; this is protected by the MDL lock in SQL today. The patch should be a good improvement over the current situation and solves the issue at hand.
-
Julius Goryavsky authored
Correction to ensure compatibility with the updated wsrep-lib library.
-
Julius Goryavsky authored
-
Thirunarayanan Balathandayuthapani authored
Problem:
========
- InnoDB reads the length of the variable-length field wrongly while applying the modification log of an instant table.

Solution:
========
rec_init_offsets_comp_ordinary(): For the temporary instant file record, InnoDB should read the length of the variable-length field from the record itself.
-
- 26 Feb, 2024 2 commits
-
-
Igor Babaev authored
If a query with GROUP_CONCAT is executed, the server reports a warning every time the length of the result of this function exceeds the set value of the system variable group_concat_max_len. This bug led to a set of warnings from the second execution of the prepared statement that did not coincide with the set from the first execution, if the executed query was a grouping query over a join of tables using the GROUP_CONCAT function and the join cache was not allowed to be employed. The discrepancy between the sets of warnings was due to the lack of cleanup of Item_func_group_concat::row_count after execution of the query. Approved by Oleksandr Byelkin <sanja@mariadb.com>
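A simplified sketch of the kind of statement affected (the actual bug scenario involves a join with the join cache disabled; names here are illustrative):

    SET group_concat_max_len = 4;
    CREATE TABLE t1 (a INT, b VARCHAR(16));
    INSERT INTO t1 VALUES (1,'aaaaa'),(1,'bbbbb'),(2,'ccccc');
    PREPARE stmt FROM 'SELECT a, GROUP_CONCAT(b) FROM t1 GROUP BY a';
    EXECUTE stmt;   -- truncation warnings are reported here
    EXECUTE stmt;   -- before the fix, the set of warnings could differ here
    SHOW WARNINGS;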
-
Alexander Barkov authored
When sending the server default collation ID to the client in the handshake packet, translate a 2-byte collation ID to the ID of the default collation for the character set.
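For illustration, a hedged sketch (it assumes a collation whose ID does not fit into one byte, e.g. one of the utf8mb4 uca1400 collations available in recent MariaDB versions):

    -- assumption: utf8mb4_uca1400_ai_ci has a collation ID greater than 255
    SET GLOBAL character_set_server = utf8mb4;
    SET GLOBAL collation_server = 'utf8mb4_uca1400_ai_ci';
    -- new client connections now receive the ID of the character set's default
    -- collation in the handshake packet, since the actual 2-byte collation ID
    -- cannot be represented there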
-
- 23 Feb, 2024 1 commit
-
-
Alexander Barkov authored
Functions extracting non-negative datetime components:
- YEAR(dt), EXTRACT(YEAR FROM dt)
- QUARTER(dt), EXTRACT(QUARTER FROM dt)
- MONTH(dt), EXTRACT(MONTH FROM dt)
- WEEK(dt), EXTRACT(WEEK FROM dt)
- HOUR(dt)
- MINUTE(dt)
- SECOND(dt)
- MICROSECOND(dt)
- DAYOFYEAR(dt)
- EXTRACT(YEAR_MONTH FROM dt)
did not set their max_length properly, so in the DECIMAL context they created a too small DECIMAL column, which led to the 'Out of range value' error.

The problem is that most of these functions historically returned the signed INT data type. There were two simple ways to fix these functions:
1. Add +1 to max_length. But this would also change their size in the string context and create too long VARCHAR columns, with +1 excessive size.
2. Preserve max_length, but change the data type from INT to INT UNSIGNED. But this would break backward compatibility. Also, using UNSIGNED is generally not desirable; it's better to stay with signed when possible.

This fix implements another solution, which makes all these functions work well in all contexts: int, decimal, string.

Fix details:
- Adding a new special class Type_handler_long_ge0 - the data type handler for expressions which:
  * should look like normal signed INT
  * but which are known not to return negative values
  Expressions handled by Type_handler_long_ge0 store in Item::max_length only the number of digits, without adding +1 for the sign.
- Fixing Item_extract to use Type_handler_long_ge0 for non-negative datetime components: YEAR, YEAR_MONTH, QUARTER, MONTH, WEEK.
- Adding a new abstract class Item_long_ge0_func, for functions returning non-negative datetime components. Item_long_ge0_func uses Type_handler_long_ge0 as the type handler. The class hierarchy now looks as follows:
    Item_long_ge0_func
      Item_long_func_date_field
        Item_func_to_days
        Item_func_dayofmonth
        Item_func_dayofyear
        Item_func_quarter
        Item_func_year
      Item_long_func_time_field
        Item_func_hour
        Item_func_minute
        Item_func_second
        Item_func_microsecond
- Cleanup: EXTRACT(QUARTER FROM dt) created an excessive VARCHAR column in the string context. Changing its length from 2 to 1.
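A hedged sketch of the DECIMAL context in question (assuming that a hybrid function such as COALESCE with a decimal argument forces the DECIMAL data type):

    -- the hybrid function derives a DECIMAL type from YEAR()'s max_length
    CREATE OR REPLACE TABLE t1 AS
      SELECT COALESCE(YEAR(DATE'2024-02-23'), 0.0) AS y;
    -- before the fix the derived DECIMAL column could be one digit too narrow
    -- for some of the listed functions, leading to 'Out of range value' errors;
    -- after the fix the column is wide enough in all contexts
    SHOW CREATE TABLE t1;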
-
- 21 Feb, 2024 2 commits
-
-
Igor Babaev authored
As a result of this bug the second execution of a prepared statement created for a select from a materialized view could return a wrong result set if
- the specification of the view used a left join,
- an inner table of the left join was a mergeable derived table, and
- the derived table contained a constant column.
The problem appeared because the 'maybe_null' flag of the wrapper Item_direct_view_ref constructed for the constant field of the mergeable derived table was not set to 'true' on the second execution of the prepared statement. The patch always sets this flag properly when calling the function Item_direct_view_ref::set_null_ref_table(). The latter is invoked in the Item_direct_view_ref constructor if it is created for some reference to a constant column belonging to a mergeable derived table. Approved by Oleksandr Byelkin <sanja@mariadb.com>
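A sketch of the shape of the failing scenario (table, view and column names are illustrative; ALGORITHM=TEMPTABLE is used here only as an assumption to force materialization of the view):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    INSERT INTO t1 VALUES (1),(2);
    INSERT INTO t2 VALUES (1);
    CREATE ALGORITHM=TEMPTABLE VIEW v1 AS
      SELECT t1.a, dt.c
      FROM t1 LEFT JOIN (SELECT b, 1 AS c FROM t2) AS dt ON t1.a = dt.b;
    PREPARE stmt FROM 'SELECT * FROM v1';
    EXECUTE stmt;   -- c is NULL for the non-matching row of t1
    EXECUTE stmt;   -- before the fix the second execution could lose that NULL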
-
Yuchen Pei authored
-
- 20 Feb, 2024 5 commits
-
-
Brandon Nesterenko authored
In an addition to the test rpl.rpl_parallel_sbm added by MDEV-32265, the test uses sleep statements alone to test Seconds_Behind_Master with delayed replication. On slow machines, the test can pass the intended MASTER_DELAY duration and Seconds_Behind_Master can become 0, when the test expects the transaction to still be actively in a delaying state. This can be consistently reproduced by adding a sleep statement before the call to --let = query_get_value(SHOW SLAVE STATUS, Seconds_Behind_Master, 1) to sleep past the delay end point. This patch fixes this by locking the table which the delayed transaction targets, so Seconds_Behind_Master cannot be updated before the test reads it for validation.
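A sketch of the idea behind the fix (the table name is illustrative; the actual change lives in the mysqltest script):

    -- executed on the slave before reading Seconds_Behind_Master:
    LOCK TABLES t1 WRITE;    -- the delayed transaction targets t1 and must wait
    SHOW SLAVE STATUS;       -- Seconds_Behind_Master is read while the delay still holds
    UNLOCK TABLES;           -- let the delayed transaction complete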
-
Oleksandr Byelkin authored
-
Monty authored
-
Monty authored
This removes the error: "Failed to load slave replication state from table mysql.gtid_slave_pos: 1017: Can't find file: './mysql/' (errno: 2 "No such file or directory")".
-
Andrew Daugherity authored
Should be h2 rather than h1, and GitHub requires an intervening space.
-
- 19 Feb, 2024 1 commit
-
-
Yuchen Pei authored
similar to MDEV-30981
-
- 18 Feb, 2024 1 commit
-
-
Vladislav Vaintroub authored
- Use "new" math library WOLFSSL_SP_MATH_ALL, which is now promoted by WolfSSL for faster performance. "fastmath" we used previously is going to be deprecated, it was not really always fast. - Optimize common RSA math operations with WOLFSSL_HAVE_SP_RSA - Incorporate assembly optimizations, currently for Intel x64 only This patch significantly reduces execution time for SSL tests like main.ssl-big and main.ssl_connect, which now run 2 to 3 times faster. Notably, when this patch is applied to 11.4, server startup in with ephemeral certificates becomes approximately 10x faster due to optimized wolfSSL_EVP_PKEY_keygen(). Additionally, refactored WolfSSL by removing old workarounds and consolidating wolfssl and wolfcrypt into a single library wolfssl, just like it was done in WolfSSL's own CMake.
-