- 16 Jul, 2024 2 commits
-
-
Sergei Petrunia authored
- Fix view-protocol: long expressions in SELECT list should have "expr AS column_name".
- Also, moved the test from subselect*test to suite/json/t/json_table.test.
-
Yuchen Pei authored
-
- 15 Jul, 2024 5 commits
-
-
Daniel Black authored
AddressSanitizer knows how to detect stack overrun, so there's no point in us doing it ourselves. As evidenced by the perfschema tests, there were significant test failures because this function failed under ASAN (MDEV-33210). Also, since clang-16, we cannot assume much about how local variables are allocated on the stack (MDEV-31605).

Disabling the check was Sanja's idea.
-
Anel Husakovic authored
- Remove the single, trivial call of function MYSQL_BIN_LOG::init() and remove the function
- Remove the single jump to label end2 and use the code inline instead
- Remove label end2
-
Oleg Smirnov authored
MDEV-34490 get_copy() and build_clone() may return an instance of an ancestor class instead of a copy/clone

The `Item` class methods `get_copy()`, `build_clone()`, and `clone_item()` face an issue where they may be defined in a descendant class (e.g., `Item_func`) but not in a further descendant (e.g., `Item_func_child`). This can lead to scenarios where `build_clone()`, when operating on an instance of `Item_func_child` through a pointer to the base class (`Item`), returns an instance of `Item_func` instead of `Item_func_child`.

Since this limitation cannot be resolved at compile time, this commit introduces runtime type checks for the copy/clone operations. A debug assertion will now trigger in case of a type mismatch.

`get_copy()`, `build_clone()`, and `clone_item()` are no longer virtual; instead, virtual `do_get_copy()`, `do_build_clone()`, and `do_clone_item()` have been added to the protected section of the `Item` class. Additionally, const qualifiers have been added to certain methods to enhance code reliability.

Reviewer: Oleksandr Byelkin <sanja@mariadb.com>
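Below is a minimal, standalone C++ sketch of the non-virtual-interface pattern with a runtime type check described above; the class names echo the commit's wording, but the bodies and the assertion placement are illustrative, not the actual MariaDB implementation.

```cpp
#include <cassert>
#include <typeinfo>

struct Item {
  virtual ~Item() = default;

  // Public, non-virtual entry point: forwards to the virtual hook and
  // verifies at runtime that the copy has the same dynamic type.
  Item *get_copy() const {
    Item *copy = do_get_copy();
    // Triggers when a descendant forgot to override do_get_copy().
    assert(copy == nullptr || typeid(*copy) == typeid(*this));
    return copy;
  }

protected:
  virtual Item *do_get_copy() const = 0;
};

struct Item_func : Item {
protected:
  Item *do_get_copy() const override { return new Item_func(*this); }
};

// BUG: does not override do_get_copy(), so copying an Item_func_child
// yields an Item_func -- the assertion above catches the mismatch.
struct Item_func_child : Item_func {
  int extra_state = 42;
};

int main() {
  Item_func_child child;
  Item *base = &child;
  Item *copy = base->get_copy();   // asserts in a debug build
  delete copy;
}
```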
-
Thirunarayanan Balathandayuthapani authored
MDEV-34474 InnoDB: Failing assertion: stat_n_leaf_pages > 0 in ha_innobase::estimate_rows_upper_bound

- Fixing the compilation issue.
-
Yuchen Pei authored
This fixes a valgrind failure where bulk_size is used before being initialised in ha_spider::end_bulk_insert().
-
- 14 Jul, 2024 1 commit
-
-
Ian Gilfillan authored
-
- 13 Jul, 2024 1 commit
-
-
Julius Goryavsky authored
Fixed a sorting-order condition that, in its previous form, could lead to an incorrect pattern being formed for comparing strings.
-
- 12 Jul, 2024 3 commits
-
-
Thirunarayanan Balathandayuthapani authored
MDEV-34542 Assertion `lock_trx_has_sys_table_locks(trx) == __null' failed in void row_mysql_unfreeze_data_dictionary(trx_t*)

- During XA PREPARE, InnoDB releases the non-exclusive locks, but it fails to remove the non-exclusive table lock from the transaction's table locks. In the meantime, the main thread evicts the table from the LRU cache. While rolling back the XA transaction, InnoDB iterates through the table locks to check whether it holds a lock on any system tables, and wrongly assumes the evicted table is a system table since its table id is 0.

Fix:
===
During XA PREPARE, remove the table locks of the transaction while releasing the non-exclusive locks.
-
Thirunarayanan Balathandayuthapani authored
Problem:
========
- During shutdown, InnoDB tries to free the asynchronous I/O slots and hangs. The reason is that InnoDB disables asynchronous I/O before waiting for pending asynchronous I/O to finish.

buf_load(): InnoDB aborts the buffer pool load due to the user-requested shutdown and doesn't wait for the asynchronous read to complete. This could lead to a debug assertion in buf_flush_buffer_pool() during shutdown.

Fix:
===
os_aio_free(): Should wait for all read_slots and write_slots to finish before disabling the AIO.

buf_load(): Should wait for the pending read request to complete even though it was aborted.
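A minimal, standalone sketch of the shutdown ordering the fix enforces, using a simple pending-operation counter instead of InnoDB's actual read_slots/write_slots arrays; all names here are illustrative.

```cpp
#include <condition_variable>
#include <mutex>

class AsyncIo {
  std::mutex m;
  std::condition_variable cv;
  int pending = 0;
  bool enabled = true;

public:
  bool submit() {                     // called by readers/writers
    std::lock_guard<std::mutex> lk(m);
    if (!enabled) return false;       // no new I/O once disabled
    ++pending;
    return true;
  }

  void complete() {                   // called when an operation finishes
    std::lock_guard<std::mutex> lk(m);
    --pending;
    cv.notify_all();
  }

  // The fix: first drain every in-flight operation, only then disable.
  // Disabling before the wait is what caused the shutdown hang.
  void shutdown() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [this] { return pending == 0; });
    enabled = false;
  }
};

int main() {
  AsyncIo aio;
  if (aio.submit()) aio.complete();
  aio.shutdown();                     // returns only once nothing is pending
}
```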
-
Daniel Black authored
Simplify in an attempt to avoid:

    mysqltest: At line 275: File already exist:

on the write_file lines. Using write_line, as that's what a lot of other tests do for writing small bits to an expect file.

Review thanks to Vladislav Vaintroub
-
- 11 Jul, 2024 3 commits
-
-
Oleg Smirnov authored
MDEV-34041 Display additional information for materialized subqueries in EXPLAIN/ANALYZE FORMAT=JSON

This commit adds the "materialization" block to the output of EXPLAIN/ANALYZE FORMAT=JSON when materialized subqueries are involved in processing. In the case of ANALYZE, additional runtime information is displayed, such as:
- the chosen materialization strategy
- the number of partial match/index lookup loops
- the sizes of partial match buffers
-
Galina Shalygina authored
from HAVING

The bug is caused by refixing of the constant subquery during pushdown from HAVING into WHERE optimization.

Similarly to MDEV-29363, in the problematic query two references of the constant subquery are used. After the pushdown, one of the references of the subquery is pushed into the WHERE-clause and the second one remains as part of the HAVING-clause. Before this fix, the constant subquery reference that was going to be pushed into WHERE was cleaned up and fixed. That caused changes to the subquery itself and, therefore, to the second reference that remained in HAVING. These changes caused a crash.

To fix this problem, all constant objects that are going to be pushed into WHERE are now marked with an IMMUTABLE_FL flag. Objects marked with this flag are not cleaned up or fixed in the pushdown optimization.

Approved by Igor Babaev <igor@mariadb.com>
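A minimal sketch of the flag-based approach; IMMUTABLE_FL is the flag name taken from the commit message, but the surrounding item type and cleanup logic are invented for illustration.

```cpp
#include <cstdint>

// Illustrative item flag bits (not the real server definitions).
static constexpr uint32_t IMMUTABLE_FL = 1u << 0;

struct PushdownItem {
  uint32_t flags = 0;
  bool cleaned_up = false;

  void mark_immutable() { flags |= IMMUTABLE_FL; }

  // Cleanup/refixing is skipped for items shared with the HAVING clause,
  // so the reference left in HAVING is not disturbed by the pushdown.
  void cleanup_for_pushdown() {
    if (flags & IMMUTABLE_FL)
      return;
    cleaned_up = true;   // stands in for the real cleanup + re-fixing
  }
};

int main() {
  PushdownItem shared_subquery;      // referenced from both WHERE and HAVING
  shared_subquery.mark_immutable();  // marked before pushdown into WHERE
  shared_subquery.cleanup_for_pushdown();
  return shared_subquery.cleaned_up ? 1 : 0;   // 0: left untouched, as intended
}
```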
-
Daniel Black authored
The version test in not_valgrind_build.inc was broken, as in BB the sp-no-valgrind.test was executed anyway. The implication that it wouldn't work under ASAN was also incorrect, as ASAN tests show it running fine there. Correct sp-no-valgrind.test to use not_valgrind.inc.
-
- 10 Jul, 2024 6 commits
-
-
Dave Gosselin authored
Improve performance of queries like

    SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);

by, in this example, replacing the WHERE clause with field = 4 in the case of ref access.

The rewrite is done during fix_fields, and we disambiguate this case from other cases of NAME_CONST by inspecting where we are in parsing. We rely on THD::where to accomplish this. To improve performance there, we change the type of THD::where to be an enumeration, so we can avoid string comparisons during Item_name_const::fix_fields. Consequently, this patch also changes all usages of THD::where to conform likewise.
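A minimal sketch of the string-to-enumeration change described above; the enumerator names and helper functions are hypothetical, not the actual THD members added by the patch.

```cpp
#include <cstring>

// Before: the parser location is tracked as a string and compared with
// strcmp() on every Item_name_const::fix_fields() call.
struct THD_old {
  const char *where = "field list";
};

// After: an enumeration, so disambiguating the parsing context is a cheap
// integer comparison.  These enumerator names are illustrative only.
enum class THD_where { DEFAULT, FIELD_LIST, WHERE_CLAUSE, ON_CLAUSE };

struct THD_new {
  THD_where where = THD_where::DEFAULT;
};

bool may_rewrite_name_const(const THD_old &thd) {
  return std::strcmp(thd.where, "where clause") == 0;   // string compare
}

bool may_rewrite_name_const(const THD_new &thd) {
  return thd.where == THD_where::WHERE_CLAUSE;          // integer compare
}

int main() {
  THD_old a; a.where = "where clause";
  THD_new b; b.where = THD_where::WHERE_CLAUSE;
  return (may_rewrite_name_const(a) && may_rewrite_name_const(b)) ? 0 : 1;
}
```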
-
Brandon Nesterenko authored
There are two problems.

First, replication fails when XA transactions are used where the slave has replicate_do_db set and the client has touched a different database when running DML such as inserts. This is because XA commands are not treated as keywords, and are thereby not exempt from the replication filter. The effect of this is that during an XA transaction, if its logged “use db” from the master is filtered out by the replication filter, then XA END will be ignored, yet its corresponding XA PREPARE will be executed in an invalid state, thereby breaking replication.

Second, if the slave replicates an XA transaction which results in an empty transaction, the XA START through XA PREPARE first phase of the transaction won’t be binlogged, yet the XA COMMIT will be binlogged. This will break replication in chain configurations.

The first problem is fixed by treating XA commands in Query_log_event as keywords, thus allowing them to bypass the replication filter. Note that Query_log_event::is_trans_keyword() is changed to accept a new parameter to define its mode, to either check for XA commands or regular transaction commands, but not both. In addition, mysqlbinlog is adapted to use this mode so its --database filter does not remove XA commands from its output.

The second problem is fixed by overwriting the XA state in the XID cache to be XA_ROLLBACK_ONLY, so at commit time, the server knows to roll back the transaction and skip its binlogging. If the XID cache is cleared before an XA transaction receives its completion command (e.g. on server shutdown), then before reporting ER_XAER_NOTA when the completion command is executed, the filter is first checked to see if the database is ignored, and if so, the error is ignored.

Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
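A minimal sketch of a mode-parameterised keyword check in the spirit of the is_trans_keyword() change; the enum, function signature and keyword lists are illustrative and do not mirror the real Query_log_event API.

```cpp
#include <cstring>
#include <string>

// Illustrative mode selector: check either XA commands or regular
// transaction commands, but not both in one call.
enum class Trans_keyword_mode { REGULAR_TRANS, XA_COMMANDS };

bool is_trans_keyword(const std::string &query, Trans_keyword_mode mode) {
  auto starts_with = [&](const char *kw) {
    return query.compare(0, std::strlen(kw), kw) == 0;
  };
  if (mode == Trans_keyword_mode::XA_COMMANDS)
    return starts_with("XA START") || starts_with("XA END") ||
           starts_with("XA PREPARE") || starts_with("XA COMMIT") ||
           starts_with("XA ROLLBACK");
  return starts_with("BEGIN") || starts_with("COMMIT") ||
         starts_with("ROLLBACK") || starts_with("SAVEPOINT");
}

// A replication filter can then exempt both classes of keywords.
bool bypasses_replication_filter(const std::string &query) {
  return is_trans_keyword(query, Trans_keyword_mode::XA_COMMANDS) ||
         is_trans_keyword(query, Trans_keyword_mode::REGULAR_TRANS);
}

int main() {
  return bypasses_replication_filter("XA PREPARE 'xid1'") ? 0 : 1;
}
```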
-
Vladislav Vaintroub authored
-
Thirunarayanan Balathandayuthapani authored
-
Vladislav Vaintroub authored
The server does not log errors after startup when it is started without the --console parameter and not as a service. This issue arises due to an undocumented behavior of FreeConsole() in Windows when only a single process (mariadbd/mysqld) is attached to it, causing the window to close.

In this case stderr is redirected to a file before FreeConsole() is called. Procmon shows FreeConsole() closing the file handle; subsequent writes to stderr fail with ERROR_INVALID_HANDLE because WriteFile() cannot operate on the closed handle. This results in losing all messages after startup, including warnings, errors, notes, and crash reports. Additionally, some users reported stderr being redirected to multi-master.info and failing at startup, but this could not be reproduced here.

The workaround involves calling FreeConsole() right before the redirection of stdout/stderr. This fix has been tested with XAMPP and via cmd.exe using "start mysqld". Automated testing using MTR is challenging for this case.

The fix is only applicable to version 10.5. In later versions, the FreeConsole() call has been removed.
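A minimal Windows-only sketch of the ordering the workaround enforces: detach from the console first, then redirect stderr, so the redirected handle is not the one FreeConsole() closes. The log file name and helper function are placeholders, not the server's actual startup code.

```cpp
#ifdef _WIN32
#include <windows.h>
#include <cstdio>

static void detach_console_and_redirect_stderr(const char *err_log_path) {
  FreeConsole();                            // must happen before the redirection
  std::freopen(err_log_path, "a", stderr);  // stderr now points at the error log
}

int main() {
  detach_console_and_redirect_stderr("mariadb.err");   // placeholder path
  std::fprintf(stderr, "this line reaches the log file\n");
}
#else
int main() {}
#endif
```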
-
Thirunarayanan Balathandayuthapani authored
MDEV-34474 InnoDB: Failing assertion: stat_n_leaf_pages > 0 in ha_innobase::estimate_rows_upper_bound

- The columns stat_value and sample_size in the mysql.innodb_index_stats table are declared as BIGINT UNSIGNED without any check constraint. A user can manually update the value of stat_value and sample_size to zero. InnoDB then aborts the server while reading the statistics information, because InnoDB expects at least one leaf page to exist for the index.

- To fix this issue, InnoDB should interpret the values of stat_n_leaf_pages and stat_index_size in innodb_index_stats, and stat_clustered_index_size and stat_sum_of_other_index_sizes in innodb_table_stats, as valid even when the user has set them to 0.
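A minimal sketch of one way to tolerate a user-supplied zero: clamp the persisted statistic to a usable minimum when reading it. The clamp-to-one choice is illustrative; the actual fix may interpret the zero values differently.

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative only: read a persisted statistic that a user may have set
// to 0 and return a usable minimum instead of asserting later.
static uint64_t read_persistent_stat(uint64_t stored_value) {
  return std::max<uint64_t>(stored_value, 1);
}

int main() {
  return read_persistent_stat(0) == 1 ? 0 : 1;   // a stored 0 no longer trips asserts
}
```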
-
- 09 Jul, 2024 1 commit
-
-
Julius Goryavsky authored
-
- 08 Jul, 2024 11 commits
-
-
Julius Goryavsky authored
-
Julius Goryavsky authored
The test has been made more stable according to the recommendations of the Codership team.
-
sjaakola authored
The problem was in the error message suppression, which did not match the actual warning messages due to bad quoting. Changed the warning message suppressions to a simpler format.

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Denis Protivensky authored
DML transactions on FK-child tables also get table locks on FK-parent tables. If there is a DML transaction holding such a lock, and a TOI transaction starts, the latter BF-aborts the former and puts itself into a waiting state. If at this moment another DML transaction on the FK-child table starts, it doesn't check that the transaction waiting on a parent table lock is TOI, and it erroneously BF-aborts the waiting TOI transaction.

The fix: don't roll back a high-priority transaction waiting on a lock in InnoDB; instead, roll back the incoming DML transaction.

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
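A minimal sketch of the victim-selection rule the fix introduces; the transaction type and fields are invented for illustration and are not the real InnoDB/Galera lock-wait structures.

```cpp
// Illustrative transaction descriptor.
struct Trx {
  bool is_high_priority;   // TOI / BF transaction
  bool is_waiting_for_lock;
};

// Returns the transaction that should be rolled back when `incoming`
// (a local DML) conflicts with `other` on a table lock.
const Trx *victim_of_conflict(const Trx &incoming, const Trx &other) {
  // Never BF-abort a high-priority (TOI) transaction that is already
  // waiting for the lock: the incoming DML must give way instead.
  if (other.is_high_priority && other.is_waiting_for_lock)
    return &incoming;
  return &other;
}

int main() {
  Trx toi{true, true}, dml{false, false};
  return victim_of_conflict(dml, toi) == &dml ? 0 : 1;   // the DML gives way
}
```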
-
Rex authored
st_select_lex::update_correlated_cache() fails to take JSON_TABLE functions in subqueries into account. Reviewed by Sergei Petrunia (sergey@mariadb.com)
-
Brandon Nesterenko authored
The IO thread can report error code 2013 into the error log when it is stopped during the initial connection process to the primary, as well as when trying to read an event. However, because the IO thread is being stopped, its connection to the primary is force-killed by the signaling thread (see THD::awake_no_mutex()), and therefore these connection errors should be ignored.

Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
-
Alexander Barkov authored
-
Alexander Barkov authored
my_like_range*() can create longer keys than Field::char_length(). This caused warnings during print_range(). Fix: Suppressing warnings in print_range().
-
Anson Chung authored
Line numbers had to be removed from the ignorelists in order to be diffed against, since locations of the same findings can differ across runs. Therefore, preprocessing has to be done on the CI findings so that they can be compared to the ignorelist and new findings can be output.

However, since line numbers have to be removed, a situation occurs where it is difficult to reference the location of findings in the code given the output of the CI job. To lessen this pain, change the cppcheck template to include code snippets, which make it easier to reference where in the code a finding occurs, even in the absence of line numbers. Ignorelisting works as before, since the location of a finding may change but not the code it refers to.

Furthermore, due to the innate difficulty in maintaining ignorelists across branches and triaging new findings, allow failure so as not to have constantly failing pipelines as a result of new findings that have not been addressed yet.

Lastly, update the SAST ignorelists to match the newly refactored cppcheck job and the current state of the codebase.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Anson Chung authored
Rectify cases of mismatched brackets and address possible cases of division by zero by checking if the denominator is zero before dividing. No functional changes were made.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Marko Mäkelä authored
crc32_avx512(): Explicitly cast ssize_t(size) to make it clear that we are indeed applying a negative offset to a pointer.
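A small standalone illustration of why the signed cast matters when applying a negative offset to an end pointer; the byte-summing loop stands in for the real crc32_avx512() code.

```cpp
#include <cstddef>
#include <cstdint>

// Process a buffer by pointing `end` one past the last byte and indexing
// backwards with a negative offset -- the idiom the explicit cast clarifies.
uint64_t sum_bytes(const unsigned char *buf, size_t size) {
  const unsigned char *end = buf + size;
  uint64_t sum = 0;
  // Without the cast, -size is a huge unsigned value; with a signed
  // ptrdiff_t the negative offset is what it appears to be.
  for (std::ptrdiff_t i = -static_cast<std::ptrdiff_t>(size); i != 0; i++)
    sum += end[i];
  return sum;
}

int main() {
  unsigned char data[4] = {1, 2, 3, 4};
  return sum_bytes(data, sizeof data) == 10 ? 0 : 1;
}
```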
-
- 07 Jul, 2024 1 commit
-
-
Monty authored
The issue was that when repairing an Aria table of row format PAGE where the data file was bigger than 4G, the data file length was cut short because of wrong parameters to MY_ALIGN(). The effect was that ALTER TABLE, OPTIMIZE TABLE or REPAIR TABLE would fail on these tables, possibly corrupting them.

The MDEV also exposed a bug where the error state was not propagated properly to the upper level if the number of rows in the table changed.
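A standalone illustration of how a length above 4G can be cut short, assuming MY_ALIGN() is the usual (x + a - 1) & ~(a - 1) macro; the 32-bit truncation shown here is a plausible way wrong parameters bite, not a reproduction of the exact Aria call site.

```cpp
#include <cstdint>
#include <cstdio>

// The usual alignment macro shape: the result type follows the argument types.
#define MY_ALIGN(A, L) (((A) + (L) - 1) & ~((L) - 1))

int main() {
  unsigned long long file_length = 5ULL * 1024 * 1024 * 1024;   // ~5 GiB

  // Correct: 64-bit arithmetic throughout.
  unsigned long long ok = MY_ALIGN(file_length, 8192ULL);

  // Wrong parameter type: forcing the value through a 32-bit variable
  // silently drops the high bits, so the aligned length is cut short.
  uint32_t truncated = (uint32_t)file_length;
  unsigned long long bad = MY_ALIGN((unsigned long long)truncated, 8192ULL);

  std::printf("ok=%llu bad=%llu\n", ok, bad);
  return bad < ok ? 0 : 1;   // bad is far smaller than the real length
}
```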
-
- 06 Jul, 2024 1 commit
-
-
Brandon Nesterenko authored
The current semi-sync binlog fail-over recovery process uses rpl_semi_sync_slave_enabled==TRUE as its condition to truncate a primary server’s binlog, as it is anticipating the server re-joining a replication topology as a replica. However, for servers configured with both rpl_semi_sync_master_enabled=1 and rpl_semi_sync_slave_enabled=1, if a primary is just re-started (i.e. retaining its role as master), it can truncate its binlog to drop transactions which its replica(s) have already received and executed. If this happens, when the replica reconnects, its gtid_slave_pos can be ahead of the recovered primary’s gtid_binlog_pos, resulting in an error state where the replica’s state is ahead of the primary’s.

This patch changes the condition for semi-sync recovery to truncate the binlog to instead use the configuration variable --init-rpl-role, when set to SLAVE. This allows both rpl_semi_sync_master_enabled and rpl_semi_sync_slave_enabled to be set for a primary that is restarted, and no transactions will be lost, so long as --init-rpl-role is not set to SLAVE.

Reviewed By:
============
Sergei Golubchik <serg@mariadb.com>
-
- 05 Jul, 2024 3 commits
-
-
Brandon Nesterenko authored
The special logic used by the memory storage engine to keep slaves in sync with the master on a restart can break replication. In particular, after a restart, the master writes DELETE statements in the binlog for each MEMORY-based table so the slave can empty its data. If the DELETE is not executable, e.g. due to invalid triggers, the slave will error and fail, whereas the master will never see the problem.

Instead of DELETE statements, use TRUNCATE to keep slaves in sync with the master, thereby bypassing triggers.

Reviewed By:
===========
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
-
Thirunarayanan Balathandayuthapani authored
In read-only mode, InnoDB doesn't allow a checkpoint to happen. So InnoDB should issue a warning when it is asked to force a checkpoint while innodb_read_only = 1 or innodb_force_recovery = 6.
-
Hugo Wen authored
MariaDB supports a "wait-free concurrent allocator based on pinning addresses". In `lf_pinbox_real_free()` it tries to sort the pinned addresses for better performance, so that a binary search can be used during the "real free". `alloca()` was used to allocate stack memory and copy the addresses.

To prevent a stack overflow when allocating the stack memory, the function checks if there's enough stack space. However, the available stack size was calculated inaccurately, which eventually caused a database crash due to stack overflow. The crash was seen on MariaDB 10.6.11, but the same code defect exists on all MariaDB versions.

A similar issue happened previously, and the fix in fc2c1e43 was to add an `ALLOCA_SAFETY_MARGIN` of 8192 bytes. However, that safety margin is not enough during high-connection workloads.

MySQL also had a similar issue, and the fix https://github.com/mysql/mysql-server/commit/b086fda was to remove the use of `alloca` and replace the qsort approach with a linear scan through all pointers (pins) owned by each thread. This commit is mostly the same, as it is the only way to solve this issue:
1. Frame sizes on different architectures can be different.
2. The number of active (non-null) pinned addresses varies, so the frame size for the recursive sorting function `msort_with_tmp` is also hard to predict.
3. Allocating big memory blocks on the stack doesn't seem to be a very good practice.

For further details see the mentioned commit in MySQL and the inline comments.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
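A minimal sketch contrasting the removed alloca()+qsort() idea with the linear scan over per-thread pins; the structures are illustrative stand-ins, not the real lf_pinbox internals.

```cpp
// Illustrative stand-ins for the lock-free pinbox structures: each thread
// owns a small fixed array of pinned addresses, most of which are null.
constexpr int PINS_PER_THREAD = 4;

struct ThreadPins {
  void *pin[PINS_PER_THREAD];
  ThreadPins *next;          // next thread's pins
};

// Linear scan replacing the old "copy all pins with alloca() and qsort(),
// then binary-search" approach: no stack allocation that scales with the
// number of connections, at the cost of scanning every pin per address.
bool address_is_pinned(const ThreadPins *head, const void *addr) {
  for (const ThreadPins *t = head; t != nullptr; t = t->next)
    for (int i = 0; i < PINS_PER_THREAD; i++)
      if (t->pin[i] == addr)
        return true;
  return false;
}

int main() {
  void *blk = nullptr;
  ThreadPins t{{&blk, nullptr, nullptr, nullptr}, nullptr};
  return address_is_pinned(&t, &blk) ? 0 : 1;
}
```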
-
- 04 Jul, 2024 2 commits
-
-
Sergei Petrunia authored
-
Sergei Petrunia authored
The symptoms were: take a server with no activity and a table that's not in the buffer pool. Run a query that reads the whole table and observe that r_engine_stats.pages_read_count shows about 2% of the table was read. Who reads the rest?

The cause was that page prefetching done inside InnoDB was not counted.

This counts page prefetch requests made in buf_read_ahead_random() and buf_read_ahead_linear() and makes them visible in:
- ANALYZE: r_engine_stats.pages_prefetch_read_count
- Slow Query Log: Pages_prefetched:

This patch intentionally doesn't attempt to count the time to read the prefetched pages:
* there's no obvious place where one can do it
* prefetch reads may be done in parallel (right?), it is not clear how to count the time in this case.
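A minimal sketch of counting prefetch requests in the read-ahead paths; the struct and function names are illustrative, not the actual handler statistics code.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative per-handler statistics block; the real counters live in the
// engine stats that ANALYZE and the slow query log report.
struct engine_stats {
  std::atomic<uint64_t> pages_read_count{0};
  std::atomic<uint64_t> pages_prefetch_read_count{0};
};

// Called from the read-ahead paths (buf_read_ahead_random/linear in the
// real code) for every page whose read was issued by prefetching.
inline void count_prefetch_request(engine_stats &stats, uint64_t pages) {
  stats.pages_prefetch_read_count.fetch_add(pages, std::memory_order_relaxed);
}

int main() {
  engine_stats stats;
  count_prefetch_request(stats, 32);
  return stats.pages_prefetch_read_count.load() == 32 ? 0 : 1;
}
```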
-