- 08 Aug, 2024 1 commit
-
-
Daniel Bartholomew authored
-
- 24 Jul, 2024 1 commit
-
-
Dmitry Shulga authored
SP instructions, which make up the body of a stored routine, shared the same memory root as the instance of the class sp_head that represents the stored routine itself. This resulted in memory leaks on re-parsing a failed statement of a stored routine whenever the statement had to be re-compiled because of changes in the metadata of tables, triggers, etc. that the stored routine depends on.

To fix this kind of memory leak, every SP instruction that requires access to a LEX object now re-parses a failed statement on its own memory root. These memory roots are allocated on sp_head's memory root, and every instance of the sp_lex_instr class holds a pointer to the allocated memory root once re-parsing of the corresponding SP instruction has been requested. On every subsequent re-parsing of the failed statement, the memory allocated on the SP instruction's memory root is released and the memory root is re-initialized. All further allocations made while re-parsing the SP instruction's statement are performed on this dedicated memory root, so no memory leaks happen on SP statement re-parsing.
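A minimal standalone sketch of the per-instruction arena pattern described above; the Arena and LexInstr types are illustrative stand-ins, not MariaDB's actual MEM_ROOT/sp_lex_instr API:

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    // Illustrative arena: allocations are freed all at once on reset().
    struct Arena {
        std::vector<void *> blocks;
        void *alloc(std::size_t n) {
            void *p = std::malloc(n);
            blocks.push_back(p);
            return p;
        }
        void reset() {                 // analogous to releasing and
            for (void *p : blocks)     // re-initializing the memory root
                std::free(p);
            blocks.clear();
        }
        ~Arena() { reset(); }
    };

    struct LexInstr {                  // stands in for sp_lex_instr
        Arena reparse_arena;           // dedicated per-instruction root
        void reparse_failed_statement() {
            reparse_arena.reset();     // drop the previous parse attempt
            reparse_arena.alloc(128);  // new parse-tree allocations land
        }                              // here, not on the routine-wide root
    };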
-
- 15 Jul, 2024 1 commit
-
-
Robin Newhouse authored
Fedora 40 introduced wget2 as the default wget program, which broke the test_upgrade.sh script. Modified the archive.mariadb.org check so that it uses a one-line `curl` check to identify the correct repository URL.

Additionally added RPM sources for boost-program-options and OpenSSL 1.1 and 1.0.2. This is necessary when building older versions of MariaDB (e.g., 10.4) on newer Linux distributions (e.g., Fedora 39) that do not always have access to all required dependencies. In the above example, older MariaDB versions are not compatible with OpenSSL 3.0+, so they require an older version.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services.
-
- 09 Jul, 2024 2 commits
-
-
Alexander Barkov authored
-
Alexander Barkov authored
-
- 08 Jul, 2024 7 commits
-
-
Oleksandr Byelkin authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
my_like_range*() can create longer keys than Field::char_length(). This caused warnings during print_range(). Fix: suppress the warnings in print_range().
-
Anson Chung authored
Line numbers had to be removed from the ignorelists in order to be diffed against, since locations of the same findings can differ across runs. Therefore, preprocessing has to be done on the CI findings so that they can be compared to the ignorelist and new findings can be output. However, since line numbers have to be removed, it becomes difficult to reference the location of findings in the code given the output of the CI job.

To lessen this pain, change the cppcheck template to include code snippets, which make it easier to see what code a finding refers to, even in the absence of line numbers. Ignorelisting works as before, since the location of a finding may change but not the code it refers to.

Furthermore, due to the innate difficulty of maintaining ignorelists across branches and triaging new findings, allow failure so as not to have constantly failing pipelines as a result of new findings that have not been addressed yet.

Lastly, update the SAST ignorelists to match the newly refactored cppcheck job and the current state of the codebase.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Anson Chung authored
Rectify cases of mismatched brackets and address possible cases of division by zero by checking if the denominator is zero before dividing. No functional changes were made. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
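A minimal sketch of the guard pattern described (illustrative only, not the code that was changed):

    #include <cstdio>

    // Return a/b, falling back to a caller-chosen default when b == 0,
    // instead of hitting undefined behaviour (integer division by zero).
    static long safe_div(long a, long b, long if_zero) {
        return b == 0 ? if_zero : a / b;
    }

    int main() {
        std::printf("%ld\n", safe_div(10, 2, 0));  // 5
        std::printf("%ld\n", safe_div(10, 0, 0));  // 0, no crash
        return 0;
    }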
-
Marko Mäkelä authored
crc32_avx512(): Explicitly cast size to ssize_t to make it clear that we are indeed applying a negative offset to a pointer.
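A standalone illustration of the idiom (a hypothetical byte-summing loop, not the actual crc32_avx512() code): the pointer is moved to the end of the buffer and the data is then addressed with a negative signed offset, so the unsigned size needs an explicit cast (std::ptrdiff_t plays the role of ssize_t here):

    #include <cstddef>
    #include <cstdio>

    unsigned sum_bytes(const unsigned char *buf, std::size_t size) {
        const unsigned char *end = buf + size;  // one past the last byte
        // Without the cast, -size would wrap around as unsigned.
        std::ptrdiff_t i = -static_cast<std::ptrdiff_t>(size);
        unsigned sum = 0;
        for (; i != 0; i++)
            sum += end[i];                      // end[-size] .. end[-1]
        return sum;
    }

    int main() {
        const unsigned char data[] = {1, 2, 3};
        std::printf("%u\n", sum_bytes(data, sizeof data));  // prints 6
        return 0;
    }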
-
- 07 Jul, 2024 1 commit
-
-
Monty authored
The issue was that when repairing an Aria table of row format PAGE and the data file was bigger than 4G, the data file length was cut short because of wrong parameters to MY_ALIGN(). The effect was that ALTER TABLE, OPTIMIZE TABLE or REPAIR TABLE would fail on these tables, possibly corrupting them. The MDEV also exposed a bug where the error state was not propagated properly to the upper level if the number of rows in the table changed.
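A sketch of how this class of bug bites: if an alignment computation is evaluated in a 32-bit type, lengths above 4 GiB are silently truncated. The macro body below is the conventional power-of-two rounding idiom, assumed here purely for illustration:

    #include <cstdint>
    #include <cstdio>

    #define ALIGN_UP(A, L) (((A) + (L) - 1) & ~((L) - 1))

    int main() {
        std::uint64_t file_length = 0x100002345ULL;  // > 4 GiB
        // Wrong: the length is forced into 32 bits, cutting off the top bits.
        std::uint32_t bad = ALIGN_UP((std::uint32_t) file_length, 8192U);
        // Right: keep the arithmetic in 64 bits.
        std::uint64_t good = ALIGN_UP(file_length, (std::uint64_t) 8192);
        std::printf("bad=0x%x good=0x%llx\n", bad, (unsigned long long) good);
        return 0;
    }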
-
- 06 Jul, 2024 1 commit
-
-
Brandon Nesterenko authored
The current semi-sync binlog fail-over recovery process uses rpl_semi_sync_slave_enabled==TRUE as its condition to truncate a primary server's binlog, as it anticipates the server re-joining a replication topology as a replica. However, for servers configured with both rpl_semi_sync_master_enabled=1 and rpl_semi_sync_slave_enabled=1, if a primary is just re-started (i.e. retaining its role as master), it can truncate its binlog to drop transactions which its replica(s) have already received and executed. If this happens, when the replica reconnects, its gtid_slave_pos can be ahead of the recovered primary's gtid_binlog_pos, resulting in an error state where the replica's state is ahead of the primary's.

This patch changes the condition for semi-sync recovery to truncate the binlog to instead use the configuration variable --init-rpl-role, when set to SLAVE. This allows both rpl_semi_sync_master_enabled and rpl_semi_sync_slave_enabled to be set for a primary that is restarted, and no transactions will be lost, so long as --init-rpl-role is not set to SLAVE.

Reviewed By: Sergei Golubchik <serg@mariadb.com>
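Schematically, the truncation decision moves from a semi-sync flag to the operator-declared startup role. The names below are illustrative placeholders, not the server's actual internal identifiers:

    enum rpl_role { RPL_ROLE_MASTER, RPL_ROLE_SLAVE };

    struct recovery_ctx {              // state at binlog crash recovery
        bool semi_sync_slave_enabled;  // rpl_semi_sync_slave_enabled
        rpl_role init_rpl_role;        // --init-rpl-role
    };

    // Before: a restarted primary with both semi-sync roles enabled would
    // truncate its binlog, dropping transactions replicas already executed.
    bool should_truncate_before(const recovery_ctx &c) {
        return c.semi_sync_slave_enabled;
    }

    // After: truncate only when the operator declared the server will
    // rejoin the topology as a replica.
    bool should_truncate_after(const recovery_ctx &c) {
        return c.init_rpl_role == RPL_ROLE_SLAVE;
    }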
-
- 05 Jul, 2024 3 commits
-
-
Brandon Nesterenko authored
The special logic used by the memory storage engine to keep slaves in sync with the master on a restart can break replication. In particular, after a restart, the master writes DELETE statements in the binlog for each MEMORY-based table so the slave can empty its data. If the DELETE is not executable, e.g. due to invalid triggers, the slave will error and fail, whereas the master will never see the problem. Instead of DELETE statements, use TRUNCATE to keep slaves in sync with the master, thereby bypassing triggers.

Reviewed By: Kristian Nielsen <knielsen@knielsen-hq.org>, Andrei Elkin <andrei.elkin@mariadb.com>
-
Thirunarayanan Balathandayuthapani authored
During read-only mode, InnoDB doesn't allow a checkpoint to happen. So InnoDB should issue a warning when it is asked to force a checkpoint while innodb_read_only = 1 or innodb_force_recovery = 6.
-
Hugo Wen authored
MariaDB supports a "wait-free concurrent allocator based on pinning addresses". In `lf_pinbox_real_free()` it tries to sort the pinned addresses for better performance, so that binary search can be used during "real free". `alloca()` was used to allocate stack memory and copy the addresses. To prevent a stack overflow when allocating the stack memory, the function checks whether there is enough stack space. However, the available stack size was calculated inaccurately, which eventually caused a database crash due to stack overflow. The crash was seen on MariaDB 10.6.11, but the same code defect exists in all MariaDB versions.

A similar issue happened previously, and the fix in fc2c1e43 was to add an `ALLOCA_SAFETY_MARGIN` of 8192 bytes. However, that safety margin is not enough during high-connection workloads. MySQL had a similar issue, and the fix https://github.com/mysql/mysql-server/commit/b086fda was to remove the use of `alloca` and replace the qsort approach with a linear scan through all pointers (pins) owned by each thread. This commit does mostly the same, as that is the only way to solve this issue (see the sketch below):
1. Frame sizes on different architectures can be different.
2. The number of active (non-null) pinned addresses varies, so the frame size for the recursive sorting function `msort_with_tmp` is also hard to predict.
3. Allocating big memory blocks on the stack isn't good practice.

For further details see the mentioned commit in MySQL and the inline comments.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
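A simplified sketch of the replacement strategy: rather than copying every pinned address into an alloca()'d array and sorting it for binary search, scan each thread's small fixed-size pin array directly (names and sizes are illustrative, not the real LF_PINBOX layout):

    const int PINS_PER_THREAD = 4;   // illustrative small constant

    struct thread_pins {
        void *pin[PINS_PER_THREAD];  // active (non-null) pinned addresses
        thread_pins *next;           // next thread's pins
    };

    // True if addr is currently pinned by any thread. O(threads * pins),
    // but needs no stack copy, no alloca(), and no qsort().
    bool is_pinned(const thread_pins *head, const void *addr) {
        for (const thread_pins *t = head; t; t = t->next)
            for (int i = 0; i < PINS_PER_THREAD; i++)
                if (t->pin[i] == addr)
                    return true;
        return false;
    }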
-
- 04 Jul, 2024 6 commits
-
-
Sergei Petrunia authored
-
Sergei Petrunia authored
The symptoms were: take a server with no activity and a table that's not in the buffer pool. Run a query that reads the whole table and observe that r_engine_stats.pages_read_count shows about 2% of the table was read. Who reads the rest?

The cause was that page prefetching done inside InnoDB was not counted. This counts page prefetch requests made in buf_read_ahead_random() and buf_read_ahead_linear() and makes them visible in:
- ANALYZE: r_engine_stats.pages_prefetch_read_count
- Slow Query Log: Pages_prefetched:

This patch intentionally doesn't attempt to count the time to read the prefetched pages:
* there's no obvious place where one can do it
* prefetch reads may be done in parallel (right?), it is not clear how to count the time in this case.
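Conceptually, the change amounts to bumping a per-statement counter whenever a read-ahead request is issued, so the number can surface in ANALYZE and the slow query log. A minimal sketch with illustrative names:

    #include <atomic>

    struct engine_stats {  // illustrative per-statement statistics
        std::atomic<unsigned long long> pages_read_count{0};
        std::atomic<unsigned long long> pages_prefetch_read_count{0};
    };

    // Called from the read-ahead paths (cf. buf_read_ahead_random() and
    // buf_read_ahead_linear()) with the number of page reads scheduled.
    void on_prefetch_scheduled(engine_stats &stats, unsigned long long pages) {
        stats.pages_prefetch_read_count += pages;
    }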
-
Galina Shalygina authored
The crash is caused by the attempt to refix a constant subquery during pushdown from HAVING into WHERE optimization.

Every condition that is going to be pushed into the WHERE clause is first cleaned up, then refixed. Constant subqueries are not cleaned or refixed because they will remain the same after refixing, so this complicated procedure can be omitted for them (introduced in MDEV-21184). Constant subqueries are marked with the flag IMMUTABLE_FL, which makes the cleanup stage skip them. They are also marked as fixed, so refixing is not done for them either.

Because of multiple equality propagation, several references to the same constant subquery can exist in the condition that is going to be pushed into WHERE. Before this patch, the problem appeared in the following way: after the first reference to the constant subquery was processed, the IMMUTABLE_FL flag of the constant subquery was disabled. So, when the second reference to this constant subquery was processed, the flag was already disabled and the subquery went through the procedure of cleaning and refixing. That caused a crash.

To solve this problem, IMMUTABLE_FL should be disabled only after all references to the constant subquery are processed, i.e. after the whole condition that is going to be pushed has been cleaned up and refixed.

Approved by Igor Babaev <igor@maridb.com>
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Alexander Barkov authored
Applying the COLLATE clause to a parameter caused an error: COLLATION '...' is not valid for CHARACTER SET 'binary'

Fix:
- Changing the collation derivation for a non-prepared Item_param to DERIVATION_IGNORABLE.
- Allowing any COLLATE clause to be applied to expressions with DERIVATION_IGNORABLE. This includes:
  1. A non-prepared Item_param
  2. An explicit NULL
  3. Expressions derived from #1 and #2
  For example:
    SELECT ? COLLATE utf8mb_unicode_ci;
    SELECT NULL COLLATE utf8mb_unicode_ci;
    SELECT CONCAT(?) COLLATE utf8mb_unicode_ci;
    SELECT CONCAT(NULL) COLLATE utf8mb_unicode_ci;
- Additional change: preserving the collation of an expression when the expression gets assigned to a PS parameter and evaluates to SQL NULL. Before this change, the collation of the parameter was erroneously set to &my_charset_binary.
- Additional change: removing the multiplication by mbmaxlen from the fix_char_length_ulonglong() argument, because the multiplication already happens inside fix_char_length_ulonglong(). This fixes a too-large column size created for a COLLATE clause.
-
- 03 Jul, 2024 6 commits
-
-
Brandon Nesterenko authored
In 10.0 there was an assert to ensure that there were semi-sync clients before removing one, but it was removed in 10.1. This patch adds the assertion back.
-
mariadb-DebarunBanerjee authored
The performance regression seen while loading the BP is caused by the deadlock fix given in MDEV-33543. The area of impact is wider, but it is more visible when the BP is being loaded initially via DMLs. Specifically, the response time could be impacted in DMLs doing pessimistic operations on an index (split/merge) when the leaf pages are not found in the buffer pool. It is more likely to occur with a small BP size.

The origin of the issue dates back to MDEV-30400, which introduced btr_cur_t::search_leaf() replacing btr_cur_search_to_nth_level() for leaf page searches. In btr_latch_prev, we use RW_NO_LATCH to get the previous page fixed in the BP without latching. When the page is not in the BP, we try to acquire and wait for an S latch, violating the latching order. This deadlock was analyzed in MDEV-33543 and fixed by using the already present wait logic in buf_page_get_gen() instead of waiting for the latch. That wait logic is inferior to the usual S latch wait and is simply a repeated 100 microsecond sleep (the actual sleep time could be more, depending on the platform).

The bug was seen with the "change-buffering" code path, and the idea was that this path should be exercised less. That judgement was not correct: the path is actually quite frequent and does impact performance when pages are not in the BP and are being loaded by DMLs expanding/shrinking large data.

Fix: While trying to get a page with RW_NO_LATCH and attempting an "out of order" latch, return from buf_page_get_gen immediately instead of waiting, and follow the ordered latching path.
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Monty authored
If compiled for debugging, LOCK_DURATION is also filled in.
-
Daniel Black authored
Valgrind flags the assertions as examining uninitialized values. As the assertions are tested in other debug builds, we know they aren't all invalid. Account for Valgrind by removing the assertion under WITH_VALGRIND=1 compilation.
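Illustratively, the change amounts to compiling the check out of WITH_VALGRIND=1 builds while keeping it in other debug builds (a standalone sketch, not the actual assertion):

    #include <cassert>

    // Checks padding bytes that are valid to the program but that Valgrind
    // considers uninitialized, so the check is skipped in Valgrind builds.
    void check_padding(const unsigned char *buf, unsigned used, unsigned cap) {
    #ifndef WITH_VALGRIND
        for (unsigned i = used; i < cap; i++)
            assert(buf[i] == 0);  // padding is zeroed by construction
    #else
        (void) buf; (void) used; (void) cap;  // no reads under Valgrind
    #endif
    }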
-
- 02 Jul, 2024 6 commits
-
-
Dmitry Shulga authored
The memory leak happened on the second execution of a prepared statement that runs an UPDATE statement with a correlated subquery on the right hand side of the SET clause. In this case, invocation of the method table->stat_records() could return zero, which results in going into the 'if' branch that handles an impossible WHERE condition. The issue is that this condition branch missed saving of leaf tables, which has to be performed as the first condition-optimization activity. Later, the PS statement memory root is marked as read only on finishing the first execution of the prepared statement. The next time the same statement is executed, it hits an assertion on the attempt to allocate memory on the PS memory root marked as read only. This memory allocation takes place through the sequence of the following invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to control whether the method SELECT_LEX::save_leaf_tables() has to be called or has already been invoked and no more invocations are required.

A similar issue could take place on running a DELETE statement with the LIMIT clause in PS/SP mode. The reason for the memory leak is the same as for the UPDATE case, and it is fixed in the same way.
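The fix as described reduces to a run-once guard. A minimal sketch built around the flag name from the message, with the surrounding types simplified:

    #include <vector>

    struct Table {};

    struct select_lex {                  // simplified st_select_lex
        bool leaf_tables_saved = false;  // the new flag
        std::vector<Table *> leaf_tables;

        void save_leaf_tables(std::vector<Table *> &out) {
            if (leaf_tables_saved)
                return;  // repeat execution: no allocation is attempted on
                         // the now read-only statement memory root
            out.insert(out.end(), leaf_tables.begin(), leaf_tables.end());
            leaf_tables_saved = true;
        }
    };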
-
Nikita Malyavin authored
Caused by: 5d37cac7 MDEV-33348 ALTER TABLE lock waiting stages are indistinguishable. In that commit, progress reporting was moved to mysql_alter_table from copy_data_between_tables. The temporary table case wasn't taken into consideration, where the execution of mysql_alter_table ends earlier than usual, via the 'end_temporary' label. There, thd_progress_end was missing. Fix: add the missing thd_progress_end() call in mysql_alter_table.
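Schematically (illustrative control flow and simplified signatures, not the actual mysql_alter_table() body): every early exit has to close the progress report that the function opened:

    struct THD {};                        // stand-in connection handle
    void thd_progress_init(THD *) {}      // simplified: opens a progress report
    void thd_progress_end(THD *) {}       // closes the current progress report

    bool alter_table_sketch(THD *thd, bool is_temporary_table) {
        thd_progress_init(thd);
        if (is_temporary_table) {
            // The 'end_temporary' path previously returned here without
            // closing the progress report, leaving the stage dangling.
            thd_progress_end(thd);        // the added call
            return false;
        }
        // ... wait for locks, copy data, report progress ...
        thd_progress_end(thd);
        return false;
    }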
-
Monty authored
The feedback plugin's server_uid variable and the calculate_server_uid() function are moved from feedback/utils.cc to sql/mysqld.cc. server_uid is added as a global variable (shown in 'show variables') and is written to the error log on server startup together with the server version and server commit id.
-
Monty authored
We have an issue if a user has the following in a configuration file:
 log_slow_filter=""  # Log everything to slow query log
 log_queries_not_using_indexes=ON

This sets log_slow_filter to 'not_using_index', which disables slow query logging of most queries. In effect, one should never use log_slow_filter="" in config files but instead use log_slow_filter=ALL.

Fixed by changing log_slow_filter="", when it comes either from a configuration file or from the command line at server startup, to log_slow_filter=ALL. A warning will be printed when this happens.

Other things:
- One can now use =ALL for any 'set' variable to set all options at once. (backported from 10.6)
-
Daniel Black authored
When getaddrinfo returns an error, the contents of ai are invalid, so we cannot continue based on their data structures. The previous branch of the if statement already aborts if there is an error, so for consistency we abort here too.

The test case fixes the port number to UINTMAX32 for both an enumerated bind-address and the default bind-address, covering the two calls to getaddrinfo.

Review thanks to Sanja.
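For reference, the standard pattern: getaddrinfo()'s result list may only be used when the call returns 0 (a self-contained sketch, independent of the server's code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main() {
        struct addrinfo hints = {}, *ai = nullptr;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo("localhost", "3306", &hints, &ai);
        if (err != 0) {
            // ai is not valid here; walking the list would be undefined.
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return EXIT_FAILURE;
        }
        for (struct addrinfo *p = ai; p; p = p->ai_next)
            printf("family=%d\n", p->ai_family);
        freeaddrinfo(ai);
        return 0;
    }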
-
Lena Startseva authored
Fix for v. 10.5
-
- 01 Jul, 2024 4 commits
-
-
Alexander Barkov authored
Item_func_hex::fix_length_and_dec() evaluated too short a data type for signed numeric arguments, which resulted in a 'Data too long for column' error on CREATE..SELECT. Fixing the code to take into account that a short negative number can produce a long HEX value: -1 -> 'FFFFFFFFFFFFFFFF'

Also fixing Item_func_hex::val_str_ascii_from_val_real(). Without this change, the MTR test with HEX with negative floating point arguments failed on some platforms (aarch64, ppc64le, s390x).
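The length problem is easy to demonstrate: a negative value is hexed via its 64-bit two's-complement image, so even a single-digit input needs 16 hex digits (a standalone sketch, unrelated to the server code):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::int64_t v = -1;
        // HEX(-1) operates on the 64-bit two's-complement representation:
        std::printf("%llX\n", (unsigned long long) v);  // FFFFFFFFFFFFFFFF
        // hence the result type must allow 16 hex digits, far more than
        // the short literal "-1" would suggest.
        return 0;
    }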
-
Denis Protivensky authored
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Denis Protivensky authored
InnoDB transactions may be reused after being committed:
- when taken from the transaction pool
- during a DDL operation execution

In this case the wsrep flag on the trx object is cleared, which may cause wrong execution logic afterwards (wsrep-related hooks are not run). Make the trx->wsrep flag initialize from the THD object only once, on InnoDB transaction start, and don't change it throughout the transaction's lifetime. The flag is reset at commit time as before.

Unconditionally set wsrep=OFF for THD objects that represent InnoDB background threads.

Make Wsrep_schema::store_view() operate in its own transaction.

Fix streaming replication transactions' fragments rollback so that it doesn't switch the THD->wsrep value during the transaction's execution (use THD->wsrep_ignore_table as a workaround).

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Daniel Black authored
extra_port and port are 16-bit numbers, not 32-bit, as they are TCP ports. Restrict their values.
-
- 29 Jun, 2024 1 commit
-
-
Daniel Black authored
Noticed thanks to Razvan Liviu Varzaru
-