- 23 Jan, 2024 8 commits
-
-
Monty authored
The old code collected a list of THD's, protected the THD's from getting deleted by locking two mutexes, and then later, in a separate loop, sent a kill signal to each THD. The problem with this approach is that, as THD's can be reused, the second time a THD is killed the mutexes can be taken in a different order, which triggers failures in safe_mutex. Fixed by sending the kill signal directly instead of collecting the THD's in a list to be signaled later (see the sketch below). This is the same approach we are using in kill_zombie_dump_threads().
Other things:
- Reset safe_mutex_t->locked_mutex when freed (safety fix)
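A minimal, self-contained C++ sketch of the pattern described above, using stand-in names (Session, SessionList) rather than the actual THD/kill API: each session is signalled directly while the list is scanned, so each per-session lock is only ever taken nested inside the list lock, in one consistent order, instead of being re-acquired later for possibly reused sessions.

```cpp
#include <list>
#include <mutex>

struct Session {
  std::mutex kill_lock;            // stand-in for a per-THD kill mutex
  bool killed = false;
  void awake() { killed = true; }  // stand-in for sending the kill signal
};

struct SessionList {
  std::mutex list_lock;            // stand-in for the global thread-list mutex
  std::list<Session*> sessions;

  // Kill every session directly while scanning; there is no second pass
  // that could re-lock a reused session in a different lock order.
  void kill_all() {
    std::lock_guard<std::mutex> g(list_lock);
    for (Session *s : sessions) {
      std::lock_guard<std::mutex> k(s->kill_lock);
      s->awake();
    }
  }
};
```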
-
Monty authored
This was needed to get semisync to work on Windows.
-
Michael Widenius authored
rpl_semi_sync_slave_enabled_consistent.test and the first part of the commit message come from Brandon Nesterenko. A test to show how to induce the "Read semi-sync reply magic number error" message on a primary. In short, if semi-sync is turned on during the handshake between a primary and replica, but a user later negates the rpl_semi_sync_slave_enabled variable while the replica's IO thread is running, then when the IO thread exits, the replica can skip a necessary call to kill_connection() in repl_semisync_slave.slave_stop() due to its reliance on a global variable. The replica will then send a COM_QUIT packet to the primary on an active semi-sync connection, causing the magic number error. The test in this patch exits the IO thread by forcing an error; note that a call to STOP SLAVE could also do this, but it ends up needing more synchronization: the STOP SLAVE command also tries to kill the VIO of the replica, which creates a race with the IO thread trying to send the COM_QUIT before this happens (working around this would need more debug_sync). See THD::awake_no_mutex for details on the killing of the replica's VIO.
Notes:
- The MariaDB documentation does not make it clear that when one enables semi-sync replication it does not matter whether one enables it first on the master or the slave. Any order works.
Changes done:
- The rpl_semi_sync_slave_enabled variable is now a default value used when semisync is started. The variable no longer affects semisync if it is already running. This fixes the originally reported bug. Internally we now use repl_semisync_slave.get_slave_enabled() instead of rpl_semi_sync_slave_enabled. To check whether semisync is active one should check the @@rpl_semi_sync_slave_status variable (as before).
- The semisync protocol conflicts with the way the original MySQL/MariaDB client-server protocol was designed (client-server send and reply packets are strictly ordered and include a packet number so that one can check whether a packet was lost). When using semi-sync the master and slave can send packets at 'any time', so packet numbering does not work. The 'solution' has been that each communication starts with packet number 1, but in some cases there is still a chance that the packet number check can fail. Fixed by adding a flag (pkt_nr_can_be_reset) in the NET struct that one can use to signal that packet number checking should not be done. This flag is set when semi-sync is used.
- Added Master_info::semi_sync_reply_enabled to allow one to configure some slaves with semisync and other slaves without semisync. Removed the global variable semi_sync_need_reply, which would not work with multi-master.
- Repl_semi_sync_master::report_reply_packet() can now recognize the COM_QUIT packet from a semisync slave and does not give a "Read semi-sync reply magic number error" error for this case. The slave will be removed from the Ack listener.
- On Windows, don't stop the semisync Ack listener just because one slave connection is using a socket_id > FD_SETSIZE.
- Removed the busy loop in Ack_receiver::run() by using the "self-pipe trick" to signal a new slave and to stop the Ack_receiver (a minimal sketch of this trick follows below).
- Changed some Repl_semi_sync_slave functions that always return 0 from int to void.
- Added Repl_semi_sync_slave::slave_reconnect().
- Removed the dummy function Repl_semi_sync_slave::reset_slave().
- Removed some duplicate semisync notes from the error log.
- Added a test of "if (get_slave_enabled() && semi_sync_need_reply)" before calling Repl_semi_sync_slave::slave_reply(). (Speeds up the code as we can skip all initializations.)
- If repl_semisync_slave.slave_reply() fails, we disable semisync for that connection.
- We do not call semisync.switch_off() if there are no active slaves. Instead we check in Repl_semi_sync_master::commit_trx() whether there are no active threads. This simplifies the code.
- Changed assert() to DBUG_ASSERT() to ensure that the DBUG log is flushed in case of asserts.
- Removed the internal rpl_semi_sync_slave_status as it is not needed anymore. The @@rpl_semi_sync_slave_status status variable is now mapped to rpl_semi_sync_enabled.
- Removed rpl_semi_sync_slave_enabled as it is not needed anymore. Repl_semi_sync_slave::get_slave_enabled() contains the active status.
- Added checking that we do not add a slave twice with Ack_receiver::add_slave(). This could happen with the old code.
- Removed Repl_semi_sync_master::check_and_switch() as it is not needed anymore.
- Ensure that when we call Ack_receiver::remove_slave() the slave is removed from the listener before the function returns.
- Call listener.listen_on_sockets() outside of the mutex for better performance and a less contested mutex.
- Ensure that the listener ignores newly added slaves when checking for responses.
- Fixed so that the master ack_receiver listener is not killed if there are no connected slaves (which would stop semisync handling of future connections). This could happen if all slave sockets were marked as unreliable.
- Added unlink() to base_ilist_iterator and remove() to I_List_iterator. This enables us to remove 'dead' slaves in Ack_receiver::run().
- kill_zombie_dump_threads() now does the killing of dump threads properly.
  - It can now kill several threads (should be impossible, but could happen if IO slaves reconnect very fast).
  - We now wait until the dump thread is done before starting the dump.
  - Added an error if kill_zombie_dump_threads() fails.
- Set thd->variables.server_id before calling kill_zombie_dump_threads(). This simplifies the code.
- Added a lot of comments, both in code and tests.
- Removed DBUG_EVALUATE_IF "failed_slave_start" as it is not used.
Test changes:
- Added rpl.rpl_session_var2, which runs the rpl.rpl_session_var test with semisync enabled.
- Some timings changed slightly with the startup of the slave, which caused rpl_binlog_dump_slave_gtid_state_info.test to fail as it checked the error log file before the slave had started properly. Fixed by adding wait_for_pattern_in_file.inc, which allows waiting for a pattern to appear in the log file.
- Tests have been updated so that we first set rpl_semi_sync_master_enabled on the master and then set rpl_semi_sync_slave_enabled on the slaves (this follows how the MariaDB documentation describes setting up semi-sync).
- Error text "Master server does not have semi-sync enabled" has been replaced with "Master server does not support semi-sync" for the case when the master supports semi-sync but semi-sync is not enabled.
Other things:
- Some trivial cleanups in Repl_semi_sync_master::update_sync_header().
- In 11.3 we should change the default value of rpl-semi-sync-master-wait-no-slave from TRUE to FALSE, as TRUE does not make much sense as a default. The main difference with FALSE is that we do not wait for a semisync Ack if there are no slave threads. With TRUE we wait once, which did not bring any notable benefit except a slower startup of a master configured for semisync.
Co-author: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
This solves the problem reported in MDEV-32960, where a new slave may not be registered in time and the master disables semi-sync because of that.
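A minimal sketch of the "self-pipe trick" mentioned in the Ack_receiver::run() item above. This is illustrative POSIX C++ with assumed names (wake_pipe, signal_listener, wait_for_activity), not the actual MariaDB implementation: the listener includes the read end of a pipe in its select() set, and other threads write one byte to the write end to wake it (for example when a new slave is added or the receiver should stop), so no busy polling is needed.

```cpp
#include <unistd.h>
#include <sys/select.h>

static int wake_pipe[2];            // [0] = read end, [1] = write end

bool init_wakeup() { return pipe(wake_pipe) == 0; }

// Called from another thread to wake the listener.
void signal_listener()
{
  char b = 0;
  (void) write(wake_pipe[1], &b, 1);
}

// Simplified single-slave listener loop body.
void wait_for_activity(int slave_fd)
{
  fd_set rfds;
  FD_ZERO(&rfds);
  FD_SET(wake_pipe[0], &rfds);      // wake-up channel
  FD_SET(slave_fd, &rfds);          // slave ACK socket
  int maxfd = (wake_pipe[0] > slave_fd ? wake_pipe[0] : slave_fd) + 1;
  if (select(maxfd, &rfds, nullptr, nullptr, nullptr) > 0 &&
      FD_ISSET(wake_pipe[0], &rfds))
  {
    char b;
    (void) read(wake_pipe[0], &b, 1);  // drain the wake-up byte
  }
}
```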
-
Rucha Deodhar authored
client is not using any database to execute the SQL. Analysis: When there is no database, the database string is NULL, so (null) gets printed. Fix: Print NULL instead of (null), because when there is no database SELECT DATABASE() returns NULL, so NULL is the more appropriate choice.
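A tiny illustrative sketch with an assumed helper name (print_db), not the actual client code: print the literal string "NULL" when no database is selected, instead of letting printf render a null pointer as "(null)", matching what SELECT DATABASE() returns.

```cpp
#include <cstdio>

// db may be a null pointer when the client has not selected a database.
void print_db(const char *db)
{
  std::printf("Database: %s\n", db ? db : "NULL");
}
```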
-
Rucha Deodhar authored
to SQL error plugin. A new plugin variable "with_db_and_thread_info" is added, which prints the thread id and database name to the logfile. The value is stored in the variable "with_db_and_thread_info"; log_sql_errors() is responsible for printing to the log. If the variable is enabled, print both the thread id and the database name; otherwise skip them.
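An illustrative sketch only, with an assumed helper name and log format (log_sql_error), not the plugin's exact output: when with_db_and_thread_info is enabled, each logged error is prefixed with the thread id and the current database (or NULL when none is selected).

```cpp
#include <cstdio>

void log_sql_error(FILE *f, bool with_db_and_thread_info,
                   unsigned long thread_id, const char *db,
                   const char *user, int err_no, const char *query)
{
  if (with_db_and_thread_info)                        // optional prefix
    std::fprintf(f, "[%lu] %s ", thread_id, db ? db : "NULL");
  std::fprintf(f, "%s ERROR %d: %s\n", user, err_no, query);
}
```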
-
Daniel Black authored
Notably MDEV-33290, columnstore disable stays in 10.5.
-
Daniel Black authored
MCOL-5611 tracks supporting Boost-1.80; the "next_prime" function present in https://github.com/boostorg/unordered/blob/boost-1.79.0/include/boost/unordered/detail/implementation.hpp disappears in later versions, which makes 1.79 the currently highest supported version. Let's check for this version. While CMake-3.19+ supports version ranges in package determination, this isn't supported for Boost even in CMake-3.28. So we check for 1.80 and, when it is found, don't compile ColumnStore.
-
Dmitry Shulga authored
-
- 22 Jan, 2024 7 commits
-
-
Rex authored
Statements affected by this bug are all SQL statements that 1) are prefixed with "EXPLAIN" and 2) have a lower-level join structure created for a union subquery. A bug in select_describe() passed an incorrect "result" object to mysql_explain_union(), resulting in unpredictable behaviour and out-of-context calls. Reviewed by: Oleksandr Byelkin, sanja@mariadb.com
-
Oleksandr Byelkin authored
-
Brandon Nesterenko authored
An existing binlog checksum can be overridden to 0 if writing a NULL payload when using Zlib for the computation. That is, calling into Zlib's crc32 with empty data initializes an incremental CRC computation to 0. This patch changes Log_event_writer::write_data() to exit immediately if there is nothing to write, thereby bypassing the checksum computation. This follows the pattern of Log_event_writer::encrypt_and_write(), which also exits immediately if there is no data to write.
Reviewed By: Andrei Elkin <andrei.elkin@mariadb.com>
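A simplified sketch of the guard described above, not the actual Log_event_writer::write_data() code: skip the checksum update entirely for an empty write, because zlib's crc32() returns 0 when handed a NULL buffer, which would discard an already accumulated checksum.

```cpp
#include <zlib.h>
#include <cstddef>

// Update an incremental CRC-32 only when there is actual data to write.
void checksum_update(uLong &crc, const unsigned char *buf, std::size_t len)
{
  if (buf == nullptr || len == 0)
    return;                          // nothing to write: keep the existing CRC
  crc = crc32(crc, buf, static_cast<uInt>(len));
}
```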
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Daniel Black authored
A mix of path separators looks odd:
InnoDB: Loading buffer pool(s) from C:\xampp\mysql\data/ib_buffer_pool
This was changed in cf552f58. Both forward slashes and backslashes work on Windows, and we do not use \\?\ names, so we improve the consistent look of it so it doesn't look like a bug: in this case, normalize the path separator to \ when making the filename.
Reported thanks to Github user @celestinoxp.
Closes: https://github.com/ApacheFriends/xampp-build/issues/33
Reviewed by: Marko Mäkelä and Vladislav Vaintroub
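A small sketch of the normalization idea, using a hypothetical helper (make_win_filename) rather than the actual server function: on Windows, build the file name with one consistent separator instead of mixing "/" and "\".

```cpp
#include <string>
#include <algorithm>

std::string make_win_filename(std::string dir, const std::string &name)
{
  std::replace(dir.begin(), dir.end(), '/', '\\');   // normalize to backslashes
  if (!dir.empty() && dir.back() != '\\')
    dir += '\\';
  return dir + name;   // e.g. C:\xampp\mysql\data\ib_buffer_pool
}
```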
-
- 19 Jan, 2024 8 commits
-
-
Sergei Golubchik authored
Use pkg-config to find pcre2, if possible. Rename PCRE_INCLUDES to PCRE_INCLUDE_DIRS to follow PKG_CHECK_MODULES naming.
-
Thirunarayanan Balathandayuthapani authored
MDEV-32968 InnoDB fails to restore tablespace first page from doublewrite buffer when page is empty
recv_dblwr_t::find_first_page(): Free the memory allocated for reading the first 3 pages from the tablespace.
innodb.doublewrite: Added a sleep to ensure the page cleaner thread wakes up from my_cond_wait.
-
Marko Mäkelä authored
Revert "MDEV-32899 InnoDB is holding shared dict_sys.latch while waiting for FOREIGN KEY child table lock on DDL" This reverts commit 569da6a7, commit 768a7361, and commit ba6bf7ad because of a regression that was filed as MDEV-33104.
-
Marko Mäkelä authored
In commit a55b951e (MDEV-26827) an error was introduced in a rarely executed code path of the buf_flush_page_cleaner() thread. As a result, the function buf_flush_LRU() could be invoked while not holding buf_pool.mutex. Reviewed by: Debarun Banerjee
-
Marko Mäkelä authored
buf_flush_LRU(): Display a warning if no pages could be evicted and no writes initiated.
buf_pool_t::need_LRU_eviction(): Renamed from buf_pool_t::ran_out(). Check if the amount of free pages is smaller than innodb_lru_scan_depth instead of checking if it is 0 (see the sketch below).
buf_flush_page_cleaner(): For the final LRU flush after a checkpoint flush, use a "budget" of innodb_io_capacity_max, like we do in the case when we are not in "furious" checkpoint flushing.
Co-developed by: Debarun Banerjee
Reviewed by: Debarun Banerjee
Tested by: Matthias Leich
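A simplified illustration of the need_LRU_eviction() change, with a stand-in type (buf_pool_sim) rather than the real InnoDB declarations: eviction is considered necessary as soon as the free list drops below the configured scan depth, rather than only when it is completely empty.

```cpp
#include <cstddef>

// Stand-in for the buffer pool state relevant to this decision.
struct buf_pool_sim {
  std::size_t n_free;          // current number of free pages
  std::size_t lru_scan_depth;  // stand-in for innodb_lru_scan_depth

  // Old behaviour: return n_free == 0; new behaviour: start evicting earlier.
  bool need_LRU_eviction() const { return n_free < lru_scan_depth; }
};
```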
-
Oleksandr Byelkin authored
-
Daniel Black authored
Noted by Susmeet Khaire - thanks.
-
Marko Mäkelä authored
The directio(3C) function on Solaris is supported on NFS and UFS while the majority of users should be on ZFS, which is a copy-on-write file system that implements transparent compression and therefore cannot support unbuffered I/O. Let us remove the call to directio() and simply treat innodb_flush_method=O_DIRECT in the same way as the previous default value innodb_flush_method=fsync on Solaris. Also, let us remove some dead code around calls to os_file_set_nocache() on platforms where fcntl(2) is not usable with O_DIRECT. On IBM AIX, O_DIRECT is not documented for fcntl(2), only for open(2).
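A hedged sketch of the portable pattern this message describes, with an assumed helper name (set_nocache): only attempt to enable O_DIRECT through fcntl() on platforms where that is known to work, and otherwise fall back to buffered I/O relying on fsync().

```cpp
#define _GNU_SOURCE            // for O_DIRECT on Linux
#include <fcntl.h>

// Returns true if unbuffered I/O could be enabled on the file descriptor.
static bool set_nocache(int fd)
{
#if defined(__linux__) && defined(O_DIRECT)
  int flags = fcntl(fd, F_GETFL);
  return flags != -1 && fcntl(fd, F_SETFL, flags | O_DIRECT) == 0;
#else
  (void) fd;
  return false;  // e.g. Solaris/ZFS or AIX: keep buffered I/O, rely on fsync()
#endif
}
```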
-
- 18 Jan, 2024 1 commit
-
-
Marko Mäkelä authored
-
- 17 Jan, 2024 4 commits
-
-
Sophist authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The parameter innodb_undo_log_truncate=ON enables a multi-phased logic (see the illustrative sketch below):
1. Any "producers" (new starting transactions) are prohibited from using the rollback segments that reside in the undo tablespace.
2. Any transactions that use any of the rollback segments must be committed or aborted.
3. The purge of committed transaction history must process all the rollback segments.
4. The undo tablespace is truncated and rebuilt.
5. The rollback segments are re-enabled for new transactions.
There was one flaw in this logic: The first step was not being invoked as often as it could be, and therefore innodb_undo_log_truncate=ON would have no chance to work during a heavy write workload. Independent of innodb_undo_log_truncate, even after commit 86767bcc we are missing some chances to free processed undo log pages. If we prohibited the creation of new transactions in one busy rollback segment at a time, we would be eventually guaranteed to be able to free such pages.
purge_sys_t::skipped_rseg: The current candidate rollback segment for shrinking the history independent of innodb_undo_log_truncate.
purge_sys_t::iterator::free_history_rseg(): Renamed from trx_purge_truncate_rseg_history(). Implement the logic around purge_sys.m_skipped_rseg.
purge_sys_t::truncate_undo_space: Renamed from truncate.
purge_sys.truncate_undo_space.last: Changed the type to integer to get rid of some pointer dereferencing and conditional branches.
purge_sys_t::truncating_tablespace(), purge_sys_t::undo_truncate_try(): Refactored from trx_purge_truncate_history(). Set purge_sys.truncate_undo_space.current if applicable, or return an already set purge_sys.truncate_undo_space.current.
purge_coordinator_state::do_purge(): Invoke purge_sys_t::truncating_tablespace() as part of the normal work loop, to implement innodb_undo_log_truncate=ON as often as possible.
trx_purge_truncate_rseg_history(): Remove a redundant parameter.
trx_undo_truncate_start(): Replace dead code with a debug assertion.
Correctness tested by: Matthias Leich
Performance tested by: Axel Schwenke
Reviewed by: Debarun Banerjee
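A purely illustrative enumeration of the five phases listed above, using hypothetical names rather than InnoDB code; it only captures the ordering constraint that each phase must complete before the next one begins.

```cpp
// Hypothetical names; not InnoDB declarations.
enum class UndoTruncatePhase {
  BLOCK_NEW_TRX,    // 1. stop new transactions from using the tablespace's rsegs
  WAIT_ACTIVE_TRX,  // 2. wait for transactions using those rsegs to commit/abort
  PURGE_HISTORY,    // 3. purge must process all those rollback segments
  TRUNCATE_SPACE,   // 4. truncate and rebuild the undo tablespace
  REENABLE_RSEGS    // 5. re-enable the rollback segments for new transactions
};
```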
-
- 16 Jan, 2024 2 commits
-
-
Alexander Barkov authored
Adding GEOMETRY type user variables.
-
Yuchen Pei authored
Since 0930eb86, system table creation needed for spider init is delayed to the signal_ddl_recovery_done callback. Since it is part of the init, failure should result in spider deinit. We also remove the call to spider_init_system_tables() from spider_db_init(), as it was removed in the commit mentioned above and accidentally restored in a merge.
-
- 15 Jan, 2024 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
MDEV-32968 InnoDB fails to restore tablespace first page from doublewrite buffer when page is empty
- InnoDB fails to find the space id from page0 of the tablespace. In that case, InnoDB can use the doublewrite buffer to recover page0 and write it into the file.
- buf_dblwr_t::init_or_load_pages(): Loads only the pages which are valid (page lsn >= checkpoint). To do that, InnoDB has to open the redo log before the system tablespace and read the latest checkpoint information.
recv_dblwr_t::find_first_page():
1) Iterate the doublewrite buffer pages and find the 0th page.
2) Read the tablespace flags and space id from the 0th page.
3) Read the 1st, 2nd and 3rd pages from the tablespace file and compare the space id with the space id stored in the doublewrite buffer.
4) If it matches, then we can write into the file.
5) Return the space which matches the pages from the file.
SysTablespace::read_lsn_and_check_flags(): Remove the retry logic for validating the first page. After restoring the first page from the doublewrite buffer, assign the tablespace flags by reading the first page.
recv_recovery_read_max_checkpoint(): Reads the maximum checkpoint information from the log file.
recv_recovery_from_checkpoint_start(): Avoid reading the checkpoint header information from the log file.
Datafile::validate_first_page(): Throw an error in case first page validation fails.
-
- 14 Jan, 2024 1 commit
-
-
Tuukka Pasanen authored
Upstream Debian Sid, which will become Debian Trixie (13), has dropped NCurses version 5 and changed the dev package name to just libncurses-dev.
-
- 13 Jan, 2024 1 commit
-
-
Oleg Smirnov authored
Add INSERT ... SELECT to the list of commands that can be traced.
Approved by Sergei Petrunia (sergey@mariadb.com)
-
- 12 Jan, 2024 3 commits
-
-
Kristian Nielsen authored
The problem is that the test is skipped after sourcing include/master-slave.inc. This leaves the slave threads running after the test is skipped, causing a following test to fail during rpl setup. Also rename have_normal_bzip.inc to the more appropriate _zlib. Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
-
Anel Husakovic authored
- Closes PR #2839
- Usage of `Column_definition_fix_attributes()` suggested by Alexander Barkov - thanks bar, that is better than a hook in the server code (reverted 22f3ebe4)
- This method is called after parsing the data type:
  * in `CREATE/ALTER TABLE`
  * in SP: return data type, parameter data type, variable data type
- We want to disallow all these use cases of MYSQL_JSON.
- Reviewer: bar@mariadb.com cvicentiu@mariadb.org
-
Anel Husakovic authored
This reverts commit 22f3ebe4.
-
- 11 Jan, 2024 4 commits
-
-
Anel Husakovic authored
Closes PR #2839 Reviewer: cvicentiu@mariadb.org
-
Anel Husakovic authored
- We don't test `json` MySQL tables from `std_data` since the error `ER_TABLE_NEEDS_REBUILD` is raised. However MDEV-32235 will override this test after the merge, but we leave it to show the behavior and historical changes.
- Closes PR #2833 Reviewer: <cvicentiu@mariadb.org> <serg@mariadb.com>
-
Yuchen Pei authored
-
Yuchen Pei authored
All Spider tables are recorded in the system table mysql.spider_tables. Deleting a Spider table removes the corresponding rows from the system table, among other things. This patch makes it so that if Spider cannot find any record in the system table to delete for a given table, it correctly reports that no such Spider table exists.
-