- 27 Jun, 2022 2 commits
-
-
Vladislav Vaintroub authored
Fix a concurrency error: avoid accessing deleted memory when io_slots is resized. The deleted memory in this case was the vftable pointer in aiocb::m_internal_task. The fix avoids calling the dummy release function, via a flag in task_group.
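A minimal sketch of the idea behind this fix, with hypothetical member names (the real flag lives in tpool's task_group, and the release function is reached through the vftable of aiocb::m_internal_task): once the flag is cleared, task completion no longer calls through a pointer that may refer to freed memory.

```cpp
#include <atomic>
#include <functional>

// Hypothetical names; illustrative only, not the tpool implementation.
struct task_group_sketch
{
  std::atomic<bool> m_release_enabled{true};
  std::function<void()> m_release_cb;   // stands in for the virtual release()

  // Called while io_slots is being resized, before the old aiocb array
  // (and hence the vftable pointer of m_internal_task) is freed.
  void disable_release() { m_release_enabled.store(false, std::memory_order_release); }

  // Task-completion path: skip the callback once release is disabled, so we
  // never call through a pointer into deleted memory.
  void on_task_finished()
  {
    if (m_release_enabled.load(std::memory_order_acquire) && m_release_cb)
      m_release_cb();
  }
};
```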
-
Vladislav Vaintroub authored
Resize the read/write slots, and recreate the io_context (for Linux libaio)
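A rough sketch of recreating a Linux libaio context when the slot count changes; io_setup() and io_destroy() are the real libaio calls, while the surrounding function and variable names are illustrative rather than the tpool implementation.

```cpp
#include <libaio.h>
#include <cstring>
#include <stdexcept>

// Illustrative global context; libaio keeps one per submitter.
static io_context_t aio_ctx;

// Recreate the io_context so it can hold the new number of in-flight events.
void resize_linux_aio(unsigned new_slots)
{
  if (aio_ctx)
    io_destroy(aio_ctx);               // drop the context sized for the old slot count
  std::memset(&aio_ctx, 0, sizeof aio_ctx);
  if (io_setup(new_slots, &aio_ctx))   // create a context with room for new_slots events
    throw std::runtime_error("io_setup() failed");
}
```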
-
- 23 Jun, 2022 7 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
fil_name_process(): If the recovery of a tablespace was deferred, do invoke fil_ibd_load() even though the name in recv_spaces is not changing. This allows us to recover from a situation where there are many FILE_RENAME records, renaming a tablespace back and forth, and a FILE_MODIFY record that had been written by fil_names_clear(). Co-developed with: Thirunarayanan Balathandayuthapani
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Fixed the tpool timer implementation on POSIX. Prior to this patch, under some specific rare circumstances (concurrency-related), timer callback execution might be skipped.
-
- 22 Jun, 2022 7 commits
-
-
Marko Mäkelä authored
When attempting to recover a database with an incorrect encryption key, the unencrypted page contents should be expected to differ from what was written before recovery. Let us suppress some more messages. This caused intermittent failures, depending on when the latest log checkpoint was triggered.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Sergei Petrunia authored
Histogram_json_hb::range_selectivity() may return small negative numbers due to rounding errors in the histogram. Make sure the returned value is non-negative. Add an assert to catch negative values that are not small. (attempt #2)
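A minimal sketch of the clamping described above (the function name and the tolerance are assumptions, not the server code): tiny negative values caused by rounding are mapped to zero, while clearly negative values trip the assertion.

```cpp
#include <cassert>

// Clamp a selectivity estimate to the valid range. A tiny negative value is
// tolerated as rounding noise and mapped to 0; a large negative value would
// indicate a real bug and fails the assertion.
double clamp_selectivity(double sel)
{
  assert(sel >= -1e-6);     // hypothetical tolerance; the server picks its own bound
  return sel < 0 ? 0.0 : sel;
}
```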
-
Marko Mäkelä authored
trx_undo_rec_copy(): Return nullptr if the undo record is corrupted.
trx_undo_rec_get_undo_no(): Define inline with the declaration.
trx_purge_dummy_rec: Replaced with a -1 pointer.
row_undo_rec_get(), UndorecApplier::apply_undo_rec(): Check if trx_undo_rec_copy() returned nullptr.
trx_purge_get_next_rec(): Return nullptr upon encountering any corruption, to signal the end of purge.
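An illustrative sketch of the pattern, with assumed names and buffer handling rather than the InnoDB signatures: the copy helper reports corruption by returning nullptr, and the caller treats nullptr as a signal to stop (for example, to end purge).

```cpp
#include <cstdint>
#include <cstring>

// Return a copy of the record, or nullptr when the stored length looks
// corrupted, instead of crashing or copying garbage.
const uint8_t *copy_undo_rec(const uint8_t *rec, size_t len,
                             uint8_t *buf, size_t buf_capacity)
{
  if (len == 0 || len > buf_capacity)
    return nullptr;                     // corruption: let the caller decide
  std::memcpy(buf, rec, len);
  return buf;
}

// Caller side: nullptr means "stop", e.g. signal the end of purge.
bool apply_next_undo_rec(const uint8_t *rec, size_t len,
                         uint8_t *buf, size_t buf_capacity)
{
  const uint8_t *copy = copy_undo_rec(rec, len, buf, buf_capacity);
  if (!copy)
    return false;
  // ... apply the copied undo record ...
  return true;
}
```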
-
Marko Mäkelä authored
-
Marko Mäkelä authored
On GNU/Linux, even though the C11 aligned_alloc() appeared in GNU libc early on, some custom memory allocators did not implement it until recently. For example, before gperftools/gperftools@d406f2285390c402e824dd28e6992f7f890dcdf9 the free() in tcmalloc would fail to free memory that had been returned by aligned_alloc(), because the latter would map to the built-in allocator of libc. The Linux-specific memalign() has a similar interface and is safer to use, because it has been available for a longer time. For AddressSanitizer, we will use aligned_alloc() so that the constraint on the size can be enforced.
buf_tmp_reserve_compression_buf(): When HAVE_ALIGNED_ALLOC holds, round up the size to be an integer multiple of the alignment.
pfs_malloc(): In the unit test stub, round up the size to be an integer multiple of the alignment.
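A small sketch of the size round-up rule mentioned above, assuming a power-of-two alignment; the helper name and the static_assert example are illustrative, not the server code.

```cpp
#include <cstddef>

// Round a buffer size up to an integer multiple of the alignment, as C11
// aligned_alloc() requires (and AddressSanitizer enforces). The alignment is
// assumed to be a power of two, e.g. 4096 for page-aligned I/O buffers.
constexpr size_t round_to_alignment(size_t size, size_t alignment)
{
  return (size + alignment - 1) & ~(alignment - 1);
}

static_assert(round_to_alignment(10000, 4096) == 12288, "10000 rounds up to 3 pages");
```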
-
- 21 Jun, 2022 8 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Table_cache_instance: Define the structure aligned at the CPU cache line, and remove a pad[] data member. Krunal Bauskar reported this to improve performance on ARMv8.
aligned_malloc(): Wrapper for the Microsoft _aligned_malloc() and the ISO/IEC 9899:2011 <stdlib.h> aligned_alloc(). Note: The parameters are in the Microsoft order (size, alignment), opposite of aligned_alloc(alignment, size). Note: The standard defines that size must be an integer multiple of alignment. It is enforced by AddressSanitizer but not by GNU libc on Linux.
aligned_free(): Wrapper for the Microsoft _aligned_free() and the standard free().
HAVE_ALIGNED_ALLOC: A new test. Unfortunately, support for aligned_alloc() may still be missing on some platforms. We will fall back to posix_memalign() for those cases.
HAVE_MEMALIGN: Remove, along with any use of the nonstandard memalign().
PFS_ALIGNEMENT (sic): Removed; we will use CPU_LEVEL1_DCACHE_LINESIZE.
PFS_ALIGNED: Defined using the C++11 keyword alignas.
buf_pool_t::page_hash_table::create(), lock_sys_t::hash_table::create(), lock_sys_t::hash_table::resize(): Pad the allocation size to an integer multiple of the alignment.
Reviewed by: Vladislav Vaintroub
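A hedged sketch of such a wrapper, using the Microsoft parameter order (size, alignment) described above; the function names are placeholders rather than the server's aligned_malloc()/aligned_free(), and HAVE_ALIGNED_ALLOC is assumed to be provided by the build system.

```cpp
#include <cstdlib>
#ifdef _WIN32
# include <malloc.h>
#endif

// Parameters in the Microsoft order (size, alignment), as described above.
inline void *aligned_malloc_sketch(size_t size, size_t alignment)
{
#ifdef _WIN32
  return _aligned_malloc(size, alignment);
#elif defined HAVE_ALIGNED_ALLOC
  // C11: the size must be an integer multiple of the alignment, so pad it here.
  return aligned_alloc(alignment, (size + alignment - 1) / alignment * alignment);
#else
  void *ptr = nullptr;                  // fallback for platforms without aligned_alloc()
  return posix_memalign(&ptr, alignment, size) ? nullptr : ptr;
#endif
}

inline void aligned_free_sketch(void *ptr)
{
#ifdef _WIN32
  _aligned_free(ptr);
#else
  free(ptr);
#endif
}
```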
-
Marko Mäkelä authored
There was a race condition between log_checkpoint_low() and deleting or renaming data files. The scenario is as follows:
1. The buffer pool does not contain dirty pages.
2. A FILE_DELETE or FILE_RENAME record is written.
3. The checkpoint LSN will be moved ahead of the write of the record.
4. The server is killed before the file is actually renamed or deleted.
We will prevent this race condition by ensuring that a log checkpoint cannot occur between the durable write and the file system operation:
1. Durably write the FILE_DELETE or FILE_RENAME record.
2. Perform the file system operation.
3. Allow any log checkpoint to proceed.
mtr_t::commit_file(): Implement the DELETE or RENAME logic.
fil_delete_tablespace(): Delegate some of the logic to mtr_t::commit_file().
fil_space_t::rename(): Delegate some logic to mtr_t::commit_file().
Remove the debug injection point fil_rename_tablespace_failure_2 because we do test RENAME failures without any debug injection.
fil_name_write_rename_low(), fil_name_write_rename(): Remove.
Tested by Matthias Leich
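An illustrative sketch of the ordering, assuming a plain mutex stands in for the real checkpoint latch and printf stands in for writing the redo record; the actual logic lives in mtr_t::commit_file() and log_checkpoint_low().

```cpp
#include <cstdio>
#include <mutex>

// Hypothetical stand-ins; the real code uses log_sys, mtr_t and fil_space_t.
static std::mutex checkpoint_mutex;

static void write_file_rename_record_durably(const char *old_path,
                                             const char *new_path)
{
  std::printf("FILE_RENAME %s -> %s written and fsynced\n", old_path, new_path);
}

void commit_file_rename(const char *old_path, const char *new_path)
{
  // Hold the latch across steps 1 and 2, so no checkpoint can move its LSN
  // past the record before the file has actually been renamed.
  std::lock_guard<std::mutex> block_checkpoint(checkpoint_mutex);
  write_file_rename_record_durably(old_path, new_path); // 1. durable redo record
  std::rename(old_path, new_path);                      // 2. file system operation
}                                                       // 3. checkpoints may proceed again

void log_checkpoint_low_sketch()
{
  std::lock_guard<std::mutex> wait_for_pending_file_ops(checkpoint_mutex);
  // ... advance the checkpoint LSN ...
}
```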
-
Marko Mäkelä authored
buf_page_create_low(): Before retrying, release the exclusive page latch in order to prevent an infinite loop in buf_pool_t::corrupted_evict().
-
Marko Mäkelä authored
-
- 20 Jun, 2022 5 commits
-
-
Vladislav Vaintroub authored
Disable building the HashiCorp encryption plugin statically
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
recv_recover_page(): Correct a debug assertion to refer to recv_sys.lsn, which may be ahead of log_sys.lsn during non-final recovery batches. In commit 685d958e (MDEV-14425), when some redundant LSN fields were removed, log_sys.log.scanned_lsn had been replaced with a reference to log_sys.lsn instead of the more appropriate recv_sys.lsn.
recv_scan_log(): Remove a redundant call to log_sys.set_recovered_lsn(). It suffices to adjust log_sys.lsn after parsing (and before starting to apply) records for the last batch.
Note: Normally, log_sys.lsn must be the latest log sequence number. Before the final recovery batch, this may be safely violated, because log_write_up_to() will be a no-op. That function will be invoked by the buf_flush_page_cleaner thread to initiate writes of recovered pages.
-
Yusuke Abe authored
Reviewed by: Nayuta Yanagisawa
-
- 19 Jun, 2022 1 commit
-
-
Yuichiro Suzuki authored
Reviewed by: Nayuta Yanagisawa
-
- 18 Jun, 2022 1 commit
-
-
Brad Smith authored
-
- 17 Jun, 2022 5 commits
-
-
Daniel Black authored
Work around MDEV-28718 for now, but also optimize the iteration over information_schema.SYSTEM_VARIABLES. Add a test case to show that loading tzinfo data during bootstrap is desired functionality. Bug report thanks to Dan Lenski of AWS.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Sergei Golubchik authored
-
Yusuke Abe authored
Reviewed by: Nayuta Yanagisawa
-
- 16 Jun, 2022 4 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Sergei Golubchik authored
-