- 10 Jun, 2022 2 commits
-
-
Thirunarayanan Balathandayuthapani authored
Problem:
========
InnoDB FTS requests an FTS sync of the table once the FTS cache size reaches 1/10 of innodb_ft_cache_size, but fts_sync() releases the cache lock when writing a word. Because of this, InnoDB insert threads keep growing the FTS cache while the SYNC operation is running, so the SYNC takes more time to complete.

Solution:
=========
Remove the FTS sync operation (FTS_MSG_SYNC_TABLE) from the fts optimize background thread. Instead, let the user thread sync the InnoDB FTS cache when the cache size exceeds 512 KB. The user thread holds the cache lock while syncing, which ensures that other threads cannot add documents to the cache in the meantime. Removed FTS_MSG_SYNC_TABLE and its related functions because the message itself no longer exists. Removed fts_sync_index_check() and all related functions because other threads can no longer add documents while a cache operation is in progress.
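A minimal standalone sketch of the idea above, not the actual InnoDB code: the type fts_cache_t, the constant CACHE_SYNC_THRESHOLD and the helpers add_doc()/flush_to_index() are all hypothetical. The point is that the inserting (user) thread performs the sync itself and keeps the cache mutex for the whole operation, so no other thread can grow the cache while the sync runs.

    #include <mutex>
    #include <vector>
    #include <string>

    // Hypothetical stand-in for the InnoDB FTS cache.
    struct fts_cache_t {
      std::mutex lock;                 // "cache lock", held for the whole sync
      std::vector<std::string> docs;   // buffered documents
      size_t bytes = 0;                // current cache size in bytes

      static constexpr size_t CACHE_SYNC_THRESHOLD = 512 * 1024;  // 512 KiB

      void flush_to_index() {          // placeholder for writing words to the index
        docs.clear();
        bytes = 0;
      }

      // Called by the user (insert) thread. Because the mutex is held across
      // flush_to_index(), no concurrent add_doc() can grow the cache mid-sync,
      // unlike the old background FTS_MSG_SYNC_TABLE approach that released
      // the lock while writing each word.
      void add_doc(const std::string& doc) {
        std::lock_guard<std::mutex> guard(lock);
        docs.push_back(doc);
        bytes += doc.size();
        if (bytes > CACHE_SYNC_THRESHOLD)
          flush_to_index();
      }
    };

    int main() {
      fts_cache_t cache;
      cache.add_doc(std::string(600 * 1024, 'x'));  // exceeds the threshold, syncs inline
    }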
-
Marko Mäkelä authored
This fixes up commit 3d241eb9
-
- 09 Jun, 2022 6 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
PageConverter::update_header(): Remove an unnecessary write. The field that was originally called FIL_PAGE_FILE_FLUSH_LSN only made sense for the first page of the system tablespace (initially, for the first page of each file of the system tablespace). It never had any meaning for .ibd files, and it lost its original meaning in MariaDB Server 10.8.1 when commit b07920b6 (MDEV-27199) removed the ability to start without ib_logfile0. If the most significant 32 bits of the LSN are nonzero, this unnecessary write would write the wrong encryption key identifier to the page. The first page of any file is never encrypted, so normally those bytes should be 0 for any .ibd file.
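To illustrate why the removed write was harmful, here is a hedged sketch. It assumes, as in InnoDB, that the 8-byte FIL_PAGE_FILE_FLUSH_LSN field and the 4-byte encryption key identifier share the same page-header offset; the constants and helpers below are illustrative, not copied from the source.

    #include <cstdint>
    #include <cstdio>

    // Illustrative: the former FIL_PAGE_FILE_FLUSH_LSN field starts at byte 26
    // of the page header, and an encryption key_version would be read from the
    // first 4 bytes of that same field.
    constexpr size_t FIL_PAGE_FILE_FLUSH_LSN = 26;

    static void write_be64(uint8_t* p, uint64_t v) {
      for (int i = 7; i >= 0; i--) { p[i] = uint8_t(v); v >>= 8; }
    }
    static uint32_t read_be32(const uint8_t* p) {
      return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16 | uint32_t(p[2]) << 8 | p[3];
    }

    int main() {
      uint8_t page[64] = {};  // first page of an .ibd file: these bytes should stay 0

      uint64_t lsn = 0x123456789ULL;  // most significant 32 bits are nonzero (0x1)
      write_be64(page + FIL_PAGE_FILE_FLUSH_LSN, lsn);  // the removed, unnecessary write

      // Code that inspects the page now sees a bogus nonzero "key_version".
      printf("apparent key_version: %u\n", read_be32(page + FIL_PAGE_FILE_FLUSH_LSN));
    }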
-
Daniel Lewart authored
The zoneinfo directory is littered with files that are not timezone information. These frequently have extensions, which are not present in real timezone files. Also, leapseconds is frequently there and is not a timezone file.
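A hedged sketch of the kind of filter implied above (a hypothetical helper, not the actual mariadb-tzinfo-to-sql code): skip zoneinfo entries that have a file extension or are known non-timezone files such as leapseconds.

    #include <string>
    #include <cassert>

    // Heuristic from the commit message: real timezone files have no extension,
    // and files such as "leapseconds" are not timezone data at all.
    static bool looks_like_timezone_file(const std::string& name) {
      if (name == "leapseconds")
        return false;                                  // known non-timezone file
      return name.find('.') == std::string::npos;      // an extension => not a tz file
    }

    int main() {
      assert(looks_like_timezone_file("Europe/Helsinki"));
      assert(!looks_like_timezone_file("tzdata.zi"));
      assert(!looks_like_timezone_file("leapseconds"));
    }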
-
- 08 Jun, 2022 8 commits
-
-
Tingyao Nian authored
Continue the effort of a previous commit (PR#2114), which changed the man page titles from MySQL to MariaDB, to further update the man pages. Update the man page NAME sections to use mariadb-* instead of mysql* for MariaDB binaries that are drop-in replacements for MySQL equivalents, indicating that the commands are actually the MariaDB versions.

Before:
NAME
mysql_upgrade - check tables for MariaDB upgrade
...

After:
NAME
mariadb-upgrade - check tables for MariaDB upgrade (mysql_upgrade is now a symlink to mariadb-upgrade)
...

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Oleg Smirnov authored
ha_innobase::build_template may initialize m_prebuilt->idx_cond even if there is no valid pushed_idx_cond_keyno. This potentially problematic piece of code was found while working on MDEV-27366.
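A minimal sketch of the guard this observation suggests. MAX_KEY is used here, as in the server, as the "no key" sentinel; the surrounding types are simplified stand-ins, not the actual handler classes.

    #include <cstdio>

    constexpr unsigned MAX_KEY = 255;   // sentinel meaning "no pushed index condition"

    struct Item {};                     // stand-in for the pushed condition

    struct handler_like {
      unsigned pushed_idx_cond_keyno = MAX_KEY;
      Item*    pushed_idx_cond = nullptr;
    };

    struct prebuilt_like {
      Item* idx_cond = nullptr;
    };

    // Only copy the pushed index condition into the prebuilt struct when a
    // valid key number is set; otherwise leave idx_cond untouched.
    static void build_template(const handler_like& h, prebuilt_like& p) {
      if (h.pushed_idx_cond_keyno != MAX_KEY && h.pushed_idx_cond)
        p.idx_cond = h.pushed_idx_cond;
    }

    int main() {
      handler_like h;           // no pushed index condition
      prebuilt_like p;
      build_template(h, p);
      printf("idx_cond set: %d\n", p.idx_cond != nullptr);  // prints 0
    }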
-
Marko Mäkelä authored
A prominent remaining source of crashes on corrupted index pages is page directory corruption. A frequent caller of page_dir_find_owner_slot() is page_rec_get_prev(). Some of those calls can be replaced with simpler logic that is less prone to fail.

page_dir_find_owner_slot(), page_rec_get_prev(), page_rec_get_prev_const(), btr_pcur_move_to_prev(), btr_pcur_move_to_prev_on_page(), btr_cur_upd_rec_sys(), page_delete_rec_list_end(), rtr_page_copy_rec_list_end_no_locks(), rtr_page_copy_rec_list_start_no_locks(): Return an error code on failure.
fil_space_t::io(), buf_page_get_low(): Use DB_CORRUPTION for out-of-bounds page reads.
PageBulk::getSplitRec(), PageBulk::copyOut(): Simplify the code.
btr_validate_level(): Prevent some more CHECK TABLE crashes on corrupted pages.
btr_block_get(), btr_pcur_move_to_next_page(): Implement some checks that were previously only part of IndexPurge::next().
IndexPurge::next(): Use btr_pcur_move_to_next_page().
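The general pattern in this list is "report corruption to the caller instead of crashing". A hedged, self-contained sketch (the types below are simplified stand-ins, not the InnoDB ones):

    #include <cstdio>
    #include <vector>

    enum dberr_t { DB_SUCCESS, DB_CORRUPTION };

    struct page_t { std::vector<int> recs; };

    // Instead of asserting when the page directory is corrupted, return nullptr
    // and let the caller translate that into DB_CORRUPTION.
    static const int* page_rec_get_prev(const page_t& page, size_t pos) {
      if (pos == 0 || pos > page.recs.size())
        return nullptr;                       // previously: assertion failure / crash
      return &page.recs[pos - 1];
    }

    static dberr_t move_to_prev(const page_t& page, size_t pos, int& out) {
      const int* prev = page_rec_get_prev(page, pos);
      if (!prev)
        return DB_CORRUPTION;                 // propagate the error up the stack
      out = *prev;
      return DB_SUCCESS;
    }

    int main() {
      page_t page{{10, 20, 30}};
      int rec;
      printf("%d\n", move_to_prev(page, 0, rec));   // 1 (DB_CORRUPTION), no crash
      printf("%d\n", move_to_prev(page, 2, rec));   // 0 (DB_SUCCESS), rec == 20
    }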
-
Marko Mäkelä authored
MariaDB never supported this form of preemption via high-priority transactions. This error code should not have been added in the first place, in commit 2e814d47.
-
Daniel Black authored
and failing spider partition test. With some small datatype changes to the Linux/Solaris my_gethwaddr implementation, the hardware address of AIX can be returned. This is an important aspect in Spider (and UUID). Spider test change reviewed by Nayuta Yanagisawa. my_gethwaddr review by Monty in #2081.
-
Marko Mäkelä authored
fil_page_type_validate(): Remove. This debug check was mostly redundant and added little value to the code paths that deal with page_compressed or encrypted pages.
fil_get_page_type_name(): Remove; unused function.
fil_space_decrypt(): Return an error if the page is not supposed to be encrypted. It is possible that an unencrypted page contains a nonzero key_version field even though it is not supposed to be encrypted. Previously we would crash in such a situation.
buf_page_decrypt_after_read(): Simplify the code. Remove an unnecessary error message about temporary tablespace corruption. This is where we would usually invoke fil_space_decrypt().
-
Marko Mäkelä authored
Even after commit 0b47c126 there are a few ib::fatal() calls in non-debug code that can be replaced easily.

btr_page_reorganize_low(): On size invariant violation, return an error code instead of crashing.
btr_check_blob_fil_page_type(): On an invalid page type, report an error but do not crash.
btr_copy_blob_prefix(): Truncate the output if a page type is invalid.
dict_load_foreign_cols(): On an error, return DB_CORRUPTION instead of crashing.
fil_space_decrypt_full_crc32(), fil_space_decrypt_for_non_full_checksum(): On error, return DB_DECRYPTION_FAILED instead of crashing.
fil_set_max_space_id_if_bigger(): Replace ib::fatal() with an equivalent ut_a() assertion.
-
chansuke authored
-
- 07 Jun, 2022 15 commits
-
-
Monty authored
This patch fixes the following issues in Aria error reporting in case of read errors and crashed tables:
- Added the table name to most error messages, including in case of read errors or when encrypting/decrypting a table. The format of error messages was changed slightly to accommodate logging of errors from lower-level routines.
- If we got a read error from storage (hard disk, SSD, S3, etc.) we only reported 'table is crashed'. Now the error number from the storage is reported.
- Added checking of read failures from records_in_range().
- Calls to ma_set_fatal_error() did not inform the SQL level of errors (to not spam the user with multiple error messages). Now the first error message and any fatal error messages are reported to the user.
-
Monty authored
- Print correct server version for header
- Updated version number
- One can now specify file name last (without -f)
-
Michael Widenius authored
Main-author: Sergei Petrunia
-
Michael Widenius authored
Part of: MDEV-28073 Slow query performance in MariaDB when using many tables

s->key_dependent has a list of tables that are compared with key fields in the current table. However, it does not take into account whether a key field could be resolved by another table. This is because MariaDB expands 'join_tab->keyuse' to include all generated comparisons. For example:

SELECT * from t1,t2,t3 where t1.key=t2.key and t2.key=t3.key

In this case keyuse for t1 includes t2.key and t3.key, and key_dependent contains 't2.map | t3.map'. If we in best_extension_by_limited_search() consider t2,t1 then t1's key is fully defined, but we cannot prune any plans as s->key_dependent indicates that t3 is still needed.

Fixed by calculating in best_access_path() the current key_dependent map of tables that is needed to satisfy all keys. This allows us to prune more bad plans earlier, as soon as all keys can be used. We also set key_dependent to 0 if we found an EQ_REF key, as this is an optimal key for the table and there is no reason to check more keys.
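A hedged illustration of the bitmap bookkeeping described above. table_map here is just a plain bitmask and the helper names are made up for the example; the point is that once any of the alternative tables that can resolve a key is already in the join prefix, that key no longer keeps other tables in the key_dependent map.

    #include <cstdint>
    #include <cstdio>

    using table_map = uint64_t;              // one bit per table, as in the optimizer

    // Hypothetical per-table data: each entry in keyuse_alternatives is a set of
    // tables any one of which can supply the key value for the same key part.
    struct join_tab_like {
      table_map keyuse_alternatives[2];      // e.g. t1.key = t2.key, t1.key = t3.key
      int n_keyuse;
    };

    // Recompute the tables still needed to resolve the key, given the tables
    // already in the join prefix: if any alternative is already in the prefix,
    // the key is fully defined and contributes nothing to key_dependent.
    static table_map current_key_dependent(const join_tab_like& tab, table_map prefix) {
      table_map all = 0;
      for (int i = 0; i < tab.n_keyuse; i++) {
        if (tab.keyuse_alternatives[i] & prefix)
          return 0;                          // key already satisfied by the prefix
        all |= tab.keyuse_alternatives[i];
      }
      return all;
    }

    int main() {
      const table_map T2 = 1 << 2, T3 = 1 << 3;
      join_tab_like t1{{T2, T3}, 2};         // expanded keyuse for t1

      // Old behaviour: key_dependent = T2 | T3, so the prefix {t2} cannot prune.
      // New behaviour: with t2 in the prefix, nothing more is needed for t1's key.
      printf("still needed: %llx\n",
             (unsigned long long) current_key_dependent(t1, T2));   // prints 0
    }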
-
Michael Widenius authored
best_extension_by_limited_search() assumes that tables should be sorted according to size to be able to quickly disregard bad plans. However, the current usage of swap_variables() changes the table order to an unsorted one for the next recursive call. This breaks the assumption and causes performance issues when using many tables (we have to examine many more plans).

This patch fixes this by ensuring that the original table order is kept for the not-yet-used tables when best_extension_by_limited_search() is called. This was done by always calling swap_variables() for each table and restoring the original table order at exit.

Some tests changed:
- In a majority of the tests the change was that two "identical tables" were swapped and the optimizer is now using the first/smaller table.
- In a few tests the table order was changed. The new plan looks identical or slightly better than the original.
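A minimal sketch of the intent described above, not the actual best_extension_by_limited_search() (the real patch works with swap_variables(); std::rotate is used here only to make the "keep the remaining tables in their original, size-sorted order" invariant obvious):

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Bring the candidate to the current position while keeping all other
    // remaining tables in their original, size-sorted order, then undo the move
    // after the recursive call so the caller sees the original order again.
    static void enumerate(std::vector<int>& tables, size_t idx) {
      if (idx == tables.size()) {
        for (int t : tables) printf("t%d ", t);
        printf("\n");
        return;
      }
      for (size_t i = idx; i < tables.size(); i++) {
        std::rotate(tables.begin() + idx, tables.begin() + i, tables.begin() + i + 1);
        enumerate(tables, idx + 1);          // remaining tables are still sorted
        std::rotate(tables.begin() + idx, tables.begin() + idx + 1,
                    tables.begin() + i + 1); // restore the original order
      }
    }

    int main() {
      std::vector<int> tables = {1, 2, 3};   // already sorted by table size
      enumerate(tables, 0);
    }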
-
Sergei Petrunia authored
(Try 2) The code that updates semi-join optimization state for a join order prefix had several bugs. The visible effect was bad optimization for FirstMatch or LooseScan strategies: they either weren't considered when they should have been, or were considered when they shouldn't have been.

In order to hit the bug, the optimizer needs to consider several different join prefixes in a certain order. Queries with "obvious" query plans which prune all join orders except one are not affected.

Internally, the bugs in updates of semi-join state were:
1. restore_prev_sj_state() assumed that "we assume remaining_tables doesnt contain @tab", which wasn't true.
2. Another bug in this function: it did remove bits from join->cur_sj_inner_tables but never added them.
3. greedy_search() adds tables into the join prefix but neglects to update the semi-join optimization state. (It does update the nested outer join state, see this call: check_interleaving_with_nj(best_table), but there is no matching call to update the semi-join state. This wasn't visible because most of the state is in the POSITION structure, which is updated. But there is also state in JOIN, too.)

The patch:
- Fixes all of the above.
- Adds JOIN::dbug_verify_sj_inner_tables() which is used to verify the state is correct at every step.
- Renames advance_sj_state() to optimize_semi_joins().
- Introduces update_sj_state() which ideally should have been called "advance_sj_state", but I didn't reuse the name to not create confusion.
-
Monty authored
Main fix was replacing read_time+= with read_time=. I also updated the 'identical' code in optimize_straight_join() and best_extension_by_limited_search() to make them easier to compare. Reviewer: Sergei Petrunia <sergey@mariadb.com>
-
Monty authored
-
Sergei Golubchik authored
otherwise subsequent tests that crash the server will see them corrupted
-
Sergei Golubchik authored
-
Sergei Petrunia authored
(Try 2) (Cherry-pick back into 10.3) The code that updates semi-join optimization state for a join order prefix had several bugs. The visible effect was bad optimization for FirstMatch or LooseScan strategies: they either weren't considered when they should have been, or were considered when they shouldn't have been.

In order to hit the bug, the optimizer needs to consider several different join prefixes in a certain order. Queries with "obvious" query plans which prune all join orders except one are not affected.

Internally, the bugs in updates of semi-join state were:
1. restore_prev_sj_state() assumed that "we assume remaining_tables doesnt contain @tab", which wasn't true.
2. Another bug in this function: it did remove bits from join->cur_sj_inner_tables but never added them.
3. greedy_search() adds tables into the join prefix but neglects to update the semi-join optimization state. (It does update the nested outer join state, see this call: check_interleaving_with_nj(best_table), but there is no matching call to update the semi-join state. This wasn't visible because most of the state is in the POSITION structure, which is updated. But there is also state in JOIN, too.)

The patch:
- Fixes all of the above.
- Adds JOIN::dbug_verify_sj_inner_tables() which is used to verify the state is correct at every step.
- Renames advance_sj_state() to optimize_semi_joins().
- Introduces update_sj_state() which ideally should have been called "advance_sj_state", but I didn't reuse the name to not create confusion.
-
Marko Mäkelä authored
In any files that were created in the innodb_checksum_algorithm=full_crc32 format (commit c0f47a4a), any unused data fields will have been zero-initialized (commit 3926673c).
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
We do not want subsequent test executions to fail due to messages about mysql.help_% tables needing recovery. Thanks to Sergei Golubchik for noticing this.
-
- 06 Jun, 2022 9 commits
-
-
Monty authored
-
Monty authored
Reading the last page of a table with the "dynamic page" format would generate an error when reading after the last row. This was never noticed, as when using Aria as a handler any error messages generated by _ma_set_fatal_error() were ignored.
-
Monty authored
If we got a read error from S3, we did not signal the threads waiting to read blocks in the read-range, which caused these threads to hang forever. There is still one issue left: the S3 error will be logged as a 'table is crashed' error instead of the IO error. This will be fixed by a larger patch in 10.6 that improves error reporting from Aria. There is no test case for this as it is very hard to repeat. I tested this with a patch that causes random read failures in S3 and a multi-threaded Perl test with 8 threads doing reads. This patch fixes all found hangs.
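A hedged sketch of the underlying rule (generic condition-variable code, not the Aria/S3 implementation): even when the read fails, the reader must record the error and wake up everyone waiting on the block, otherwise the waiters sleep forever.

    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <cstdio>

    struct block_read {
      std::mutex m;
      std::condition_variable cv;
      bool done = false;      // read finished (successfully or not)
      bool failed = false;    // set on an I/O error

      void finish(bool ok) {                 // called by the thread doing the read
        {
          std::lock_guard<std::mutex> g(m);
          done = true;
          failed = !ok;
        }
        cv.notify_all();      // the bug was the missing wake-up on the error path
      }

      bool wait() {                          // called by threads needing the block
        std::unique_lock<std::mutex> g(m);
        cv.wait(g, [this] { return done; });
        return !failed;
      }
    };

    int main() {
      block_read blk;
      std::thread waiter([&] { printf("read ok: %d\n", blk.wait()); });
      blk.finish(false);      // simulate an S3 read error; the waiter still wakes up
      waiter.join();
    }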
-
Monty authored
DBUG_PUSH_EMPTY is used by thr_mutex.cc. If there are 4G of DBUG_PUSH_EMPTY calls, then DBUG_POP_EMPTY will cause a crash when DBUGCloseFile() tries to free an object that was never allocated.
-
Marko Mäkelä authored
We will introduce an optional log record OPT_PAGE_CHECKSUM for recording page checksums, so that more inconsistencies on crash recovery may be caught.

mtr_t::page_checksum(const buf_page_t&): Write OPT_PAGE_CHECKSUM (currently not for ROW_FORMAT=COMPRESSED pages).
mtr_t::do_write(): Write OPT_PAGE_CHECKSUM records for all pages (currently, in debug builds only).
mtr_t::is_logged(): Return whether log should be written.
mtr_t::set_log_mode_sub(const mtr_t&): Set the logging mode of a sub-mini-transaction when another mini-transaction is holding latches on some modified pages. When creating or freeing BLOB pages, we may only write OPT_PAGE_CHECKSUM records in the main mini-transaction, after all changes have been written to the log.
MTR_LOG_SUB: Log mode for a sub-mini-transaction.
mtr_t::free(): Define non-inline, and invoke MarkFreed.
MarkFreed: For any matching page in the mini-transaction log, change the first entry to say MTR_MEMO_PAGE_X_MODIFY and any subsequent entries to MTR_MEMO_PAGE_X_FIX.
FindModified: Simplify a condition. MTR_MEMO_MODIFY can only be set if MTR_MEMO_PAGE_X_FIX or MTR_MEMO_PAGE_SX_FIX are set.
FindBlockX: Consider also MTR_MEMO_PAGE_X_MODIFY.
recv_sys_t::parse(): Store OPT_PAGE_CHECKSUM records.
log_phys_t::apply(): Validate OPT_PAGE_CHECKSUM records.
log_phys_t::page_checksum(): Validate an OPT_PAGE_CHECKSUM record.

Tested by: Matthias Leich
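A self-contained sketch of the concept, not the real log format: the checksum function and the record layout below are placeholders. While writing redo log for a page, also emit an optional checksum record, and during recovery verify the recovered page against it.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Placeholder checksum; the real implementation would use the InnoDB
    // page checksum algorithm.
    static uint32_t page_checksum(const std::vector<uint8_t>& page) {
      uint32_t h = 2166136261u;                 // FNV-1a
      for (uint8_t b : page) { h ^= b; h *= 16777619u; }
      return h;
    }

    struct opt_page_checksum_rec {              // optional redo log record
      uint32_t page_no;
      uint32_t checksum;
    };

    int main() {
      std::vector<uint8_t> page(16384, 0);
      page[100] = 42;                           // some mini-transaction change

      // Analogous to mtr_t::do_write() in debug builds: append an
      // OPT_PAGE_CHECKSUM record after logging the change.
      opt_page_checksum_rec rec{7, page_checksum(page)};

      // Analogous to log_phys_t::apply() during crash recovery: after replaying
      // the log for the page, a mismatch indicates an inconsistency that would
      // previously have gone unnoticed.
      bool ok = page_checksum(page) == rec.checksum;
      printf("page %u checksum %s\n", rec.page_no, ok ? "matches" : "MISMATCH");
    }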
-
Marko Mäkelä authored
--debug-dbug=d,intermittent_read_failure is effective after the database has been started up. --debug-dbug=d,intermittent_recovery_failure is always effective, including during recovery.
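A hedged sketch of how such debug keywords are typically consumed (a simplified stand-in for the DBUG machinery; the real server uses DBUG_EXECUTE_IF with the keyword names above):

    #include <cstdio>
    #include <set>
    #include <string>

    // Simplified stand-in for --debug-dbug=d,keyword1,keyword2 parsing.
    static std::set<std::string> active_keywords;

    static bool dbug_keyword(const char* name) {      // roughly DBUG_EXECUTE_IF
      return active_keywords.count(name) != 0;
    }

    static int read_page(bool in_recovery) {
      // Inject read failures only after startup for one keyword, always for the other.
      if (!in_recovery && dbug_keyword("intermittent_read_failure"))
        return -1;
      if (dbug_keyword("intermittent_recovery_failure"))
        return -1;
      return 0;                                       // pretend the read succeeded
    }

    int main() {
      active_keywords = {"intermittent_read_failure"};
      printf("during recovery: %d\n", read_page(true));    // 0: not injected
      printf("after startup:   %d\n", read_page(false));   // -1: injected failure
    }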
-
Marko Mäkelä authored
The approach to handling corruption that was chosen by Oracle in commit 177d8b0c is not really useful. Not only did it actually fail to prevent InnoDB from crashing, but it is making things worse by blocking attempts to rescue data from or rebuild a partially readable table.

We will try to prevent crashes in a different way: by propagating errors up the call stack. We will never mark the clustered index persistently corrupted, so that data recovery may be attempted by reading from the table, or by rebuilding the table. This should also fix MDEV-13680 (crash on btr_page_alloc() failure); it was extensively tested with innodb_file_per_table=0 and a non-autoextend system tablespace.

We should now avoid crashes in many cases, such as when a page cannot be read or allocated, or an inconsistency is detected when attempting to update multiple pages. We will not crash on double-free, such as on the recovery of DDL in system tablespace in case something was corrupted. Crashes on corrupted data are still possible. The fault injection mechanism that is introduced in the subsequent commit may help catch more of them.

buf_page_import_corrupt_failure: Remove the fault injection, and instead corrupt some pages using Perl code in the tests.
btr_cur_pessimistic_insert(): Always reserve extents (except for the change buffer), in order to prevent a subsequent allocation failure.
btr_pcur_open_at_rnd_pos(): Merged to the only caller ibuf_merge_pages().
btr_assert_not_corrupted(), btr_corruption_report(): Remove. Similar checks are already part of btr_block_get().
FSEG_MAGIC_N_BYTES: Replaces FSEG_MAGIC_N_VALUE.
dict_hdr_get(), trx_rsegf_get_new(), trx_undo_page_get(), trx_undo_page_get_s_latched(): Replaced with error-checking calls.
trx_rseg_t::get(mtr_t*): Replaces trx_rsegf_get().
trx_rseg_header_create(): Let the caller update the TRX_SYS page if needed.
trx_sys_create_sys_pages(): Merged with trx_sysf_create().
dict_check_tablespaces_and_store_max_id(): Do not access DICT_HDR_MAX_SPACE_ID, because it was already recovered in dict_boot(). Merge dict_check_sys_tables() with this function.
dir_pathname(): Replaces os_file_make_new_pathname().
row_undo_ins_remove_sec(): Do not modify the undo page by adding a terminating NUL byte to the record.
btr_decryption_failed(): Report decryption failures.
dict_set_corrupted_by_space(), dict_set_encrypted_by_space(), dict_set_corrupted_index_cache_only(): Remove.
dict_set_corrupted(): Remove the constant parameter dict_locked=false. Never flag the clustered index corrupted in SYS_INDEXES, because that would deny further access to the table. It might be possible to repair the table by executing ALTER TABLE or OPTIMIZE TABLE, in case no B-tree leaf page is corrupted.
dict_table_skip_corrupt_index(), dict_table_next_uncorrupted_index(), row_purge_skip_uncommitted_virtual_index(): Remove, and refactor the callers to read dict_index_t::type only once.
dict_table_is_corrupted(): Remove.
dict_index_t::is_btree(): Determine if the index is a valid B-tree.
BUF_GET_NO_LATCH, BUF_EVICT_IF_IN_POOL: Remove.
UNIV_BTR_DEBUG: Remove. Any inconsistency will no longer trigger assertion failures, but error codes being returned.
buf_corrupt_page_release(): Replaced with a direct call to buf_pool.corrupted_evict().
fil_invalid_page_access_msg(): Never crash on an invalid read; let the caller of buf_page_get_gen() decide.
btr_pcur_t::restore_position(): Propagate failure status to the caller by returning CORRUPTED.
opt_search_plan_for_table(): Simplify the code.
row_purge_del_mark(), row_purge_upd_exist_or_extern_func(), row_undo_ins_remove_sec_rec(), row_undo_mod_upd_del_sec(), row_undo_mod_del_mark_sec(): Avoid mem_heap_create()/mem_heap_free() when no secondary indexes exist.
row_undo_mod_upd_exist_sec(): Simplify the code.
row_upd_clust_step(), dict_load_table_one(): Return DB_TABLE_CORRUPT if the clustered index (and therefore the table) is corrupted, similar to what we do in row_insert_for_mysql().
fut_get_ptr(): Replace with buf_page_get_gen() calls.
buf_page_get_gen(): Return nullptr and *err=DB_CORRUPTION if the page is marked as freed. For other modes than BUF_GET_POSSIBLY_FREED or BUF_PEEK_IF_IN_POOL this will trigger a debug assertion failure. For BUF_GET_POSSIBLY_FREED, we will return nullptr for freed pages, so that the callers can be simplified. The purge of transaction history will be a new user of BUF_GET_POSSIBLY_FREED, to avoid crashes on corrupted data.
buf_page_get_low(): Never crash on a corrupted page, but simply return nullptr.
fseg_page_is_allocated(): Replaces fseg_page_is_free().
fts_drop_common_tables(): Return an error if the transaction was rolled back.
fil_space_t::set_corrupted(): Report a tablespace as corrupted if it was not reported already.
fil_space_t::io(): Invoke fil_space_t::set_corrupted() to report out-of-bounds page access or other errors.
Clean up mtr_t::page_lock().
buf_page_get_low(): Validate the page identifier (to check for recently read corrupted pages) after acquiring the page latch.
buf_page_t::read_complete(): Flag uninitialized (all-zero) pages with DB_FAIL. Return DB_PAGE_CORRUPTED on page number mismatch.
mtr_t::defer_drop_ahi(): Renamed from mtr_defer_drop_ahi().
recv_sys_t::free_corrupted_page(): Only set_corrupt_fs() if any log records exist for the page. We do not mind if read-ahead produces corrupted (or all-zero) pages that were not actually needed during recovery.
recv_recover_page(): Return whether the operation succeeded.
recv_sys_t::recover_low(): Simplify the logic. Check for recovery error.

Thanks to Matthias Leich for testing this extensively and to the authors of https://rr-project.org for making it easy to diagnose and fix any failures that were found during the testing.
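The recurring pattern in the list above is replacing fatal assertions with dberr_t return values that callers must check. A compact, self-contained sketch (simplified types, not the InnoDB API) of how such an error is propagated through more than one layer instead of crashing:

    #include <cstdio>

    enum dberr_t { DB_SUCCESS, DB_CORRUPTION, DB_TABLE_CORRUPT };

    struct buf_block_t { bool corrupted; };

    // In the spirit of buf_page_get_gen(): on corruption, return nullptr and
    // set *err instead of calling ib::fatal().
    static buf_block_t* page_get(buf_block_t& blk, dberr_t* err) {
      if (blk.corrupted) { *err = DB_CORRUPTION; return nullptr; }
      *err = DB_SUCCESS;
      return &blk;
    }

    // A caller in the spirit of row_upd_clust_step(): translate the low-level
    // error into one the SQL layer can report, and keep unwinding.
    static dberr_t update_row(buf_block_t& clust_index_page) {
      dberr_t err;
      if (!page_get(clust_index_page, &err))
        return DB_TABLE_CORRUPT;
      return DB_SUCCESS;
    }

    int main() {
      buf_block_t bad{true};
      printf("update_row: %d\n", update_row(bad));   // 2 (DB_TABLE_CORRUPT), no crash
    }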
-
Marko Mäkelä authored
The types btr_latch_mode and mtr_memo_type_t are partly derived from rw_lock_type_t. Despite that, some code for converting between them uses conditions instead of bitwise arithmetic. Let us define btr_latch_mode in such a way that more conversions to rw_lock_type_t are possible by a bitwise AND. Some SPATIAL INDEX code that assumed !(BTR_MODIFY_TREE & BTR_MODIFY_LEAF) was adjusted.
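A hedged sketch of the encoding idea; the numeric values below are invented for the example, not taken from the source. If the low bits of each btr_latch_mode value equal the corresponding rw_lock_type_t value, the conversion becomes a single bitwise AND instead of a chain of conditions.

    #include <cstdio>

    // Invented values, for illustration only.
    enum rw_lock_type_t { RW_S_LATCH = 1, RW_X_LATCH = 2, RW_SX_LATCH = 4, RW_NO_LATCH = 8 };

    enum btr_latch_mode {
      BTR_SEARCH_LEAF = RW_S_LATCH,          // low bits carry the rw-lock type
      BTR_MODIFY_LEAF = RW_X_LATCH,
      BTR_MODIFY_TREE = 32 | RW_X_LATCH      // extra flag bits live above the low bits
    };

    // Conversion by bitwise arithmetic instead of if/else chains.
    static rw_lock_type_t rw_latch_of(btr_latch_mode m) {
      return rw_lock_type_t(m & (RW_S_LATCH | RW_X_LATCH | RW_SX_LATCH | RW_NO_LATCH));
    }

    int main() {
      printf("%d %d %d\n",
             rw_latch_of(BTR_SEARCH_LEAF),   // 1
             rw_latch_of(BTR_MODIFY_LEAF),   // 2
             rw_latch_of(BTR_MODIFY_TREE));  // 2
    }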
-
Marko Mäkelä authored
fil_space_t::is_freed(): Check if a page is in freed_ranges. fil_space_t::flush_freed(): Replaces buf_flush_freed_pages().
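A small sketch of what an is_freed() lookup over a set of freed page ranges can look like (generic code; the real fil_space_t uses its own range-set type):

    #include <cstdint>
    #include <cstdio>
    #include <map>

    // Maps the first page of a freed range to its last page (inclusive).
    struct range_set {
      std::map<uint32_t, uint32_t> ranges;

      void add(uint32_t first, uint32_t last) { ranges[first] = last; }

      bool contains(uint32_t page_no) const {
        auto it = ranges.upper_bound(page_no);     // first range starting after page_no
        if (it == ranges.begin()) return false;
        --it;                                      // candidate range starting at or before
        return page_no <= it->second;
      }
    };

    int main() {
      range_set freed_ranges;
      freed_ranges.add(100, 163);                  // pages 100..163 were freed
      printf("%d %d\n", freed_ranges.contains(150), freed_ranges.contains(200));  // 1 0
    }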
-