- 01 Dec, 2020 1 commit
-
-
Vlad Lesin authored
The new option --log-innodb-page-corruption is introduced. When this option is set, the backup is not interrupted when a corrupted InnoDB page is detected. Instead, all corrupted pages found are logged in the innodb_corrupted_pages file in the backup directory, and the backup finishes with an error. For incremental backups, corrupted pages are also copied to the .delta file, because we cannot do an LSN check for such pages during backup; innodb_corrupted_pages will also be created in the incremental backup directory.

During --prepare, the corrupted pages list is read from the file just after the redo log is applied, and each page from the list is checked for whether it is allocated in its tablespace. If it is not allocated, it is zeroed out, flushed to the tablespace, and removed from the list. If all pages are removed from the list, --prepare finishes successfully and the innodb_corrupted_pages file is removed from the backup directory. Otherwise --prepare finishes with an error message, and innodb_corrupted_pages contains the list of pages which were detected as corrupted during backup and are allocated in their tablespaces, which means the backup directory contains corrupted InnoDB pages and the backup cannot be considered consistent.

For incremental --prepare, corrupted pages from the .delta files are applied to the base backup, innodb_corrupted_pages is read from both the base and incremental directories, and the corrupted pages list is processed in the same way as for a full --prepare. The innodb_corrupted_pages file is modified or removed only in the base directory. If DDL happens during the backup, it is also processed at the end of the backup so that innodb_corrupted_pages contains the correct tablespace names.
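A minimal sketch of the --prepare pass over the corrupted pages list, assuming hypothetical helpers is_page_allocated() and zero_out_page() (the actual mariabackup code differs in structure and naming):

```cpp
#include <cstdint>
#include <map>
#include <set>

// space_id -> set of corrupted page numbers, as read from innodb_corrupted_pages.
using page_list = std::map<uint32_t, std::set<uint32_t>>;

// Hypothetical helpers: consult the allocation bitmap of the tablespace,
// and write a zero-filled page to it.
bool is_page_allocated(uint32_t space_id, uint32_t page_no);
void zero_out_page(uint32_t space_id, uint32_t page_no);

// Returns true if every corrupted page turned out to be unallocated,
// i.e. the prepared backup can be considered consistent.
bool process_corrupted_pages(page_list &pages)
{
  for (auto space = pages.begin(); space != pages.end(); )
  {
    auto &page_nos = space->second;
    for (auto p = page_nos.begin(); p != page_nos.end(); )
    {
      if (!is_page_allocated(space->first, *p))
      {
        // Unallocated page: safe to zero out, flush, and drop from the list.
        zero_out_page(space->first, *p);
        p = page_nos.erase(p);
      }
      else
        ++p; // Still allocated: genuinely corrupted, keep it in the list.
    }
    if (page_nos.empty())
      space = pages.erase(space);
    else
      ++space;
  }
  // Empty list: innodb_corrupted_pages is removed and --prepare succeeds.
  return pages.empty();
}
```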
-
- 30 Nov, 2020 6 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
For some reason, InnoDB debug tests on Windows fail due to rw_lock_t if the function call overhead for some os_thread_ code is removed. This change worked fine on Windows in combination with MDEV-24142.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Let us always base srw_lock on our own std::atomic<uint32_t> based rw_lock. In this way, we can extend the locks in a portable way across all platforms. We will use futex system calls where available: Linux, OpenBSD, and Microsoft Windows. Elsewhere, we will emulate futex with a mutex and a condition variable. Thanks to Daniel Black for testing this on OpenBSD.
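Where no futex is available, futex-style wait and wake can be emulated with a mutex and a condition variable. A minimal sketch of that idea (not the actual srw_lock code; a real implementation would pair each lock word with its own mutex and condition variable, or hash addresses into buckets):

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <mutex>

static std::mutex m;
static std::condition_variable cv;

// Block while *word still holds the value 'old'.
void futex_wait(std::atomic<uint32_t> *word, uint32_t old)
{
  std::unique_lock<std::mutex> lk(m);
  // Re-check under the mutex so that a concurrent wake cannot be lost.
  while (word->load(std::memory_order_relaxed) == old)
    cv.wait(lk);
}

// Wake all waiters; each re-checks the word and sleeps again if needed.
void futex_wake(std::atomic<uint32_t> *word)
{
  (void) word; // A real implementation would map the address to a wait bucket.
  std::lock_guard<std::mutex> lk(m);
  cv.notify_all();
}
```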
-
Marko Mäkelä authored
-
Marko Mäkelä authored
os_thread_pf(): Remove.

os_thread_eq(), os_thread_yield(), os_thread_get_curr_id(): Define as macros.

ut_print_timestamp(), ut_sprintf_timestamp(): Simplify.
-
- 28 Nov, 2020 1 commit
-
-
Marko Mäkelä authored
-
- 27 Nov, 2020 1 commit
-
-
Igor Babaev authored
When set operations are executed in a pipeline using only one temporary table, additional scans of the intermediate results may be needed. These scans are performed with the rnd_next() handler function, which might leave the record buffers used for the temporary table in a state unsuitable for subsequent writes into the table. For example, this happens with the Aria engine when the last call of rnd_next() encounters only deleted records. Thus, a cleanup of the record buffers is needed after each such scan of the temporary table.

Approved by Oleksandr Byelkin <sanja@mariadb.com>
-
- 26 Nov, 2020 7 commits
-
-
Monty authored
-
Monty authored
-
Monty authored
This change is needed in 10.5 to avoid extra malloc calls in val_str(). In 10.6 it is not needed anymore, but the extra +1 byte does not hurt much.
-
Monty authored
- Fold long comment lines and update comments
- Move one private function in class Item_func_rand to be among the other private functions
-
Monty authored
This is for Oracle compatibility. In Oracle, ENABLED is the default and simply ensures that the NOT NULL constraints will be tested, which is also the default in MariaDB.
-
Monty authored
-
Marko Mäkelä authored
-
- 25 Nov, 2020 11 commits
-
-
Marko Mäkelä authored
A side effect of MDEV-16264 is that a large number of threads will be created at server startup, only to be destroyed after a minute or two. One source of such thread creation is srv_start_periodic_timer(). InnoDB is creating 3 periodic tasks: srv_master_callback (1Hz), srv_error_monitor_task (1Hz), and srv_monitor_task (0.2Hz).

It appears that we can merge srv_error_monitor_task and srv_monitor_task and have them invoked 4 times per minute (every 15 seconds). This will affect our ability to enforce innodb_fatal_semaphore_wait_threshold and some computations around BUF_LRU_STAT_N_INTERVAL.

We could remove srv_master_callback along with the DROP TABLE queue at some point in the future. We must keep it independent of the innodb_fatal_semaphore_wait_threshold detection, because the background DROP TABLE queue could get stuck due to dict_sys being locked by another thread. For now, srv_master_callback must be invoked once per second, so that innodb_flush_log_at_timeout=1 can work.

BUF_LRU_STAT_N_INTERVAL: Reduce the precision and extend the time from 50*1 second to 4*15 seconds.

srv_error_monitor_timer: Remove.

MAX_MUTEX_NOWAIT: Increase from 20*1 second to 2*15 seconds.

srv_refresh_innodb_monitor_stats(): Avoid a repeated call to time(NULL). Change the interval to less than 60 seconds.

srv_monitor(): Renamed from srv_monitor_task.

srv_monitor_task(): Renamed from srv_error_monitor_task(). Invoked only once in 15 seconds. Invoke also srv_monitor(). Increase the fatal_cnt threshold from 10*1 second to 1*15 seconds.

sync_array_print_long_waits_low(): Invoke time(NULL) only once. Remove a bogus message about printouts for 30 seconds. Those printouts were effectively already disabled in MDEV-16264 (commit 5e62b6a5).
-
Marko Mäkelä authored
The purpose of the InnoDB page cleaner subsystem is to write out modified pages from the buffer pool to data files. When innodb_max_dirty_pages_pct_lwm is not exceeded or innodb_adaptive_flushing=ON decides not to write out anything, the page cleaner should keep sleeping indefinitely until the state of the system changes: a dirty page is added to the buffer pool such that the page cleaner would no longer be idle.

buf_flush_page_cleaner(): Explicitly note when the page cleaner is idle. When that happens, use mysql_cond_wait() instead of mysql_cond_timedwait().

buf_flush_insert_into_flush_list(): Wake up the page cleaner if needed.

innodb_max_dirty_pages_pct_update(), innodb_max_dirty_pages_pct_lwm_update(): Wake up the page cleaner just in case.

Note: buf_flush_ahead(), buf_flush_wait_flushed() and shutdown are already waking up the page cleaner thread.
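A sketch of the idle/busy waiting pattern described above, using standard C++ primitives in place of mysql_cond_wait()/mysql_cond_timedwait(); the names and the one-second busy interval are illustrative, not the actual buf_flush_page_cleaner() code:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex flush_list_mutex;
std::condition_variable do_flush;
bool have_dirty_pages = false;

void page_cleaner_loop()
{
  std::unique_lock<std::mutex> lk(flush_list_mutex);
  for (;;)
  {
    if (!have_dirty_pages)
      // Idle: sleep indefinitely until a dirty page shows up.
      do_flush.wait(lk, [] { return have_dirty_pages; });
    else
      // Busy: wake up periodically to re-evaluate how much to flush.
      do_flush.wait_for(lk, std::chrono::seconds(1));
    // ... flush; clear have_dirty_pages once the flush list is empty ...
  }
}

// Analogous to buf_flush_insert_into_flush_list(): wake the page cleaner
// when the buffer pool transitions from clean to dirty.
void on_page_dirtied()
{
  std::lock_guard<std::mutex> lk(flush_list_mutex);
  if (!have_dirty_pages)
  {
    have_dirty_pages = true;
    do_flush.notify_one();
  }
}
```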
-
Marko Mäkelä authored
-
Vladislav Vaintroub authored
Kudos to Marko for finding.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
This partially reverts commit 6479006e. Remove the constant tpool::aio::N_PENDING, which has no intrinsic meaning for the tpool.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
tpool::aio::N_PENDING: Replaces OS_AIO_N_PENDING_IOS_PER_THREAD. This limits two similar things: the number of outstanding requests that a thread may io_submit(), and the number of completed requests collected at a time by io_getevents().
-
Marko Mäkelä authored
In the asynchronous I/O interface, InnoDB is invoking io_getevents() with a timeout value of half a second, requesting exactly 1 event at a time. The reason for such a short timeout is to facilitate shutdown. We can do better: use an infinite timeout and wait for a larger maximum number of events. On shutdown, we will invoke io_destroy(), which should lead to the io_getevents system call reporting EINVAL.

my_getevents(): Reimplement the libaio io_getevents() by only invoking the system call. The library implementation would try to elide the system call and return 0 immediately if aio_ring_is_empty() holds. Here, we do want a blocking system call, not 100% CPU usage. Neither do we want aio_ring_is_empty() to trigger SIGSEGV by dereferencing memory that was freed by io_destroy().
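A simplified sketch of the idea: issue io_getevents() as a raw system call so that it always enters the kernel and can block, instead of going through the libaio wrapper (close to what is described above, minus error-handling details):

```cpp
#include <libaio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cerrno>

static int my_getevents(io_context_t ctx, long min_nr, long nr, io_event *ev)
{
  int saved_errno = errno;
  // A null timeout blocks indefinitely; after io_destroy(), the system
  // call fails with EINVAL, which serves as the shutdown signal.
  int ret = (int) syscall(__NR_io_getevents, reinterpret_cast<long>(ctx),
                          min_nr, nr, ev, nullptr);
  if (ret < 0)
  {
    ret = -errno;
    errno = saved_errno;
  }
  return ret;
}
```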
-
- 24 Nov, 2020 13 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
For reading trx_t::state we can avoid acquiring trx_t::mutex. Atomic load and store should be similar to normal load and store on most instruction set architectures. The atomicity of the operation would merely prohibit the compiler from reordering some operations.
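A small illustration of the pattern (simplified; InnoDB wraps such fields in its own atomic helper types rather than using std::atomic directly):

```cpp
#include <atomic>

enum trx_state_t { TRX_STATE_NOT_STARTED, TRX_STATE_ACTIVE,
                   TRX_STATE_COMMITTED_IN_MEMORY };

struct trx
{
  std::atomic<trx_state_t> state{TRX_STATE_NOT_STARTED};
};

bool trx_is_active(const trx &t)
{
  // A relaxed atomic load compiles to a plain load on most ISAs; the
  // atomicity merely keeps the compiler from tearing or reordering it,
  // and no trx_t::mutex acquisition is needed for the read.
  return t.state.load(std::memory_order_relaxed) == TRX_STATE_ACTIVE;
}
```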
-
Marko Mäkelä authored
-
Marko Mäkelä authored
We must avoid acquiring a latch while we are already holding one. The tablespace latch was being acquired recursively in some operations that allocate or free pages.
-
Marko Mäkelä authored
fts_cache_t::init_lock: Replace with mutex. This was only acquired in exclusive mode.

fts_cache_t::lock: Replace with mutex. The only read-lock user was i_s_fts_index_cache_fill() for producing content for the view INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Many InnoDB rw-locks unnecessarily depend on the complex InnoDB rw_lock_t implementation that supports the SX lock mode as well as recursive acquisition of X or SX locks. One of them is the set of adaptive hash index search latches, instrumented as btr_search_latch in PERFORMANCE_SCHEMA. Let us introduce a simpler lock for those in order to reduce overhead.

srw_lock: A simple read-write lock that does not support recursion. On Microsoft Windows, this wraps SRWLOCK, only adding runtime overhead if PERFORMANCE_SCHEMA is enabled. On Linux (all architectures), this is implemented with std::atomic<uint32_t> and the futex system call. On other platforms, we will wrap mysql_rwlock_t with zero runtime overhead.

The PERFORMANCE_SCHEMA instrumentation differs from InnoDB rw_lock_t in that we will only invoke PSI_RWLOCK_CALL(start_rwlock_wrwait) or PSI_RWLOCK_CALL(start_rwlock_rdwait) if there is an actual conflict.
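On Linux, sleeping and waking on such a lock word boils down to two futex system calls. A minimal sketch with the locking protocol itself omitted (illustrative, not the actual srw_lock code):

```cpp
#include <atomic>
#include <climits>
#include <cstdint>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

// Sleep while *word == old; the kernel re-checks the value atomically,
// so a wake that races with this call cannot be missed.
static void srw_wait(std::atomic<uint32_t> *word, uint32_t old)
{
  syscall(SYS_futex, word, FUTEX_WAIT_PRIVATE, old, nullptr, nullptr, 0);
}

// Wake every thread sleeping on the lock word.
static void srw_wake_all(std::atomic<uint32_t> *word)
{
  syscall(SYS_futex, word, FUTEX_WAKE_PRIVATE, INT_MAX, nullptr, nullptr, 0);
}
```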
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The greedy fetch_add(1) approach of read_trylock() may cause starvation of a waiting write lock request. Let us use a compare-and-swap for the read lock acquisition in order to guarantee the progress of writers.
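A sketch contrasting the two acquisition strategies over a lock word whose most significant bit marks a writer (the bit layout is illustrative, not the actual srw_lock encoding):

```cpp
#include <atomic>
#include <cstdint>

static constexpr uint32_t WRITER = 1U << 31;
std::atomic<uint32_t> lock_word{0};

// Greedy version: bumps the reader count even when a writer is present
// and then has to back off; a steady stream of readers can starve the writer.
bool read_trylock_greedy()
{
  if (!(lock_word.fetch_add(1, std::memory_order_acquire) & WRITER))
    return true;
  lock_word.fetch_sub(1, std::memory_order_relaxed); // undo: writer present
  return false;
}

// CAS version: only increments the reader count while no writer bit is set,
// so new readers cannot overtake a writer that has announced itself.
bool read_trylock_cas()
{
  uint32_t l = lock_word.load(std::memory_order_relaxed);
  while (!(l & WRITER))
    if (lock_word.compare_exchange_weak(l, l + 1,
                                        std::memory_order_acquire,
                                        std::memory_order_relaxed))
      return true;
  return false;
}
```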
-
Marko Mäkelä authored
-