- 12 Nov, 2019 9 commits
-
-
Eugene Kosov authored
Basically, use List<T>::iterator in more places. This patch required adding two more overloads to the new iterator for convenience.
-
Kentoku SHIBA authored
This just adds regression tests.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
mtr_t::Impl, mtr_t::Command: Merge to mtr_t.
MTR_MAGIC_N: Remove.
MTR_STATE_COMMITTING: Remove. This state was only being set internally during mtr_t::commit().
mtr_t::Command::m_locks_released: Remove (set-and-never-read member).
mtr_t::Command::m_start_lsn: Replaced with the return value of finish_write() and a parameter to release_blocks().
mtr_t::Command::m_end_lsn: Removed as a duplicate of mtr_t::m_commit_lsn.
mtr_t::Command::prepare_write(): Replace a switch () with a comparison against 0; only 2 values of m_log_mode are allowed.
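The prepare_write() simplification is easier to see on a toy example. A minimal sketch, assuming a hypothetical two-valued log mode (the names below are illustrative stand-ins, not the real mtr_t members): once only two modes are possible, one comparison against 0 can replace a switch over every enumerator.

    #include <cassert>

    // Hypothetical stand-in for a mini-transaction log mode after the cleanup:
    // only two values remain possible.
    enum log_mode_t { LOG_DISABLED = 0, LOG_ENABLED = 1 };

    // Before: a switch over every enumerator.  After: assert the invariant
    // and decide with a single comparison against 0.
    bool needs_redo(log_mode_t mode)
    {
      assert(mode == LOG_DISABLED || mode == LOG_ENABLED);
      return mode != LOG_DISABLED;
    }

    int main()
    {
      assert(!needs_redo(LOG_DISABLED));
      assert(needs_redo(LOG_ENABLED));
    }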
-
Marko Mäkelä authored
Avoid creating a std::vector, and use a single traversal instead of a double one.
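As an illustration of the pattern (generic code, not the actual InnoDB change): instead of first collecting matches into a temporary std::vector and then looping over that vector, the work can be done during the one original traversal.

    #include <list>
    #include <vector>

    // Double traversal: copy matching elements into a vector, then process them.
    int sum_even_twice(const std::list<int>& items)
    {
      std::vector<int> matches;              // temporary allocation
      for (int v : items)
        if (v % 2 == 0)
          matches.push_back(v);
      int sum = 0;
      for (int v : matches)
        sum += v;
      return sum;
    }

    // Single traversal: process each match immediately; no std::vector needed.
    int sum_even_once(const std::list<int>& items)
    {
      int sum = 0;
      for (int v : items)
        if (v % 2 == 0)
          sum += v;
      return sum;
    }

    int main()
    {
      std::list<int> l{1, 2, 3, 4};
      return sum_even_twice(l) == sum_even_once(l) ? 0 : 1;
    }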
-
Marko Mäkelä authored
-
Sujatha authored
Problem:
========
CURRENT_TEST: binlog_encryption.rpl_corruption
mysqltest: In included file "./include/wait_for_slave_io_error.inc": ...
At line 72: Slave stopped with wrong error code
**** Slave stopped with wrong error code: 1743 (expected 1595,1913) ****

Analysis:
=========
The test emulates corruption at various stages of replication, for example in the binlog file, in the network, and in the relay log. It verifies that all corruption cases are handled through appropriate error messages.
The test cases which emulate network failure expect the following errors:
--ER_SLAVE_RELAY_LOG_WRITE_FAILURE (1595)
--ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE (1743)
Ideally the test should expect error codes 1595 and 1743, but it actually waits on the incorrect error code list 1595,1913.

Fix:
===
Added the appropriate error code for 'ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE': replaced 1913 with 1743.
-
Eugene Kosov authored
The new iterator has the fastest possible implementation: it just moves one pointer. It is faster than List_iterator and List_iterator_fast, both of which do more work on increment. Overall, the patch brings: 1) worse compile times 2) possibly(!) worse debug build performance 3) definitely better optimized build performance 4) the ability to write less code 5) the ability to write less bug-prone code.
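A minimal sketch of why such an iterator is cheap (hypothetical types, not MariaDB's actual List<T> code): the iterator holds nothing but a node pointer, so operator++ is a single pointer load, whereas List_iterator-style iterators maintain extra state on every step.

    #include <cassert>

    // A bare-bones intrusive singly-linked list node.
    struct Node { Node* next; int value; };

    // The whole iterator state is one pointer; ++ just follows it.
    struct NodeIterator
    {
      Node* pos;
      NodeIterator& operator++() { pos = pos->next; return *this; }
      int& operator*() const { return pos->value; }
      bool operator!=(const NodeIterator& o) const { return pos != o.pos; }
    };

    int main()
    {
      Node c{nullptr, 3}, b{&c, 2}, a{&b, 1};
      int sum = 0;
      for (NodeIterator it{&a}, end{nullptr}; it != end; ++it)
        sum += *it;
      assert(sum == 6);
    }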
-
- 11 Nov, 2019 15 commits
-
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Marko Mäkelä authored
fseg_create(): Initialize FSEG_FRAG_ARR with a single MLOG_MEMSET record.
flst_zero_addr(), flst_init(): Optimize away redundant writes.
fseg_free_page_low(): Write FIL_NULL by MLOG_MEMSET.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The XDES_CLEAN_BIT is always set for every element of the page allocation bitmap in the extent descriptor pages. Do not bother touching it, to avoid redundant writes.
-
Marko Mäkelä authored
The DICT_HDR_MAX_SPACE_ID was already zero-initialized at page allocation.
-
Marko Mäkelä authored
page_rec_write_field(): Remove.
dict_create_index_tree_step(): If the SYS_INDEXES.PAGE does not change, do not update it in the data dictionary. Typically, all index page numbers would be unchanged before and after IMPORT TABLESPACE, except if some secondary indexes were created after loading some data.
btr_root_fseg_adjust_on_import(): Remove the redundant mtr_t* parameter. Redo logging is disabled during the page adjustments that IMPORT TABLESPACE is performing.
-
Marko Mäkelä authored
The root page must never have any siblings, so it is unnecessary to clear those fields.
-
Marko Mäkelä authored
fsp_alloc_seg_inode_page(): Ever since commit 3926673c all newly allocated pages are zero-initialized. Assert that this is the case for the FSEG_ID fields.
-
Marko Mäkelä authored
btr_store_big_rec_extern_fields(): Remove the redundant initialization of the most significant 32 bits of BTR_EXTERN_LEN. InnoDB never supported BLOBs that are longer than 4GiB. In fact, dtuple_convert_big_rec() would emit an error message if a clustered index record tuple exceeded 1,000,000,000 bytes in length.
The BTR_EXTERN_LEN in the BLOB pointers in clustered index leaf page records has been zero-initialized at least since commit 41bb3537.
-
Marko Mäkelä authored
Remove the unnecessary retrieval and null-modifications of the preceding page.
-
Marko Mäkelä authored
Remove the redundant parameter mtr_t*. Make use of page_has_prev(), page_has_next() whenever possible.
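For context, a self-contained sketch of what such sibling checks amount to (the FIL header offsets and FIL_NULL value match InnoDB's on-disk format, but the helper bodies are illustrative rather than the exact MariaDB definitions): a page has a previous or next sibling when the big-endian 4-byte FIL_PAGE_PREV or FIL_PAGE_NEXT field differs from FIL_NULL.

    #include <cassert>
    #include <cstdint>
    #include <cstring>

    typedef unsigned char page_t;

    // Offsets of the sibling pointers in the page (FIL) header.
    static const unsigned FIL_PAGE_PREV = 8;
    static const unsigned FIL_PAGE_NEXT = 12;
    static const uint32_t FIL_NULL = 0xFFFFFFFF;  // "no page"

    // Read a big-endian 32-bit field from the page frame.
    static uint32_t read_be32(const page_t* p)
    {
      return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16
           | uint32_t(p[2]) << 8 | uint32_t(p[3]);
    }

    static bool page_has_prev(const page_t* page)
    { return read_be32(page + FIL_PAGE_PREV) != FIL_NULL; }

    static bool page_has_next(const page_t* page)
    { return read_be32(page + FIL_PAGE_NEXT) != FIL_NULL; }

    int main()
    {
      page_t page[16384];
      memset(page, 0xFF, sizeof page);     // both sibling fields are FIL_NULL
      assert(!page_has_prev(page) && !page_has_next(page));
      memset(page + FIL_PAGE_NEXT, 0, 4);  // pretend the next sibling is page 0
      assert(page_has_next(page));
    }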
-
Marko Mäkelä authored
Remove the unnecessary retrieval and null-modifications of the preceding page.
-
- 10 Nov, 2019 1 commit
-
-
Andrei Elkin authored
MDEV-19376 Repl_semi_sync_master::commit_trx assertion failure: ... || !m_active_tranxs->is_tranx_end_pos(trx_wait_binlog_name, trx_wait_binlog_pos)

The assert indicates that the current transaction got caught uncleaned in the semisync master's cache when it was signaled to proceed upon receiving its ack.
The reason for the missed cleanup turns out to be a flaw in the gtid connect mode. The binlog file *name* of the last received event, as submitted by the connecting slave, was adopted into Repl_semi_sync_master::m_reply_file_name as part of semisync initialization. Notice that the initialization still refines the position part of the submitted last received event's binlog coordinates. The master-side binlog filename:pos refinement is specific to the gtid connect mode, for the purpose of computing the latest binlog file to resume feeding the slave from. Effectively, in the gtid connect mode the computed resumption filename:pos may turn out smaller, in which case a new transaction committing after the connect may be logged with its filename:pos also less than the submitted coordinates, and that triggers the assert.
Fixed by making the semisync initialization use the refined filename:pos. It is guaranteed to be less than any newly generated transaction's binlog:pos.
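A sketch of the ordering invariant the fix relies on (generic code, not the semisync plugin itself): binlog coordinates compare by file name first and by position second, so initializing the reply coordinates from the refined (never larger) filename:pos guarantees they do not exceed the coordinates of any transaction committed after the reconnect.

    #include <cassert>
    #include <string>

    // Binlog coordinates: file name plus byte offset within that file.
    struct BinlogCoord
    {
      std::string file;
      unsigned long long pos;
    };

    // Coordinates are ordered by file name first, then by position.
    static bool not_greater(const BinlogCoord& a, const BinlogCoord& b)
    {
      if (a.file != b.file)
        return a.file < b.file;
      return a.pos <= b.pos;
    }

    int main()
    {
      BinlogCoord refined   = {"master-bin.000003", 120};  // computed by the master
      BinlogCoord submitted = {"master-bin.000004", 90};   // sent by the slave
      BinlogCoord new_trx   = {"master-bin.000003", 500};  // committed after connect

      // The refined coordinates never exceed those of a new transaction,
      // so using them for initialization avoids the stale-cache assert.
      assert(not_greater(refined, new_trx));
      // The submitted coordinates may be larger, which is what triggered the bug.
      assert(!not_greater(submitted, new_trx));
    }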
-
- 08 Nov, 2019 8 commits
-
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The function mach_read_ulint() is a wrapper for the lower-level functions mach_read_from_1(), mach_read_from_2(), mach_read_from_4(). Invoke those functions directly, for better readability of the code.
mtr_t::read_ulint(), mtr_read_ulint(): Remove.
Yes, we will lose the ability to assert that the read is covered by the mini-transaction. We would still check that on writes, and any writes that wrongly bypass mini-transaction logging would likely be caught by stress testing with Mariabackup.
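For illustration, a simplified stand-in for such a dispatching wrapper (the readers below are local sketches, not the actual mach0data functions): the wrapper hides which fixed-size big-endian read is performed, while a direct call states the field width at the call site.

    #include <cassert>
    #include <cstdint>

    typedef unsigned char byte;

    // Big-endian readers, analogous in spirit to mach_read_from_1/2/4.
    static uint32_t read_from_1(const byte* b) { return b[0]; }
    static uint32_t read_from_2(const byte* b) { return uint32_t(b[0]) << 8 | b[1]; }
    static uint32_t read_from_4(const byte* b)
    {
      return uint32_t(b[0]) << 24 | uint32_t(b[1]) << 16
           | uint32_t(b[2]) << 8 | b[3];
    }

    // The wrapper obscures which read is actually performed ...
    static uint32_t read_ulint(const byte* b, unsigned len)
    {
      switch (len) {
      case 1: return read_from_1(b);
      case 2: return read_from_2(b);
      default: return read_from_4(b);
      }
    }

    int main()
    {
      const byte buf[4] = {0x12, 0x34, 0x56, 0x78};
      // ... whereas the direct call makes the field width obvious.
      assert(read_ulint(buf, 2) == read_from_2(buf));
      assert(read_from_2(buf) == 0x1234);
    }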
-
Hartmut Holzgraefe authored
Make sure that failure to find the mariabackup binary does not terminate the script silently; terminate with a clear error message instead.
-
Marko Mäkelä authored
Always use the MLOG_MEMSET record for writing FIL_NULL, because it is more compact.
-
- 07 Nov, 2019 2 commits
-
-
Varun Gupta authored
The issue here is a wrong estimate of the cardinality of a partial join: the cardinality is too high because the function table_cond_selectivity() returns the absurd number 100, while selectivity cannot be greater than 1.
When accessing table t by outer reference t1.a via an index, we do not perform any range analysis for t. Yet TABLE::quick_key_parts[key] and TABLE::quick_rows[key] contain non-zero values, even though they should have remained untouched and equal to 0.
Thus the real cause of the problem is that TABLE::init does not clean the arrays TABLE::quick_key_parts[] and TABLE::quick_rows[]. It should do so because the TABLE structure created for any instance of a table can be reused for many queries.
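A generic sketch of the underlying pitfall (hypothetical struct, not the actual TABLE code): a handle that is reused across queries must reset its per-query estimate arrays in its init routine, otherwise stale values from a previous range analysis leak into the next cost calculation.

    #include <cassert>
    #include <cstring>

    static const unsigned MAX_KEYS = 8;

    // Hypothetical stand-in for a reusable table handle.
    struct TableHandle
    {
      unsigned quick_key_parts[MAX_KEYS];
      unsigned long long quick_rows[MAX_KEYS];

      // Called once per query that uses this handle.  Without the memsets,
      // estimates left over from a previous query would be read as current.
      void init_for_query()
      {
        memset(quick_key_parts, 0, sizeof quick_key_parts);
        memset(quick_rows, 0, sizeof quick_rows);
      }
    };

    int main()
    {
      TableHandle t;
      t.init_for_query();
      t.quick_rows[1] = 1000;        // range analysis of query #1 fills an estimate

      t.init_for_query();            // query #2 reuses the same handle
      assert(t.quick_rows[1] == 0);  // and must not see query #1's estimate
    }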
-
Marko Mäkelä authored
-
- 06 Nov, 2019 5 commits
-
-
Marko Mäkelä authored
Due to MDEV-12288, the slow shutdown in MariaDB 10.3 will include resetting the DB_TRX_ID for all inserted records. This might cause the 60-second shutdown_server timeout to be exceeded. Let us wait for the purge to complete before initiating slow shutdown.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_index_add_to_cache(): Make the 'index' parameter a reference to a pointer, so that the caller will avoid the expensive call to dict_index_get_if_in_cache_low().
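A minimal sketch of the parameter-passing pattern (hypothetical cache and types, not the dictionary code): when the callee takes a reference to the caller's pointer, it can redirect that pointer to the cached object itself, so the caller never needs a second lookup in the cache.

    #include <cassert>
    #include <map>
    #include <string>

    struct Index { std::string name; };

    // Hypothetical cache keyed by index name.
    static std::map<std::string, Index> cache;

    // The callee receives a reference to the caller's pointer: the cache keeps
    // its own copy, the caller's temporary is freed, and the pointer is
    // redirected to the cached object, making a follow-up lookup unnecessary.
    static void add_to_cache(Index*& index)
    {
      auto res = cache.emplace(index->name, *index);
      delete index;
      index = &res.first->second;
    }

    int main()
    {
      Index* idx = new Index{"PRIMARY"};
      add_to_cache(idx);
      assert(idx == &cache.at("PRIMARY"));

      Index* dup = new Index{"PRIMARY"};
      add_to_cache(dup);            // duplicate: redirected to the cached copy
      assert(dup == idx);           // no second cache lookup needed by the caller
    }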
-
Marko Mäkelä authored
-