- 02 Dec, 2019 4 commits
-
-
Aleksey Midenkov authored
MDEV-21011 Table corruption reported for versioned partitioned table after DELETE: "Found a misplaced row"

LIMIT history partitions cannot be checked by the existing algorithm of check_misplaced_rows(), because the working history partition is advanced each time the previous one is filled. The existing algorithm fetches a record and tries to derive its partition id via get_partition_id(); for LIMIT history that simply returns the first non-filled partition. To fix such partitions, REBUILD is required instead of REPAIR.
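A minimal standalone simulation (not MariaDB code; partition names, capacities, and row placement are invented for illustration) of why a per-row placement re-check misfires when rows were placed in fill order:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stand-in for LIMIT history partitioning: rows go into the
// first history partition that still has free space (fill order).
static const size_t kLimit = 2;  // rows per history partition (invented)

size_t first_non_full(const std::vector<size_t>& fill) {
    for (size_t i = 0; i < fill.size(); i++)
        if (fill[i] < kLimit) return i;
    return fill.size() - 1;
}

int main() {
    // Rows were placed while partitions filled up one after another.
    std::vector<size_t> row_partition = {0, 0, 1, 1, 2};  // actual placement
    std::vector<size_t> fill = {2, 2, 1};

    // Later, a row is deleted from partition 0, freeing one slot.
    fill[0]--;

    // A CHECK-style pass that re-derives the partition per row (as
    // check_misplaced_rows() does via get_partition_id()) now expects
    // every row to be in the "first non-full" partition, so rows that
    // legitimately live elsewhere are reported as misplaced.
    for (size_t r = 0; r < row_partition.size(); r++) {
        size_t expected = first_non_full(fill);
        if (expected != row_partition[r])
            std::printf("row %zu: found a misplaced row (in p%zu, expected p%zu)\n",
                        r, row_partition[r], expected);
    }
}
```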
-
Aleksey Midenkov authored
When a view is merged by DT_MERGE_FOR_INSERT, it is skipped from further processing and its WHERE clause is not updated by vers_setup_conds(). Note that the view itself cannot be handled in vers_setup_conds() because it has no row_start, row_end fields. Thus we must descend to the material TABLE_LIST through calls of mysql_derived_prepare() and run vers_setup_conds() from there. Luckily, all views (views of views, views of views of views, etc.) are linked in one list through the next_global pointer, so we can skip all views of views and get straight to the non-view TABLE_LIST by checking its merge_underlying_list property for a zero value (it is assigned by DT_MERGE_FOR_INSERT for merged derived tables). We have to do that only for UPDATE and DELETE; other DML commands don't use the WHERE clause.

MDEV-21146 Assertion `m_lock_type == 2' in handler::ha_drop_table upon LOAD DATA

LOAD DATA does not use WHERE, so the above call of vers_setup_conds() is not needed there; unit->prepare() led to a wrongly locked temporary table.
-
Aleksey Midenkov authored
"write set" for replication finally got its correct place (mark_columns_per_binlog_row_image()). When done generally in mark_columns_needed_for_update() it affects optimization algorithm. used_key_is_modified, query_plan.using_io_buffer are wrongly set and that leads to wrong prepare_for_keyread() which limits read_set.
-
Aleksey Midenkov authored
Turn the read cache off for UPDATE and multi-UPDATE on versioned tables. no_cache is reinitialized on each TABLE open because it is applicable only to specific algorithms. As a side fix, vers_insert_history_row() honors the vers_write setting.

Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for sequential reads in the update loop. When a history row is inserted inside this loop, the cache misses it and fails with an error.

TODO: maria_extra() currently does not support SEQ_READ_APPEND; it might be possible to use that cache type instead.
-
- 29 Nov, 2019 2 commits
-
-
Sergei Golubchik authored
generalize the replacement
-
Sergei Golubchik authored
This reverts commit 0d345ec2. Upgrades from 8.0 don't work yet; one has to dump and restore manually to get the metadata out of the data dictionary.
-
- 28 Nov, 2019 3 commits
-
-
Sergei Golubchik authored
mariadb packages conflict with mysql-8.0
-
Sergei Golubchik authored
Obsoletes: cannot contain (x86-64) anymore. The Python shebang must be specific.
-
Vladislav Vaintroub authored
Use my_thread_var::stack_ends_here inside lf_pinbox_real_free() as the address where the thread stack ends. Remove LF_PINS::stack_ends_here: it is not safe to assume that the mysys_var used during pin allocation remains correct during free. For example, with binlog group commit in InnoDB, which frees pins for multiple InnoDB transactions, that assumption does not hold.
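A generic sketch of the idea (not the mysys/lf-hash code; all names here are invented): per-thread stack-end data must be read by the thread that actually performs the free, not cached from whichever thread allocated the pins:

```cpp
#include <thread>

// Each thread records where its own stack ends.
thread_local const char* stack_ends_here = nullptr;

struct Pins {
    // BAD: a value cached at allocation time may belong to another thread
    // by the time the pins are freed (e.g. group commit freeing pins of
    // several transactions).
    const char* cached_stack_end;
};

void real_free(const Pins& /*pins*/) {
    // GOOD: consult the value belonging to the thread doing the free.
    const char* end = stack_ends_here;
    (void)end;  // ... scan addresses up to `end` ...
}

int main() {
    char marker;
    stack_ends_here = &marker;      // crude per-thread initialization for the demo
    Pins p{stack_ends_here};
    std::thread([p] {
        char m2;
        stack_ends_here = &m2;      // this thread's own stack end
        real_free(p);               // must not rely on p.cached_stack_end here
    }).join();
}
```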
-
- 27 Nov, 2019 2 commits
-
-
Vladislav Vaintroub authored
Prior to this fix, when matching addresses using a mask, extra bits could be used for the comparison; e.g. to match against "a.b.c.d/24", 27 bits were compared rather than 24. The patch fixes the calculation.
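A small self-contained sketch (IPv4 only, function names invented) of comparing exactly the prefix bits given by the mask length:

```cpp
#include <cstdint>
#include <cstdio>

// Compare only the top `prefix_len` bits of two IPv4 addresses (host order).
bool prefix_match(uint32_t addr, uint32_t net, unsigned prefix_len) {
    if (prefix_len == 0) return true;                    // /0 matches everything
    uint32_t mask = ~uint32_t{0} << (32 - prefix_len);   // exactly prefix_len bits
    return (addr & mask) == (net & mask);
}

int main() {
    // 10.1.2.200 vs 10.1.2.0/24: the low 8 bits must be ignored;
    // a mask built with too many bits (the bug before this fix) would compare them.
    uint32_t addr = (10u << 24) | (1u << 16) | (2u << 8) | 200u;
    uint32_t net  = (10u << 24) | (1u << 16) | (2u << 8);
    std::printf("%d\n", prefix_match(addr, net, 24));    // prints 1
}
```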
-
Marko Mäkelä authored
As part of commit 3c09f148, trx_undo_commit_cleanup() was always invoked with noredo=true. The impact of this should be that some undo log pages may not be correctly freed if the server is killed and crash recovery is performed. Similarly, if mariabackup --backup is executed concurrently with user transaction commits, some undo log pages in the backup may never be marked as free for reuse. It seems that this bug should not have any user-visible impact other than some undo pages being wasted.
-
- 25 Nov, 2019 1 commit
-
-
Aleksey Midenkov authored
-
- 22 Nov, 2019 2 commits
-
-
Aleksey Midenkov authored
Use my_localhost instead of NULL for share->hostname.
-
Aleksey Midenkov authored
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables

UPDATE, DELETE: replace the linear search of current/historical records with vers_setup_conds(). Additional DML cases in view.test.
-
- 21 Nov, 2019 1 commit
-
-
Eugene Kosov authored
row_log_table_get_pk_col(): read instant field value from instant alter table when it's required.
-
- 20 Nov, 2019 3 commits
-
-
Vlad Lesin authored
-
Marko Mäkelä authored
For ROW_FORMAT=REDUNDANT, we must reserve fixed-length dummy values for the CHAR columns in the metadata record. This is because in MariaDB Server 10.4, btr_cur_instant_init_low() will rely on dict_index_t::trx_id_offset being accurate for the metadata record.
-
Marko Mäkelä authored
In MariaDB Server 10.4, btr_cur_instant_init_low() assumes that all PRIMARY KEY columns that are internally variable-length will be encoded in 0 bytes in the metadata record. Sometimes, CHAR columns can be encoded as variable-length. We should not unnecessarily reserve space for a dummy string value in the metadata record.
-
- 19 Nov, 2019 2 commits
-
-
Alexey Botchkov authored
Do not fail if all the partitions were pruned out.
-
Vlad Lesin authored
The fix consists of three commits backported from 10.3:

1) Cleanup isnan() portability checks (cherry picked from commit 7ffd7fe9)

2) Cleanup isinf() portability checks. Original problem reported by Wlad: re-compilation of 10.3 on top of a 10.2 build would cache an undefined HAVE_ISINF from 10.2, whereas it is expected to be 1 in 10.3. std::isinf() seems to be available on all supported platforms. (cherry picked from commit bc469a0b)

3) Use std::isfinite in C++ code. This is an addition to the parent revision, fixing build failures. (cherry picked from commit 54999f4e)
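A small standalone example of the std:: classification functions the backported cleanups standardize on, instead of relying on HAVE_ISINF-style configure probes that can be cached from a previous build:

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    double inf = std::numeric_limits<double>::infinity();
    double nan = std::numeric_limits<double>::quiet_NaN();
    // std::isinf/std::isnan/std::isfinite are available on all supported
    // platforms, so no platform-specific fallback macros are needed.
    std::printf("isinf(inf)=%d isnan(nan)=%d isfinite(1.0)=%d\n",
                std::isinf(inf), std::isnan(nan), std::isfinite(1.0));
}
```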
-
- 18 Nov, 2019 3 commits
-
-
Marko Mäkelä authored
DropIndex, CreateIndex: Remove. The file row0trunc.cc only exists in MariaDB Server 10.3 so that the crash recovery of TRUNCATE TABLE operations from older 10.2 and 10.3 servers will work. This dead code was being used for implementing the MySQL 5.7 WL#6501 TRUNCATE TABLE that was replaced with a backup-safe implementation in MDEV-13564.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
buf_read_ibuf_merge_pages(): Discard any page numbers that are outside the current bounds of the tablespace, by invoking the function ibuf_delete_recs() that was introduced in MDEV-20934. This could avoid an infinite change buffer merge loop on innodb_fast_shutdown=0, because normally the change buffer merge would only be attempted if a page was successfully loaded into the buffer pool.

dict_drop_index_tree(): Add the parameter trx_t*.

To prevent the DROP TABLE crash, do not invoke btr_free_if_exists() if the entire .ibd file will be dropped. Thus, we will avoid a crash if the BTR_SEG_LEAF or BTR_SEG_TOP of the index is corrupted, and we will also avoid unnecessarily accessing the to-be-dropped tablespace via the buffer pool.

In MariaDB 10.2, we disable the DROP TABLE fix if innodb_safe_truncate=0, because the backup-unsafe MySQL 5.7 WL#6501 form of TRUNCATE TABLE requires that the individual pages be freed inside the tablespace.
-
- 16 Nov, 2019 1 commit
-
-
Sergei Petrunia authored
-
- 15 Nov, 2019 2 commits
-
-
Sergei Petrunia authored
Fix partitioning and DS-MRR to work together:

- In ha_partition::index_end(): take into account that ha_innobase (and other engines using DS-MRR) will have inited=RND when initialized for a DS-MRR scan.

- In ha_partition::multi_range_read_next(): if the MRR scan is using HA_MRR_NO_ASSOCIATION mode, it is not guaranteed that the partition's handler will store anything into *range_info.

- In DsMrr_impl::choose_mrr_impl(): ha_partition inquires how much memory the partitions' MRR implementation needs by passing *buffer_size=0. The DS-MRR code didn't know about this (it actually used uint for the buffer size calculation and would have had an underflow). Returning *buffer_size=0 made ha_partition assume that partitions do not need MRR memory and pass the same buffer to each of them. Now this is fixed: if DS-MRR gets *buffer_size=0, it returns the amount of buffer space needed, but not more than about @@mrr_buffer_size.

- Fix ha_{innobase,maria,myisam}::clone. If ha_partition uses MRR on its partitions, and a partition uses DS-MRR, the code will call handler->clone with the TABLE (*NOT partition*) name as an argument. DS-MRR has no way of knowing the partition name, so the solution is to have the ::clone() function of the affected storage engines ignore the name argument and obtain it elsewhere.
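A hedged sketch (generic code, not the actual DS-MRR API; names and the size cap are invented) of the convention described above: a caller passes *buffer_size == 0 to ask how much memory would be needed, and the callee reports the requirement instead of treating 0 as "no memory available":

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

constexpr uint32_t kMaxBufferSize = 262144;  // stand-in for @@mrr_buffer_size

// Returns the buffer size this scan will use.
// Convention: if *buffer_size == 0, do not plan a scan; instead report the
// amount of memory needed (capped), so a caller like ha_partition can size
// one buffer and hand distinct pieces of it to each partition.
uint32_t choose_buffer(uint32_t rows, uint32_t row_len, uint32_t* buffer_size) {
    // 64-bit math so rows * row_len cannot wrap around a 32-bit uint.
    uint64_t needed = uint64_t(rows) * row_len;
    uint32_t capped = uint32_t(std::min<uint64_t>(needed, kMaxBufferSize));
    if (*buffer_size == 0) {        // "how much would you need?"
        *buffer_size = capped;
        return capped;
    }
    return std::min(*buffer_size, capped);
}

int main() {
    uint32_t ask = 0;
    choose_buffer(1000, 64, &ask);
    std::printf("partition asks for %u bytes\n", ask);  // 64000
}
```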
-
Sergei Golubchik authored
using create_w_max_indexes_64.result as a template
-
- 14 Nov, 2019 3 commits
-
-
Marko Mäkelä authored
Apart from page latches (buf_block_t::lock), mini-transactions are keeping track of at most one dict_index_t::lock and fil_space_t::latch at a time, and in a rare case, purge_sys.latch. Let us introduce interfaces for acquiring an index latch or a tablespace latch. In a later version, we may want to introduce mtr_t members for holding a latched dict_index_t* and fil_space_t*, and replace the remaining use of mtr_t::m_memo with std::set<buf_block_t*> or with a map<buf_block_t*,byte*> pointing to log records.
-
Marko Mäkelä authored
In the test innodb.instant_alter,4k we would be flagging an error for a too-large row size. That error was previously only reported if the table was being rebuilt. Thus, this merge fixes a small omission in MDEV-11369 (instant ADD COLUMN).
-
Marko Mäkelä authored
-
- 13 Nov, 2019 4 commits
-
-
Sergei Petrunia authored
Fix an incorrect change introduced in the fix for MDEV-20109. The patch tried to compute a more precise estimate for the record_count value in the SJ-Materialization-Scan strategy (in Sj_materialization_picker::check_qep). However, the new formula is worse, as it produces extremely optimistic results in common cases where SJ-Materialization-Scan should be used. The old formula produces pessimistic results in cases when SJ-Materialization-Scan is unlikely to be a good choice anyway. So the old behavior is better.
-
Eugene Kosov authored
Move the row size check to the early CREATE/ALTER TABLE phase. Stop checking on table open.

dict_index_add_to_cache(): remove the parameter 'strict', stop checking row size

dict_index_t::record_size_info_t: the result of a row size check operation

create_table_info_t::row_size_is_acceptable(): performs the row size check; issues an error or warning and writes the first overflowing field to the InnoDB log

create_table_info_t::create_table(): add the row size check

dict_index_t::record_size_info(): a refactored version of dict_index_t::rec_potentially_too_big(). The new version doesn't change the global state of the program but returns all the interesting info; it is the callers who decide how to handle row size overflow.

dict_index_t::rec_potentially_too_big(): removed
-
Marko Mäkelä authored
memo_block_unfix(), memo_latch_release(): Merge to ReleaseLatches. memo_slot_release(), ReleaseAll: Clean up the formatting.
-
Marko Mäkelä authored
A search with PAGE_CUR_GE may land on the supremum record on a leaf page that is not the rightmost leaf page. This could occur when all keys on the current page are smaller than the search key, and the smallest key on the successor page is larger than the search key. ibuf_delete_recs(): Correct the debug assertion accordingly.
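An analogy in standard C++ (a sorted "page" of keys, with the supremum modeled as the end iterator): when every key on the current page is smaller than the search key, a >=-mode search lands past the last user record even though a larger key exists on the successor page:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> current_page = {10, 20, 30};  // all keys < search key
    std::vector<int> next_page    = {50, 60};      // smallest key > search key
    int search_key = 40;

    // PAGE_CUR_GE is analogous to lower_bound: first position not less than key.
    auto pos = std::lower_bound(current_page.begin(), current_page.end(), search_key);
    if (pos == current_page.end())
        std::puts("landed on the page supremum, although this is not the rightmost page");

    // The first record that is >= 40 actually lives on the successor page:
    std::printf("next page starts at %d\n", next_page.front());
}
```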
-
- 12 Nov, 2019 6 commits
-
-
Yasuhiro Horimoto authored
Closes #1407
-
Marko Mäkelä authored
-
Marko Mäkelä authored
mtr_t::Impl, mtr_t::Command: Merge to mtr_t.

MTR_MAGIC_N: Remove.

MTR_STATE_COMMITTING: Remove. This state was only being set internally during mtr_t::commit().

mtr_t::Command::m_locks_released: Remove (set-and-never-read member).

mtr_t::Command::m_start_lsn: Replaced with the return value of finish_write() and a parameter to release_blocks().

mtr_t::Command::m_end_lsn: Removed as a duplicate of mtr_t::m_commit_lsn.

mtr_t::Command::prepare_write(): Replace a switch () with a comparison against 0. Only 2 values of m_log_mode are allowed.
-
Marko Mäkelä authored
Avoid creating a std::vector, and use a single traversal instead of a double one.
-
Marko Mäkelä authored
-
Sujatha authored
Problem:
========
CURRENT_TEST: binlog_encryption.rpl_corruption

mysqltest: In included file "./include/wait_for_slave_io_error.inc": ... At line 72: Slave stopped with wrong error code
**** Slave stopped with wrong error code: 1743 (expected 1595,1913) ****

Analysis:
=========
The test emulates corruption at various stages of replication, for example in the binlog file, in the network, and in the relay log, and verifies that all corruption cases are handled through appropriate error messages. The test cases which emulate network failure expect the following errors:
--ER_SLAVE_RELAY_LOG_WRITE_FAILURE (1595)
--ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE (1743)
Ideally the test should expect error codes 1595 and 1743, but it actually waits on the incorrect error code list 1595,1913.

Fix:
====
Added the appropriate error code for 'ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE': replaced 1913 with 1743.
-
- 11 Nov, 2019 1 commit
-
-
Andrei Elkin authored
-