- 03 Dec, 2019 3 commits
-
-
Vicențiu Ciorbaru authored
-
Vicențiu Ciorbaru authored
-
Marko Mäkelä authored
mlog_write_initial_log_record_low(): Do not allow the MLOG_TRUNCATE record to be written.
-
- 02 Dec, 2019 9 commits
-
-
Jan Lindström authored
Conflicts:
    mysql-test/suite/galera/t/galera_binlog_event_max_size_max-master.opt
    mysql-test/suite/innodb/r/innodb-mdev-7513.result
    mysql-test/suite/innodb/t/innodb-mdev-7513.test
    mysql-test/suite/wsrep/disabled.def
    storage/innobase/ibuf/ibuf0ibuf.cc
-
Robert Bindar authored
-
Faustin Lammler authored
The script is intended to be sourced by other SST scripts, so it should not be executable (or should have a script header). This causes a warning in the Debian Lintian tool, see: https://salsa.debian.org/faust-guest/mariadb-10.3/-/jobs/431900
-
Faustin Lammler authored
The Lintian check complains about a spelling error: https://salsa.debian.org/mariadb-team/mariadb-10.3/-/jobs/95739
-
Aleksey Midenkov authored
Don't do skip_setup_conds() unless all errors are checked. Fixes the following errors: ER_PERIOD_NOT_FOUND, ER_VERS_QUERY_IN_PARTITION, ER_VERS_ENGINE_UNSUPPORTED, ER_VERS_NOT_VERSIONED.
-
Aleksey Midenkov authored
MDEV-21011 Table corruption reported for versioned partitioned table after DELETE: "Found a misplaced row"
LIMIT history partitions cannot be checked by the existing algorithm of check_misplaced_rows(), because the working history partition is incremented each time another one is filled. The existing algorithm reads a record and tries to decide the partition id for it via get_partition_id(); for LIMIT history it will just get the first non-filled partition. To fix such partitions, it is required to do REBUILD instead of REPAIR.
-
Aleksey Midenkov authored
When a view is merged by DT_MERGE_FOR_INSERT, it is then skipped from processing and its WHERE clause is not updated by vers_setup_conds(). Note that the view itself cannot work in vers_setup_conds() because it doesn't have row_start, row_end fields. Thus it is required to descend to the material TABLE_LIST through calls of mysql_derived_prepare() and run vers_setup_conds() from there. Luckily, all views (views of views, views of views of views, etc.) are linked in one list through the next_global pointer, so we can skip all views of views and get straight to the non-view TABLE_LIST by checking its merge_underlying_list property for a zero value (it is assigned by DT_MERGE_FOR_INSERT for merged derived tables). We have to do that only for UPDATE and DELETE; other DML commands don't use the WHERE clause.
MDEV-21146 Assertion `m_lock_type == 2' in handler::ha_drop_table upon LOAD DATA
LOAD DATA does not use WHERE, so the above call of vers_setup_conds() is not needed. unit->prepare() led to a wrongly locked temporary table.
-
Aleksey Midenkov authored
"write set" for replication finally got its correct place (mark_columns_per_binlog_row_image()). When done generally in mark_columns_needed_for_update() it affects optimization algorithm. used_key_is_modified, query_plan.using_io_buffer are wrongly set and that leads to wrong prepare_for_keyread() which limits read_set.
-
Aleksey Midenkov authored
Turn the read cache off for UPDATE and multi-UPDATE on versioned tables. no_cache is reinitialized on each TABLE open because it is applicable to specific algorithms only. As a side fix, vers_insert_history_row() honors the vers_write setting. Aria with row_format=fixed uses an IO_CACHE of type READ_CACHE for the sequential read in the update loop; when a history row is inserted inside this loop, the cache misses it and fails with an error. TODO: Currently maria_extra() does not support SEQ_READ_APPEND. It might be possible to use this type of cache.
-
- 30 Nov, 2019 1 commit
-
-
Jan Lindström authored
galera_2nodes.cnf did not contain wsrep_on=1 in the correct places. Fixed the restart options to use the correct configuration.
-
- 29 Nov, 2019 4 commits
-
-
HF authored
orig_test_id should be set properly. Also fixed a sporadic test failure.
-
Vlad Lesin authored
executing undo undo_key_delete" upon startup on a datadir restored from an incremental backup: aria_log* files were not copied on the --prepare --incremental-dir step from the incremental directory to the destination backup directory.
-
Sergei Golubchik authored
generalize the replacement
-
Sergei Golubchik authored
This reverts commit 0d345ec2. Upgrades from 8.0 don't work yet; one has to dump/restore manually to get the metadata out of the data dictionary.
-
- 28 Nov, 2019 4 commits
-
-
Vladislav Vaintroub authored
-
Sergei Golubchik authored
mariadb packages conflict with mysql-8.0
-
Sergei Golubchik authored
Obsoletes: cannot contain (x86-64) anymore. The Python shebang must be specific.
-
Vladislav Vaintroub authored
Use my_thread_var::stack_ends_here inside lf_pinbox_real_free() for the address where the thread stack ends. Remove LF_PINS::stack_ends_here. It is not safe to assume that the mysys_var that was used during pin allocation remains correct during free. E.g. with binlog group commit in InnoDB, which frees pins for multiple InnoDB transactions, it does not work correctly.
-
- 27 Nov, 2019 4 commits
-
-
Vladislav Vaintroub authored
Prior to this fix, when matching addresses using a mask, extra bits could be used for the comparison; e.g. to match against "a.b.c.d/24", 27 bits were compared rather than 24. The patch fixes the calculation.
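A minimal sketch of what a correct prefix comparison looks like (generic C++, not the server's actual code; the helper name prefix_match is hypothetical): only the first prefix_bits bits take part in the comparison.

    #include <cstdint>

    // Compare only the first prefix_bits bits of two IPv4 addresses
    // given in host byte order.
    static bool prefix_match(uint32_t addr, uint32_t net, unsigned prefix_bits)
    {
      if (prefix_bits == 0)
        return true;  // a /0 prefix matches everything; also avoids a shift by 32
      uint32_t mask = ~uint32_t{0} << (32 - prefix_bits);
      return (addr & mask) == (net & mask);
    }

For "a.b.c.d/24" the mask is 0xFFFFFF00, so exactly 24 bits are compared.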
-
Marko Mäkelä authored
As part of commit 3c09f148, trx_undo_commit_cleanup() was always invoked with noredo=true. The impact of this should be that some undo log pages may not be correctly freed if the server is killed and crash recovery is performed. Similarly, if mariabackup --backup is executed concurrently with user transaction commits, it could happen that some undo log pages in the backup will never be marked as free for reuse. It seems that this bug should not have any user-visible impact other than some undo pages being wasted.
-
Alexey Botchkov authored
The thread_id of the INSERT DELAYED thread should not be set to 0.
-
Alexey Botchkov authored
Add notifications about the user and connection that actually did the DELAYED insert.
-
- 26 Nov, 2019 2 commits
-
-
Eugene Kosov authored
-
Marko Mäkelä authored
As noted in commit abd45cdc, a search with PAGE_CUR_GE may land on the supremum record of a leaf page that is not the rightmost leaf page. This can occur when all keys on the current page are smaller than the search key, and the smallest key on the successor page is larger than the search key. Hence, after a failed PAGE_CUR_GE search, the btr_pcur_is_after_last_in_tree() assertions are bogus and should be replaced with btr_pcur_is_after_last_on_page().
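As a generic illustration of the scenario (plain C++ with std::lower_bound, not InnoDB code; the page contents are made up), a "greater-or-equal" search can stop past the last key of a non-rightmost page even though larger keys exist on the successor page:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    int main()
    {
      std::vector<int> left_page  = {10, 20, 30};  // keys on the current leaf page
      std::vector<int> right_page = {50, 60};      // keys on the successor page
      int search_key = 40;

      auto pos = std::lower_bound(left_page.begin(), left_page.end(), search_key);
      // The search key is larger than every key on the left page,
      // so the cursor ends up after its last key (the "supremum" position) ...
      assert(pos == left_page.end());
      // ... yet the tree is not exhausted: the successor page still has keys.
      assert(!right_page.empty() && right_page.front() > search_key);
      return 0;
    }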
-
- 25 Nov, 2019 2 commits
-
-
willhan authored
Fix a bug in Spider when using "not like" (#890). Test case: t1 is a Spider engine table: CREATE TABLE `t1` ( `id` int(11) NOT NULL DEFAULT '0', `name` char(64) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=SPIDER. The query "select * from t1 where name not like 'x%'" would dispatch "select xxx name name like 'x%'" to the remote mysqld, which is wrong.
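A hedged, generic illustration of this class of bug (not Spider's actual code; build_remote_cond is a hypothetical helper): if the negation flag is dropped when building the remote query string, "not like" silently turns into "like".

    #include <cstdio>
    #include <string>

    // The buggy variant would ignore `negated` and always emit " like ".
    static std::string build_remote_cond(const std::string &col,
                                         const std::string &pattern, bool negated)
    {
      return col + (negated ? " not like '" : " like '") + pattern + "'";
    }

    int main()
    {
      // For "select * from t1 where name not like 'x%'" the remote
      // condition must keep the negation:
      std::printf("%s\n", build_remote_cond("name", "x%", true).c_str());
      return 0;
    }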
-
Aleksey Midenkov authored
-
- 22 Nov, 2019 2 commits
-
-
Aleksey Midenkov authored
Use my_localhost instead of NULL for share->hostname.
-
Aleksey Midenkov authored
MDEV-18957 UPDATE with LIMIT clause is wrong for versioned partitioned tables
UPDATE, DELETE: replace the linear search of current/historical records with vers_setup_conds(). Additional DML cases in view.test.
-
- 21 Nov, 2019 2 commits
-
-
Eugene Kosov authored
Replace all io_context* occurrences with io_context_t. Even in release mode, die immediately when some io_* functions return EINVAL: this always means a programming bug and it's better to fail fast. LinuxAIOHandler::resubmit(): fix the condition; stop ignoring the -1 return code, which corresponds to EPERM and which io_submit() really can return. Use io_destroy() to stop leaking io_context_t. Make m_aio_ctx a std::vector instead of a C array; I think the internal check for index overflow might be useful. Add debug assertions for EFAULT because, to me, receiving it looks like a programming bug.
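A minimal sketch of owning an io_context_t with RAII so that io_destroy() is always called, in the spirit of this fix (the AioContext wrapper is hypothetical, not the server's class; it only relies on the libaio io_setup()/io_destroy() calls and links with -laio):

    #include <libaio.h>
    #include <stdexcept>

    // Owns a kernel AIO context; io_destroy() runs even on early returns or exceptions.
    class AioContext
    {
      io_context_t m_ctx{};
    public:
      explicit AioContext(int max_events)
      {
        if (io_setup(max_events, &m_ctx) != 0)
          throw std::runtime_error("io_setup() failed");
      }
      ~AioContext() { io_destroy(m_ctx); }
      AioContext(const AioContext &) = delete;
      AioContext &operator=(const AioContext &) = delete;
      io_context_t get() const { return m_ctx; }
    };

    int main()
    {
      AioContext ctx(256);   // room for 256 in-flight requests
      (void) ctx.get();      // would be passed to io_submit()/io_getevents()
      return 0;
    }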
-
Eugene Kosov authored
row_log_table_get_pk_col(): read instant field value from instant alter table when it's required.
-
- 20 Nov, 2019 4 commits
-
-
Eugene Kosov authored
MDEV-20832 Don't print "row size too large" warnings in the error log if innodb_strict_mode=OFF and log_warnings<=2
create_table_info_t::row_size_is_acceptable(): add a condition for log writing.
-
Vlad Lesin authored
-
Marko Mäkelä authored
For ROW_FORMAT=REDUNDANT, we must reserve fixed-length dummy values for the CHAR columns in the metadata record. This is because in MariaDB Server 10.4, btr_cur_instant_init_low() will rely on dict_index_t::trx_id_offset being accurate for the metadata record.
-
Marko Mäkelä authored
In MariaDB Server 10.4, btr_cur_instant_init_low() assumes that all PRIMARY KEY columns that are internally variable-length will be encoded in 0 bytes in the metadata record. Sometimes, CHAR columns can be encoded as variable-length. We should not unnecessarily reserve space for a dummy string value in the metadata record.
-
- 19 Nov, 2019 2 commits
-
-
Alexey Botchkov authored
Do not fail if all the partitions were pruned out.
-
Vlad Lesin authored
The fix consists of three commits backported from 10.3:
1) Cleanup isnan() portability checks (cherry picked from commit 7ffd7fe9)
2) Cleanup isinf() portability checks. Original problem reported by Wlad: re-compilation of 10.3 on top of a 10.2 build would cache undefined HAVE_ISINF from 10.2, whereas it is expected to be 1 in 10.3. std::isinf() seems to be available on all supported platforms. (cherry picked from commit bc469a0b)
3) Use std::isfinite in C++ code. This is an addition to the parent revision, fixing build failures. (cherry picked from commit 54999f4e)
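For reference, a small self-contained example (not taken from the server sources) of the standard <cmath> classification functions that these cleanups standardize on:

    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main()
    {
      double values[] = {1.0,
                         std::numeric_limits<double>::infinity(),
                         std::numeric_limits<double>::quiet_NaN()};
      for (double v : values)
        std::printf("%g: nan=%d inf=%d finite=%d\n",
                    v, std::isnan(v), std::isinf(v), std::isfinite(v));
      return 0;
    }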
-
- 18 Nov, 2019 1 commit
-
-
Marko Mäkelä authored
DropIndex, CreateIndex: Remove. The file row0trunc.cc only exists in MariaDB Server 10.3 so that the crash recovery of TRUNCATE TABLE operations from older 10.2 and 10.3 servers will work. This dead code was used for implementing the MySQL 5.7 WL#6501 TRUNCATE TABLE, which was replaced with a backup-safe implementation in MDEV-13564.
-