- 11 Jan, 2018 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The warning was originally added in commit c6766305 (MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that was analyzed in https://lists.mysql.com/mysql/176250 on November 9, 2004. Originally, the limit was 20,000 undo log headers or transactions, but in commit 9d6d1902 in MySQL 5.5.11 it was increased to 2,000,000.

The message can be triggered when the progress of purge is blocked by a long-running transaction (or just an idle transaction whose read view was started a long time ago): run many transactions that UPDATE or DELETE some records, then start another transaction with a read view, and then execute more than 2,000,000 transactions that UPDATE or DELETE records in InnoDB tables. Finally, when the oldest long-running transaction completes, purge would run up to the next-oldest transaction, and there would still be more than 2,000,000 transactions to purge.

Because the message can be triggered when the database is obviously not corrupted, it should be removed. Heavy users of InnoDB should be monitoring the "History list length" in SHOW ENGINE INNODB STATUS; there is no need to spam the error log.
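As an illustration of the monitoring this message recommends, the purge backlog can also be queried directly. This is a hedged sketch: trx_rseg_history_len is the stock INNODB_METRICS counter name and is believed to be enabled by default, but verify it on your server version.

    -- The "History list length" from SHOW ENGINE INNODB STATUS,
    -- available as a queryable counter:
    SELECT NAME, COUNT
      FROM information_schema.INNODB_METRICS
     WHERE NAME = 'trx_rseg_history_len';

A steadily growing value usually points at a long-running (or idle) transaction holding back purge, exactly the situation described above.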
-
- 10 Jan, 2018 3 commits
-
-
Oleksandr Byelkin authored
Roll back to the most general duplicate-removal strategy in case of different strategies for one position.
-
Marko Mäkelä authored
Backport the fix from 10.0.33 to 5.5, in case someone compiles XtraDB with -DUNIV_LOG_ARCHIVE
-
Marko Mäkelä authored
The XtraDB option innodb_track_changed_pages causes the function log_group_read_log_seg() to be invoked even when recv_sys==NULL, leading to a SIGSEGV. This regression was caused by MDEV-11027 (InnoDB log recovery is too noisy).
-
- 09 Jan, 2018 1 commit
-
-
Jan Lindström authored
innodb/buf_LRU_get_free_block: Add debug instrumentation to produce an error message about no free pages. Print the error message only once and do not enable the InnoDB monitor.

xtradb/buf_LRU_get_free_block: Add the same debug instrumentation, and remove code that does not seem to be used.

innodb-lru-force-no-free-page.test: New test case to force the desired error message.
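A sketch of how such debug instrumentation is typically driven from an mtr test (requires a debug build; the injection label here is an assumption inferred from the test name, not confirmed from the source):

    SET @saved_dbug = @@GLOBAL.debug_dbug;
    -- hypothetical label matching the test name:
    SET GLOBAL debug_dbug = '+d,ib_lru_force_no_free_page';
    -- ... run a statement that needs a free buffer pool page and
    --     check the error log for the expected message ...
    SET GLOBAL debug_dbug = @saved_dbug;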
-
- 08 Jan, 2018 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_foreign_find_index(): Ignore incompletely created indexes. After a failed ADD UNIQUE INDEX, an incompletely created index could be left behind until the next ALTER TABLE statement.
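A minimal repro sketch of the scenario (hypothetical table; the duplicate rows make the ADD UNIQUE INDEX fail):

    CREATE TABLE t (a INT NOT NULL, KEY(a)) ENGINE=InnoDB;
    INSERT INTO t VALUES (1), (1);
    ALTER TABLE t ADD UNIQUE INDEX(a);  -- fails: duplicate entry '1'
    -- Until the next ALTER TABLE, an incompletely created index may
    -- linger in the data dictionary; dict_foreign_find_index() must
    -- ignore it when resolving indexes for FOREIGN KEY constraints.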
-
Marko Mäkelä authored
This bug affects both writing and reading of the encrypted redo log in MariaDB 10.1, starting from version 10.1.3, which added support for innodb_encrypt_log. That is, InnoDB crash recovery and Mariabackup will sometimes fail when innodb_encrypt_log is used. MariaDB 10.2 and Mariabackup 10.2 and later versions are not affected.

log_block_get_start_lsn(): Remove. This function would cause trouble if a log segment that is being read crosses a 32-bit boundary of the LSN, because it does not allow the most significant 32 bits of the LSN to change.

log_blocks_crypt(), log_encrypt_before_write(), log_decrypt_after_read(): Add the parameter "lsn" for the start LSN of the block.

log_blocks_encrypt(): Remove (unused function).
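To make the boundary condition concrete, here is a small worked illustration (the LSN values are made up; 512 bytes is the redo log block size):

    -- A 512-byte read starting 256 bytes below LSN 2^32 ends 256 bytes
    -- above it, so the most significant 32 bits of the LSN change in
    -- the middle of the segment being decrypted:
    SELECT (4294967296 - 256) >> 32 AS ms32_at_start,  -- 0
           (4294967296 + 256) >> 32 AS ms32_at_end;    -- 1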
-
- 05 Jan, 2018 2 commits
-
-
Vladislav Vaintroub authored
2cd31691 broke conf_to_src, because the strings library is now dependent on mysys (my_alloc etc. are now used directly in the string library). Fix by adding the appropriate dependency. Also exclude conf_to_src from VS IDE builds; EXCLUDE_FROM_ALL is not enough for that.
-
Aleksey Midenkov authored
debug_key_management, encrypt_and_grep, innodb_encryption: if the real table count differs from what the test expects, the test just hangs, waiting for the hardcoded number to be reached, and then exits with **failed** after a 10-minute wait: quite unfriendly and hard to figure out what's going on.
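The waits in these tests are along the following lines (a hedged sketch; the exact conditions in the .test files may differ):

    -- Poll until the number of encrypted tablespaces reaches an expected
    -- figure; if the real table count differs from the hardcoded
    -- expectation, this condition never becomes true and the test hangs:
    SELECT COUNT(*) AS encrypted
      FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
     WHERE MIN_KEY_VERSION <> 0;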
-
- 04 Jan, 2018 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 03 Jan, 2018 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 02 Jan, 2018 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a column prefix of an externally stored column, the already parsed part of the undo log record may contain a reference to an off-page column. This is the case in the bug58912 test in innodb.innodb.
-
Marko Mäkelä authored
This is a regression caused by MDEV-14051 'Undo log record is too big.' Purge in the secondary index is wrongly skipped in row_purge_upd_exist_or_extern() because node->row does not contain all indexed columns.

trx_undo_rec_get_partial_row(): Add the parameter for node->update, so that the updated columns will be copied from the initial part of the undo log record.
-
- 28 Dec, 2017 2 commits
-
-
Ian Gilfillan authored
-
Sergei Golubchik authored
cherry-pick e6ce97a5
-
- 27 Dec, 2017 3 commits
-
-
Sergei Golubchik authored
* don't use the Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
-
Vicențiu Ciorbaru authored
-
Oleksandr Byelkin authored
If a translation table is present when we materialize the derived table, change it to point to the materialized table. Added debug info to see what really happens with which derived table.
-
- 25 Dec, 2017 4 commits
-
-
Sergei Golubchik authored
MDEV-14026 ALTER TABLE ... DELAY_KEY_WRITE=1 creates table copy for partitioned MyISAM table with DATA DIRECTORY/INDEX DIRECTORY options

Set data_file_name and index_file_name in HA_CREATE_INFO before calling check_if_incompatible_data().
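A minimal repro sketch for the fixed scenario (table definition and paths are hypothetical; DATA DIRECTORY/INDEX DIRECTORY must point at directories the server may use):

    CREATE TABLE t1 (a INT) ENGINE=MyISAM
      DATA DIRECTORY='/tmp/t1data' INDEX DIRECTORY='/tmp/t1idx'
      PARTITION BY HASH (a) PARTITIONS 2;
    -- Before the fix this rebuilt the table as a full copy; with the file
    -- names passed in HA_CREATE_INFO, check_if_incompatible_data() can
    -- recognize the options as unchanged and alter in place:
    ALTER TABLE t1 DELAY_KEY_WRITE=1;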
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call; do it once, in open(), because they don't change, and on TABLE::mem_root, so that they stay valid until the table is closed.
-
Sachin Setiya authored
Problem: GTIDs are not transferred in Galera Cluster.

Solution: We need to transfer the GTID when the cluster is the slave/master in async replication. In normal GTID replication, the GTID is generated on the receiving node itself, and it is always in sync with the other nodes: because Galera keeps the nodes in sync, all nodes get the same number of event groups. The issue arises when, say, Galera is a slave in async replication:

A
| (async replication)
D <-> E <-> F {Galera replication}

All nodes should apply the master's GTID, but this does not happen, because nodes E and F do not receive the GTID from D in the write set. So what E (or F) does is apply wsrep_gtid_domain_id, D's server-id, and E's next GTID sequence number. This generated GTID does not always work, for example when A has a different domain id. So in this commit, when a Galera node sees that an event was received from the master, we simply write a Gtid_Log_Event into the write set and send it to the other nodes.
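For context, such a topology is configured along these lines (hostnames and values are placeholders; whether the wsrep GTID variables can be set at runtime depends on the version):

    -- On D, the Galera node acting as async slave of A:
    SET GLOBAL wsrep_gtid_domain_id = 10;  -- placeholder domain id
    CHANGE MASTER TO
      MASTER_HOST = 'A.example.com',
      MASTER_USE_GTID = slave_pos;
    START SLAVE;
    -- With this fix, E and F receive A's original GTID in the write set
    -- instead of synthesizing one from wsrep_gtid_domain_id and a local
    -- sequence number.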
-
- 22 Dec, 2017 2 commits
-
-
Daniel Bartholomew authored
-
Sergey Vojtovich authored
Coverage for temporary table modifications in read-only transactions. Introduced in 5.7 by 325cdf426.
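The behaviour being covered is roughly the following (table name hypothetical):

    CREATE TEMPORARY TABLE tmp_t (a INT) ENGINE=InnoDB;
    START TRANSACTION READ ONLY;
    -- Temporary tables are session-private, so modifying them is
    -- permitted even inside a read-only transaction:
    INSERT INTO tmp_t VALUES (1);
    UPDATE tmp_t SET a = 2;
    COMMIT;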
-
- 21 Dec, 2017 6 commits
-
-
Vicențiu Ciorbaru authored
A suggestion from serg@mariadb.org to make role propagation simpler. Instead of gathering the leaf roles in an array, which for very wide graphs could potentially mean a big part of the whole roles schema, keep the previous logic. When finally merging a role, set its counter to something positive. This effectively marks the role as merged, so a random pass through the roles hash that touches a previously merged role won't cause the problem described in MDEV-12366 any more, as propagate_role_grants_action will stop attempting to merge from that role.
-
Marko Mäkelä authored
Sometimes, the test would fail with a result difference for the READ UNCOMMITTED read, because the incremental backup would finish before the redo log was written for all the rows that were inserted in the second batch. To fix that, cause a redo log write by creating another transaction. The transaction rollback (which internally does a commit) will be flushed to the redo log, and before that, all the preceding changes will be flushed to the redo log as well.
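The mechanism amounts to something like this (table name hypothetical):

    -- Any durable transaction end forces the redo log up to that point
    -- to disk; a rollback (internally a commit) works just as well:
    BEGIN;
    INSERT INTO t1 VALUES (0);
    ROLLBACK;
    -- The rows inserted by the earlier batch are now guaranteed to be
    -- in the redo log, so the incremental backup will include them.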
-
Marko Mäkelä authored
-
Marko Mäkelä authored
row_log_table_apply_insert_low(), row_log_table_apply_update(): When reporting the error_key_num, only count the clustered index if it corresponds to a key in the SQL layer. The assertion failure was probably introduced by the (incomplete) MySQL 5.6.28 fix for Bug #21364096 'THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL WITH INCORRECT KEY NAME', which we are improving. Side note: the fix was incorrectly merged to MySQL 5.7.10; incorrect key names will continue to be reported in MySQL 5.7.
-
Marko Mäkelä authored
These assertions were disabled in MariaDB 10.1.1 in commit df4dd593 with a bogus comment referring to the function wsrep_fake_trx_id() that was introduced in the very same commit.
-
Elena Stepanova authored
-
- 20 Dec, 2017 1 commit
-
-
Sachin Setiya authored
Problem:

The command was:

    find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+

which was supposed to work as:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes, don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)

Now, -exec ... \+ works like this: every newly found path is appended to the end of the command line. When the accumulated command line length reaches `getconf ARG_MAX` (~2MB on typical Linux), the command is executed and find continues, appending to a new command line. What happens here is that find appends some directory to the command line, then dives into it and starts appending files from that directory. At some point the command line overflows, rm -rf gets executed and removes the whole directory. find then tries to continue scanning a directory that was already removed.

Fix: don't dive into directories that will be recursively removed anyway; use -prune for them. Basically, we should be pruning both paths that have matched $cpat and paths that have not matched it. This is achieved by pruning unconditionally, before the regex is tested:

    find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+

Patch credit: Serg
-