- 22 Jan, 2018 1 commit
-
-
Marko Mäkelä authored
InnoDB is issuing a 'noise' message that is not a sign of abnormal operation. Its only issuers are the debug function lock_rec_block_validate() and the change buffer merge. While the error should ideally never occur in transactional locking, we happen to know that DISCARD TABLESPACE, TRUNCATE TABLE, and possibly DROP TABLE break InnoDB table locks. When it comes to the change buffer merge, the message is simply useless noise: we know perfectly well that a tablespace can be dropped while a change buffer merge is pending, and the code is prepared to handle that, as demonstrated by the fact that InnoDB never crashed when the message was issued. fil_inc_pending_ops(): Remove the parameter print_err.
-
- 18 Jan, 2018 1 commit
-
-
Marko Mäkelä authored
-
- 11 Jan, 2018 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The warning was originally added in commit c6766305 (MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that was analyzed in https://lists.mysql.com/mysql/176250 on November 9, 2004. Originally, the limit was 20,000 undo log headers or transactions, but in commit 9d6d1902 in MySQL 5.5.11 it was increased to 2,000,000. The message can be triggered when the progress of purge is blocked by a long-running transaction (or simply an idle transaction whose read view was started long ago): run many transactions that UPDATE or DELETE some records, start another transaction with a read view, and then execute more than 2,000,000 further transactions that UPDATE or DELETE records in InnoDB tables. When the oldest long-running transaction finally completes, purge can only run up to the next-oldest transaction, and there will still be more than 2,000,000 transactions to purge. Because the message can be triggered when the database is obviously not corrupted, it should be removed. Heavy users of InnoDB should monitor the "History list length" in SHOW ENGINE INNODB STATUS; there is no need to spam the error log.
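A minimal, self-contained sketch of the kind of threshold check being removed; trx_sys_t, history_list_length and check_purge_lag are simplified placeholders for this illustration, not the actual InnoDB identifiers:

```cpp
// Illustrative only: the removed behaviour was, approximately, to warn once the
// backlog of committed transactions awaiting purge exceeded a fixed threshold.
#include <cstdint>
#include <iostream>

struct trx_sys_t {
  uint64_t history_list_length;  // undo log headers not yet purged
};

void check_purge_lag(const trx_sys_t& trx_sys) {
  const uint64_t purge_lag_threshold = 2000000;
  if (trx_sys.history_list_length > purge_lag_threshold) {
    // This message was pure noise: a long history list is already visible in
    // SHOW ENGINE INNODB STATUS and does not imply corruption.
    std::cerr << "InnoDB: Warning: purge lag of "
              << trx_sys.history_list_length << " transactions\n";
  }
}

int main() {
  trx_sys_t trx_sys{2500000};
  check_purge_lag(trx_sys);  // prints the (now removed) warning
}
```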
-
- 10 Jan, 2018 3 commits
-
-
Oleksandr Byelkin authored
Roll back to the most general duplicate-removing strategy when different strategies are chosen for one position.
-
Marko Mäkelä authored
Backport the fix from 10.0.33 to 5.5, in case someone compiles XtraDB with -DUNIV_LOG_ARCHIVE
-
Marko Mäkelä authored
The XtraDB option innodb_track_changed_pages causes the function log_group_read_log_seg() to be invoked even when recv_sys==NULL, leading to a SIGSEGV. This regression was caused by MDEV-11027 (InnoDB log recovery is too noisy).
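The fix amounts to not running recovery-only code when the recovery subsystem is absent. A hedged sketch of that guard, with recv_sys_t, recv_sys and read_log_segment() as simplified stand-ins rather than the real XtraDB declarations:

```cpp
// Sketch: guard a code path that dereferences a global recovery object which
// is only allocated during crash recovery.
#include <iostream>

struct recv_sys_t {
  bool apply_log_recs;
};

recv_sys_t* recv_sys = nullptr;  // NULL outside of crash recovery

void read_log_segment() {
  // Without this check, a feature (such as change-page tracking) that calls
  // into log reading outside of recovery dereferences a null pointer.
  if (recv_sys == nullptr) {
    std::cout << "not in recovery: skip recovery-specific bookkeeping\n";
    return;
  }
  if (recv_sys->apply_log_recs) {
    std::cout << "applying buffered log records\n";
  }
}

int main() {
  read_log_segment();  // safe even though recv_sys == nullptr
}
```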
-
- 08 Jan, 2018 1 commit
-
-
Marko Mäkelä authored
dict_foreign_find_index(): Ignore incompletely created indexes. After a failed ADD UNIQUE INDEX, an incompletely created index could be left behind until the next ALTER TABLE statement.
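A toy illustration of the rule, using a stripped-down dict_index_t stand-in; the committed flag and find_index() are assumptions made for this sketch, not the server's definitions:

```cpp
// Sketch: when searching for an index that can back a FOREIGN KEY constraint,
// skip indexes whose creation did not complete.
#include <string>
#include <vector>
#include <iostream>

struct dict_index_t {
  std::string name;
  bool committed;                    // false for residue of a failed ADD INDEX
  std::vector<std::string> columns;
};

const dict_index_t* find_index(const std::vector<dict_index_t>& indexes,
                               const std::vector<std::string>& cols) {
  for (const dict_index_t& index : indexes) {
    if (!index.committed) continue;  // ignore incompletely created indexes
    if (index.columns == cols) return &index;
  }
  return nullptr;
}

int main() {
  std::vector<dict_index_t> indexes = {
      {"broken_uniq", false, {"a"}},  // left behind by a failed ADD UNIQUE INDEX
      {"idx_a", true, {"a"}},
  };
  const dict_index_t* idx = find_index(indexes, {"a"});
  std::cout << (idx ? idx->name : "none") << "\n";  // prints "idx_a"
}
```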
-
- 04 Jan, 2018 1 commit
-
-
Marko Mäkelä authored
-
- 03 Jan, 2018 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 02 Jan, 2018 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a column prefix of an externally stored column, the already parsed part of the undo log record may contain a reference to an off-page column. This is the case in the bug58912 test in innodb.innodb.
-
Marko Mäkelä authored
This is a regression caused by MDEV-14051 'Undo log record is too big.' Purge in the secondary index is wrongly skipped in row_purge_upd_exist_or_extern() because node->row does not contain all indexed columns. trx_undo_rec_get_partial_row(): Add a parameter for node->update so that the updated columns will be copied from the initial part of the undo log record.
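A rough sketch of the idea of merging the parsed update vector into the partial row, using placeholder map-based types instead of the real undo-record and update-vector structures; treat it as an illustration of the principle only:

```cpp
// Sketch: the partial row used by purge is built both from the columns stored
// in the undo record and from the already-parsed update vector, so that
// secondary-index purge sees every indexed column.
#include <map>
#include <string>
#include <iostream>

using row_t = std::map<std::string, std::string>;     // column name -> value
using update_t = std::map<std::string, std::string>;  // updated columns

row_t build_partial_row(const row_t& stored_cols, const update_t& update) {
  row_t row = stored_cols;
  // Previously only stored_cols was used; indexed columns present only in the
  // update vector were missing, so secondary-index purge was wrongly skipped.
  for (const auto& [col, value] : update) row[col] = value;
  return row;
}

int main() {
  row_t stored = {{"pk", "1"}};
  update_t upd = {{"indexed_col", "old_value"}};
  row_t partial = build_partial_row(stored, upd);
  std::cout << partial.size() << " columns\n";  // 2: purge now sees indexed_col
}
```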
-
- 28 Dec, 2017 1 commit
-
-
Ian Gilfillan authored
-
- 27 Dec, 2017 2 commits
-
-
Sergei Golubchik authored
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
  ** $file in the error message was unset
  ** $file in the other error message was unset too :)
  ** source file arguments are conventionally upper-cased
  ** abort the test (die) on error, don't just echo/exit
-
Oleksandr Byelkin authored
If a translation table is present when we materialize the derived table, change it to point to the materialized table. Added debug info to see what really happens with which derived table.
-
- 22 Dec, 2017 1 commit
-
-
Sergey Vojtovich authored
Coverage for temporary table modifications in read-only transactions. Introduced in 5.7 by 325cdf426.
-
- 21 Dec, 2017 2 commits
-
-
Vicențiu Ciorbaru authored
A suggestion from serg@mariadb.org to make role propagation simpler. Instead of gathering the leaf roles in an array, which for very wide graphs could mean a large part of the whole roles schema, keep the previous logic. When a role is finally merged, set its counter to something positive. This effectively marks the role as merged, so a later pass through the roles hash that touches a previously merged role no longer causes the problem described in MDEV-12366, because propagate_role_grants_action stops attempting to merge from that role.
-
Marko Mäkelä authored
row_log_table_apply_insert_low(), row_log_table_apply_update(): When reporting the error_key_num, only count the clustered index if it corresponds to a key in the SQL layer. The assertion failure was probably introduced by the (incomplete) MySQL 5.6.28 fix for Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL WITH INCORRECT KEY NAME, which we are improving here. Side note: that fix was incorrectly merged to MySQL 5.7.10; incorrect key names will continue to be reported in MySQL 5.7.
-
- 20 Dec, 2017 4 commits
-
-
Vicențiu Ciorbaru authored
-
Varun Gupta authored
In the function make_sortkey a tmp buffer was defined, and in the absence of param->tmp_buffer the sort_keys buffer was used as the tmp buffer. The sort_keys buffer has its length defined in sort_field->length, while the length of param->tmp_buffer is stored in param->rec_length. Make sure to use the appropriate length based on which buffer we are using, otherwise we will overflow. Also added a cast to size_t in the calculation of the sort keys buffer size, to avoid an overflow if the buffer size exceeds 32 bits.
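A condensed sketch of both fixes, using invented SortParam/SortField stand-ins rather than the server's Sort_param: pick the length that matches the buffer actually being written to, and widen to size_t before multiplying so the total size cannot wrap in 32-bit arithmetic.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>

struct SortParam {
  char*    tmp_buffer = nullptr;  // may be absent
  uint32_t rec_length = 0;        // length of tmp_buffer when it exists
};

struct SortField {
  uint32_t length;                // length of one entry in the sort_keys buffer
};

// Choose the writable length for the buffer that will actually be used.
uint32_t usable_length(const SortParam& param, const SortField& field) {
  return param.tmp_buffer ? param.rec_length : field.length;
}

// Compute the sort_keys buffer size without 32-bit overflow.
size_t sort_keys_buffer_size(uint32_t num_keys, uint32_t key_length) {
  return static_cast<size_t>(num_keys) * key_length;  // widen before multiply
}

int main() {
  SortParam param;              // no tmp_buffer: fall back to sort_keys length
  SortField field{64};
  std::cout << usable_length(param, field) << "\n";           // 64
  std::cout << sort_keys_buffer_size(100000, 65536) << "\n";  // no wraparound
}
```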
-
Alexander Barkov authored
An after-fix for MDEV-14008 (Assertion failing: `!is_set() || (m_status == DA_OK_BULK && is_bulk_op())`). Fixes an additional failure discovered after a merge to 10.2.
-
Marko Mäkelä authored
The comment became stale in commit 9f57e595 which removed the parameter "flags".
-
- 19 Dec, 2017 3 commits
-
-
Simon J Mudd authored
-
Vicențiu Ciorbaru authored
Whenever we call merge_role_privileges on a role, we make use of the role->counter variable to check if all its children have had their privileges merged. Only if all children have had their privileges merged do we update the privileges on the parent. This is done to prevent extra work. The same idea is employed during flush privileges. You only begin merging from "leaf" roles. The recursive calls will merge their parents at some point. A problem arises when we try to "re-merge" a parent. Take the following graph:
{noformat}
A (0) ---- C (2) ---- D (2) ---- USER
         /          /
B (0) ----/        /
                  /
E (0) --------------/
{noformat}
In parentheses we have the "counter" value right before we start to iterate through the roles hash and propagate values. It represents the number of roles granted to the current role. The order in which we iterate through the roles hash is alphabetical.
* First we merge A, which leads to decreasing the counter for C to 1. Since C is not 0, we don't proceed with merging into C.
* Second we merge B, which leads to decreasing the counter for C to 0. Now we proceed with merging into C. As part of C's merge process, the counter for D is reduced to 1.
* Third, as we iterate through the hash, we see that C has counter 0, thus we start the merge process *again*. This reduces the counter for D to 0! We then attempt to merge D.
* Fourth we start merging E. When E sees D as its parent, it (according to the code) attempts to reduce D's counter, which wraps around. Now D's counter is a very large number, thus E's privileges are not forwarded to D.
To correct this behavior we must make sure to only start merging from initial leaf nodes.
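A compact, self-contained sketch of the counter guard described here and in the related 21 Dec commit; the Role struct and merge_role() are simplified stand-ins, not the ACL code. Merging starts only at roles whose counter is zero at the start of the pass, and a merged role keeps a positive counter so a later pass over the hash cannot merge it a second time.

```cpp
#include <map>
#include <string>
#include <vector>
#include <iostream>

struct Role {
  std::vector<std::string> parents;  // roles this role is granted to
  int counter = 0;                   // children whose privileges are not yet merged
};

void merge_role(std::map<std::string, Role>& roles, const std::string& name) {
  Role& role = roles[name];
  if (role.counter != 0) return;  // children still pending, or already merged
  std::cout << "merged " << name << "\n";
  role.counter = 1;               // positive counter marks the role as merged
  for (const std::string& parent : role.parents) {
    if (--roles[parent].counter == 0) merge_role(roles, parent);
  }
}

int main() {
  // The graph from the commit message: A and B are granted to C, C and E to D.
  std::map<std::string, Role> roles = {
      {"A", {{"C"}, 0}}, {"B", {{"C"}, 0}}, {"E", {{"D"}, 0}},
      {"C", {{"D"}, 2}}, {"D", {{}, 2}},
  };
  // Iterate the hash; only initial leaves start a merge. When the loop later
  // reaches C, its counter is already positive, so it is not merged again and
  // D's counter can no longer wrap around.
  for (auto& [name, role] : roles)
    if (role.counter == 0) merge_role(roles, name);
}
```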
-
Vicențiu Ciorbaru authored
When granting a role to another role, DB privileges get propagated. If the grantee had no previous DB privileges, an extra ACL_DB entry is created to house those "indirectly received" privileges. If, afterwards, DB privileges are granted to the grantee directly, we must make sure to not create a duplicate ACL_DB entry.
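A small sketch of the find-or-update rule, assuming a simplified AclDb record and privilege bitmask (both invented for illustration, not the server's ACL structures):

```cpp
// Sketch: when granting database privileges, merge into an existing entry for
// (grantee, db) if role propagation already created one, instead of appending
// a duplicate entry.
#include <cstdint>
#include <string>
#include <vector>
#include <iostream>

struct AclDb {
  std::string grantee;     // user or role name
  std::string db;
  uint32_t    privileges;  // bitmask
};

void grant_db_privileges(std::vector<AclDb>& acl_dbs, const std::string& grantee,
                         const std::string& db, uint32_t privs) {
  for (AclDb& entry : acl_dbs) {
    if (entry.grantee == grantee && entry.db == db) {
      entry.privileges |= privs;              // merge into the existing entry
      return;
    }
  }
  acl_dbs.push_back({grantee, db, privs});    // first grant: create the entry
}

int main() {
  std::vector<AclDb> acl_dbs;
  grant_db_privileges(acl_dbs, "role_r", "test", 0x1);  // via role propagation
  grant_db_privileges(acl_dbs, "role_r", "test", 0x2);  // direct GRANT later
  std::cout << acl_dbs.size() << " entry\n";            // still 1 entry
}
```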
-
- 18 Dec, 2017 3 commits
-
-
Marko Mäkelä authored
The InnoDB background DROP TABLE queue is something that we should really remove, but cannot until we remove dict_operation_lock so that DDL and DML operations can be combined in a single transaction. Because the queue is not persistent, it is not crash-safe. In stable versions of MariaDB, we can only try harder to drop all enqueued tables before server shutdown. row_mysql_drop_t::table_id: Replaces table_name. row_drop_tables_for_mysql_in_background(): Do not remove the entry from the list as long as the table exists. In this way, the table should eventually be dropped.
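A simplified sketch of the retry behaviour, with DropRequest and process_drop_queue() as invented stand-ins for the real queue element and background loop:

```cpp
// Sketch: keep a queued table in the background-drop list until the drop
// actually succeeds, so the table is eventually dropped even if earlier
// attempts fail. The queue element holds a table id rather than a name.
#include <cstdint>
#include <functional>
#include <list>
#include <iostream>

struct DropRequest {
  uint64_t table_id;  // replaces the table name in the queue element
};

// One pass over the queue; drop_table returns true when the table is gone.
void process_drop_queue(std::list<DropRequest>& queue,
                        const std::function<bool(uint64_t)>& drop_table) {
  for (auto it = queue.begin(); it != queue.end(); ) {
    if (drop_table(it->table_id)) {
      it = queue.erase(it);  // remove only once the table no longer exists
    } else {
      ++it;                  // keep the entry and retry on the next pass
    }
  }
}

int main() {
  std::list<DropRequest> queue{{42}, {43}};
  int attempts = 0;
  auto drop_table = [&](uint64_t id) {
    return id == 43 || ++attempts > 1;      // table 42 is busy on the first pass
  };
  process_drop_queue(queue, drop_table);    // 43 dropped, 42 kept
  process_drop_queue(queue, drop_table);    // 42 dropped on retry
  std::cout << queue.size() << " entries left\n";  // 0
}
```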
-
Sergei Golubchik authored
MDEV-14641 Incompatible key or row definition between the MariaDB .frm file and the information in the storage engine. Make sure that mysql_create_frm_image() and fast_alter_partition_table() use the same code to derive HA_OPTION_PACK_RECORD from create_info->row_type.
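A sketch of the single-source-of-truth idea, with an invented RowType enum and a placeholder flag value; the server's actual derivation rule is more involved, this only shows both paths delegating to one shared helper so the .frm file and the engine cannot disagree:

```cpp
#include <cstdint>
#include <initializer_list>
#include <iostream>

enum class RowType { DEFAULT, FIXED, DYNAMIC };

constexpr uint32_t HA_OPTION_PACK_RECORD_FLAG = 1u << 0;  // illustrative bit

// One shared rule used by both code paths.
uint32_t derive_pack_record(RowType row_type) {
  return row_type == RowType::FIXED ? 0 : HA_OPTION_PACK_RECORD_FLAG;
}

uint32_t create_frm_image(RowType row_type) {       // CREATE TABLE path
  return derive_pack_record(row_type);
}

uint32_t fast_alter_partition(RowType row_type) {   // ALTER ... PARTITION path
  return derive_pack_record(row_type);              // same rule, no drift
}

int main() {
  for (RowType rt : {RowType::DEFAULT, RowType::FIXED, RowType::DYNAMIC})
    std::cout << (create_frm_image(rt) == fast_alter_partition(rt)) << "\n";  // 1 1 1
}
```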
-
Alexander Barkov authored
-
- 16 Dec, 2017 1 commit
-
-
Oleksandr Byelkin authored
Correctly print separator string in single quotes.
-
- 13 Dec, 2017 5 commits
-
-
Marko Mäkelä authored
trx_rollback_active(): When aborting the rollback, free the query graph.
-
Marko Mäkelä authored
MDEV-12323 Rollback progress log messages during crash recovery are intermixed with unrelated log messages. trx_roll_must_shutdown(): During the rollback of recovered transactions, report progress and check whether the rollback should be interrupted because of a pending shutdown. trx_roll_max_undo_no, trx_roll_progress_printed_pct: Remove, along with the messages that were interleaved with other messages.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
row_undo_step(), trx_rollback_active(): Abort the rollback of a recovered ordinary transaction if fast shutdown has been initiated. trx_rollback_resurrected(): Convert an aborted-rollback transaction into a fake XA PREPARE transaction, so that fast shutdown can proceed.
-
Marko Mäkelä authored
MDEV-13797 InnoDB may hang if shutdown is initiated soon after startup while rolling back recovered incomplete transactions. trx_rollback_resurrected(): If shutdown was initiated, fake all remaining active transactions to XA PREPARE state, so that shutdown can proceed. Also, make the parameter "all" an output that will be set to FALSE in this case. trx_rollback_or_clean_recovered(): Remove the shutdown check (it was moved to trx_rollback_resurrected()). trx_undo_free_prepared(): Relax assertions.
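A condensed sketch of the shutdown handling described in the last three commits, using simplified Trx/TrxState stand-ins rather than trx_t: the rollback loop polls for a pending shutdown and, once one is requested, parks the remaining recovered transactions in a PREPARED-like state so shutdown can proceed.

```cpp
#include <atomic>
#include <vector>
#include <iostream>

enum class TrxState { ACTIVE_RECOVERED, PREPARED, ROLLED_BACK };

std::atomic<bool> shutdown_initiated{false};

struct Trx {
  TrxState state = TrxState::ACTIVE_RECOVERED;
  int undo_records_left = 0;
};

// True if the rollback loop must stop because of a pending shutdown.
bool rollback_must_shutdown() { return shutdown_initiated.load(); }

void rollback_recovered(std::vector<Trx>& trxs) {
  for (Trx& trx : trxs) {
    while (trx.undo_records_left > 0) {
      if (rollback_must_shutdown()) {
        trx.state = TrxState::PREPARED;  // fake XA PREPARE: handle at next startup
        break;
      }
      --trx.undo_records_left;           // undo one record
    }
    if (trx.undo_records_left == 0)
      trx.state = TrxState::ROLLED_BACK; // rollback completed normally
  }
}

int main() {
  std::vector<Trx> trxs{{TrxState::ACTIVE_RECOVERED, 3},
                        {TrxState::ACTIVE_RECOVERED, 5}};
  shutdown_initiated = true;                                   // fast shutdown requested
  rollback_recovered(trxs);                                    // both parked as PREPARED
  std::cout << (trxs[0].state == TrxState::PREPARED) << "\n";  // 1
}
```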
-
- 08 Dec, 2017 1 commit
-
-
Alexander Barkov authored
-
- 06 Dec, 2017 1 commit
-
-
Vicențiu Ciorbaru authored
This reverts commit 7603463a. The commit itself is fine; however, when volatile is disabled, compiler optimizations mess up our double results due to precision differences. Revert the patch until a proper solution is found.
-
- 05 Dec, 2017 1 commit
-
-
Daniel Black authored
This was added in c7964159 but would hurt all other compilers because of Visual Studio. Hopefully this has been fixed now. Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
-
- 30 Nov, 2017 1 commit
-
-
Varun Gupta authored
For a BIT field, null_bit is not set to 0 even when the field is defined as NOT NULL. So now, in the function TABLE::create_key_part_by_field, if the BIT field is not nullable, null_bit is explicitly set to 0.
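A tiny sketch of the rule with invented Field/KeyPart stand-ins (the real TABLE::create_key_part_by_field works on the server's classes): a NOT NULL BIT field must yield a key part with null_bit == 0.

```cpp
#include <cstdint>
#include <iostream>

struct Field {
  bool    is_bit = false;
  bool    not_null = false;
  uint8_t null_bit = 0;  // for BIT, non-zero even when the field is NOT NULL
};

struct KeyPart {
  uint8_t null_bit = 0;
};

KeyPart create_key_part_by_field(const Field& field) {
  KeyPart part;
  part.null_bit = field.null_bit;
  if (field.is_bit && field.not_null)
    part.null_bit = 0;  // the fix: a NOT NULL BIT field is not nullable here
  return part;
}

int main() {
  Field bit_not_null{true, true, 4};
  std::cout << int(create_key_part_by_field(bit_not_null).null_bit) << "\n";  // 0
}
```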
-