- 08 Jul, 2014 1 commit
-
-
Kristian Nielsen authored
MDEV-5262, MDEV-5914, MDEV-5941, MDEV-6020: Deadlocks during parallel replication causing replication to fail. After-review changes. For this patch in 10.0, we do not introduce a new public storage engine API; we just fix the InnoDB/XtraDB issues. In 10.1, we will make a better public API that can be used for all storage engines (MDEV-6429). Eliminate the background thread that did deadlock kills asynchronously. Instead, we ensure that the InnoDB/XtraDB code can handle doing the kill from inside the deadlock detection code (when thd_report_wait_for() needs to kill a later thread to resolve a deadlock). (We preserve the part of the original patch that introduces a dedicated mutex and condition variable for the slave init thread, to remove the abuse of LOCK_thread_count for start/stop synchronisation of the slave init thread.)
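As a rough illustration of the decision described above, here is a minimal, self-contained sketch (not the server's actual code): when the lock system reports that one worker is about to wait on another and the blocker is ordered to commit later, the blocker is chosen as the deadlock victim. The types and field names (commit_sub_id, killed_for_retry) are illustrative stand-ins for the real THD/rpl_group_info machinery.

```cpp
// Simplified sketch of the "kill the later transaction" decision made
// when InnoDB reports that `waiter` is about to wait for `blocker`.
#include <cstdint>
#include <cstdio>

struct rpl_group_info {
  uint64_t commit_sub_id;   // position in the fixed commit order
  bool     killed_for_retry;
};

struct THD_stub {
  rpl_group_info *rgi;      // non-null only for parallel replication workers
};

// Called (conceptually) from inside deadlock detection.
static void report_wait_for(THD_stub *waiter, THD_stub *blocker)
{
  if (!waiter->rgi || !blocker->rgi)
    return;                                   // not both replication workers
  if (blocker->rgi->commit_sub_id <= waiter->rgi->commit_sub_id)
    return;                                   // blocker commits first: no ordering cycle
  // blocker must commit *after* waiter but holds a lock waiter needs:
  // guaranteed deadlock, so kill blocker so it rolls back and retries.
  blocker->rgi->killed_for_retry = true;
  std::printf("killing later transaction to resolve ordering deadlock\n");
}

int main()
{
  rpl_group_info r1{1, false}, r2{2, false};
  THD_stub t1{&r1}, t2{&r2};
  report_wait_for(&t1, &t2);                  // t2 becomes the deadlock victim
  return r2.killed_for_retry ? 0 : 1;
}
```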
-
- 04 Jul, 2014 1 commit
-
-
Kristian Nielsen authored
-
- 10 Jun, 2014 1 commit
-
-
unknown authored
replication causing replication to fail.

Remove the temporary fix for MDEV-5914, which used READ COMMITTED for parallel replication worker threads. Replace it with a better, more selective solution.

The issue is with certain edge cases of InnoDB gap locks, for example between INSERT and ranged DELETE. It is possible for the gap lock set by the DELETE to block the INSERT, if the DELETE runs first, while the record lock set by the INSERT does not block the DELETE, if the INSERT runs first. This can cause a conflict between the two in parallel replication on the slave even though they ran without conflicts on the master.

With this patch, InnoDB will ask the server layer about the two involved transactions before blocking on a gap lock. If the server layer tells InnoDB that the transactions are already fixed wrt. commit order, as they are in parallel replication, InnoDB will ignore the gap lock and allow the two transactions to proceed in parallel, avoiding the conflict.

Improve the fix for MDEV-6020. When InnoDB itself detects a deadlock, it now asks the server layer for any preferences about which transaction to roll back. In case of parallel replication with two transactions T1 and T2 fixed to commit T1 before T2, the server layer will ask InnoDB to roll back T2 as the deadlock victim, not T1. This helps in some cases to avoid excessive deadlock rollback, as T2 will in any case need to wait for T1 to complete before it can itself commit. (A sketch of these two checks follows at the end of this entry.)

Also some misc. fixes found during development and testing:

- Remove thd_rpl_is_parallel(); it is not used or needed.
- Use KILL_CONNECTION instead of KILL_QUERY when a parallel replication worker thread is killed to resolve a deadlock with fixed commit ordering. There are some cases, e.g. in sql/sql_parse.cc, where a KILL_QUERY can be ignored if the query otherwise completed successfully, and this could cause the deadlock kill to be lost, so that the deadlock was not correctly resolved.
- Fix random test failure due to missing wait_for_binlog_checkpoint.inc.
- Make sure that deadlock or other temporary errors during parallel replication are not printed to the error log; there were some places around the replication code with extra error logging. These conditions can occur occasionally and are handled automatically without breaking replication, so they should not pollute the error log.
- Fix handling of rgi->gtid_sub_id. We need to be able to access this also at the end of a transaction, to be able to detect and resolve deadlocks due to commit ordering. But this value was also used as a flag to mark whether record_gtid() had been called, by being set to zero, losing the value. Now introduce a separate flag rgi->gtid_pending, so rgi->gtid_sub_id remains valid for the entire duration of the transaction.
- Fix one place where the code to handle ignored errors called reset_killed() unconditionally, even if no error was caught that should be ignored. This could cause loss of a deadlock kill signal, breaking deadlock detection and resolution.
- Fix a couple of missing mysql_reset_thd_for_next_command() calls. This could cause a prior error condition to remain for the next event executed, causing assertions about errors already being set and possibly giving incorrect error handling for following event executions.
- Fix code that cleared thd->rgi_slave in the parallel replication worker threads after each event execution; this caused the deadlock detection and handling code to not be able to correctly process the associated transactions as belonging to replication worker threads.
- Remove useless error code in slave_background_kill_request().
- Fix bug where wfc->wakeup_error was not cleared at wait_for_commit::unregister_wait_for_prior_commit(). This could cause the error condition to wrongly propagate to a later wait_for_prior_commit(), causing spurious ER_PRIOR_COMMIT_FAILED errors.
- Do not put the binlog background thread into the processlist. It causes too many result differences in mtr, but also it probably is not useful for users to pollute the process list with a system thread that does not really perform any user-visible tasks...
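A hedged sketch of the two server-layer questions described in this entry: whether the two transactions are already fixed in commit order (so the gap-lock wait can be skipped), and which of the two to prefer as deadlock victim. The helper names are illustrative, not the actual InnoDB/server API.

```cpp
// Self-contained sketch of the two decisions; gap_lock_must_wait() and
// choose_deadlock_victim() are illustrative stand-ins.
#include <cstdint>

struct Trx {
  bool     is_parallel_repl_worker;
  uint64_t commit_sub_id;   // valid only for parallel replication workers
};

// Gap-lock case: if the server layer says the two transactions already
// have a fixed commit order, the gap-lock conflict cannot change the
// replicated result, so the wait can be skipped.
static bool gap_lock_must_wait(const Trx &requestor, const Trx &holder)
{
  // Simplification: the real check also requires the two workers to be
  // part of the same ordered commit group.
  bool order_fixed = requestor.is_parallel_repl_worker &&
                     holder.is_parallel_repl_worker;
  return !order_fixed;       // ordered transactions skip the gap-lock wait
}

// Deadlock case: prefer to roll back the transaction that must commit
// later anyway (T2), since it has to wait for T1 regardless.
static const Trx *choose_deadlock_victim(const Trx &t1, const Trx &t2)
{
  if (t1.is_parallel_repl_worker && t2.is_parallel_repl_worker)
    return (t1.commit_sub_id < t2.commit_sub_id) ? &t2 : &t1;
  return &t2;                // no preference: fall back to InnoDB's own choice
}

int main()
{
  Trx t1{true, 1}, t2{true, 2};
  return (!gap_lock_must_wait(t1, t2) && choose_deadlock_victim(t1, t2) == &t2)
             ? 0 : 1;
}
```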
-
- 03 Jun, 2014 1 commit
-
-
unknown authored
replication causing replication to fail.

In parallel replication, we run transactions from the master in parallel, but force them to commit in the same order they did on the master. If we force T1 to commit before T2, but T2 holds e.g. a row lock that is needed by T1, we get a deadlock when T2 waits until T1 has committed.

Usually, we do not run T1 and T2 in parallel if there is a chance that they can have conflicting locks like this, but there are certain edge cases where it can occasionally happen (e.g. MDEV-5914, MDEV-5941, MDEV-6020). The bug was that this would cause replication to hang, eventually getting a lock timeout and causing the slave to stop with an error.

With this patch, InnoDB will report back to the upper layer whenever a transaction T1 is about to do a lock wait on T2. If T1 and T2 are parallel replication transactions, and T2 needs to commit later than T1, we can thus detect the deadlock; we then kill T2, setting a flag that causes it to catch the kill and convert it to a deadlock error; this error will then cause T2 to roll back and release its locks (so that T1 can commit), and later T2 will be re-tried and eventually also committed.

The kill happens asynchronously in a slave background thread; this is necessary, as the reporting from InnoDB about lock waits happens deep inside the locking code, at a point where it is not possible to directly call THD::awake() due to mutexes held (a minimal sketch of this queue-and-kill-later pattern follows this entry). Deadlock is assumed to be (very) rarely occurring, so this patch tries to minimise the performance impact on the normal case where no deadlocks occur, rather than optimise the handling of the occasional deadlock.

Also fix transaction retry due to deadlock when it happens after a transaction has already signalled to later transactions that it started to commit. In this case we need to undo this signalling (and later redo it when we commit again during retry), so following transactions will not start too early.

Also add a missing thd->send_kill_message() that got triggered during testing (this corrects an incorrect fix for MySQL Bug#58933).
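A minimal sketch of the queue-and-kill-later pattern described above, assuming a simple mutex-protected queue drained by a background thread; std::thread and a callback stand in for the server's slave background thread and THD::awake().

```cpp
// Sketch only: queue the kill request while lock-system mutexes are held,
// perform the actual kill later from a background thread.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class BackgroundKiller {
public:
  // Safe to call with lock-system mutexes held: only queues the request.
  void request_kill(std::function<void()> kill_victim)
  {
    {
      std::lock_guard<std::mutex> lk(mtx_);
      pending_.push(std::move(kill_victim));
    }
    cond_.notify_one();
  }

  void run_until_empty()          // body of the background thread (simplified)
  {
    std::unique_lock<std::mutex> lk(mtx_);
    while (!pending_.empty())
    {
      auto kill = std::move(pending_.front());
      pending_.pop();
      lk.unlock();
      kill();                     // no InnoDB mutexes held here: safe to kill
      lk.lock();
    }
  }

private:
  std::mutex mtx_;
  std::condition_variable cond_;
  std::queue<std::function<void()>> pending_;
};

int main()
{
  BackgroundKiller bg;
  bool killed = false;
  bg.request_kill([&] { killed = true; });   // would be THD::awake(KILL_CONNECTION)
  std::thread worker([&] { bg.run_until_empty(); });
  worker.join();
  return killed ? 0 : 1;
}
```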
-
- 15 May, 2014 1 commit
-
-
unknown authored
Handle retry of event groups that span multiple relay log files.
- If retry reaches the end of one relay log file, move on to the next.
- Handle refcounting of relay log files, and avoid purging relay log files until all event groups that might still need them for transaction retry have completed (a small refcounting sketch follows).
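A small, self-contained sketch of the refcounting idea, assuming a per-file counter: each in-flight event group pins the relay log files it may need for a retry, and a file becomes eligible for purge only when its count drops to zero. The class and method names are illustrative.

```cpp
#include <cassert>
#include <map>
#include <string>

class RelayLogPins {
public:
  void pin(const std::string &file)   { ++refcount_[file]; }

  void unpin(const std::string &file)
  {
    auto it = refcount_.find(file);
    assert(it != refcount_.end() && it->second > 0);
    if (--it->second == 0)
      refcount_.erase(it);            // now eligible for purge
  }

  bool can_purge(const std::string &file) const
  {
    return refcount_.find(file) == refcount_.end();
  }

private:
  std::map<std::string, long> refcount_;
};

int main()
{
  RelayLogPins pins;
  pins.pin("relay.000001");                       // group may retry from this file
  bool before = pins.can_purge("relay.000001");   // false: still needed
  pins.unpin("relay.000001");                     // group committed
  bool after  = pins.can_purge("relay.000001");   // true: safe to purge
  return (!before && after) ? 0 : 1;
}
```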
-
- 13 May, 2014 1 commit
-
-
unknown authored
Implement that if the first retry fails, we can do another attempt. Add test cases for multi-retry that succeeds on the second attempt, and multi-retry that eventually fails due to exceeding slave_trans_retries.
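A hedged sketch of the retry policy these test cases exercise: re-execute the event group on a temporary error, give up once slave_trans_retries is exceeded. exec_event_group() and the result codes are stand-ins, not the server's functions.

```cpp
#include <cstdio>

enum ExecResult { EXEC_OK, EXEC_TEMP_ERROR, EXEC_FATAL_ERROR };

static ExecResult exec_event_group(int attempt)
{
  return attempt < 2 ? EXEC_TEMP_ERROR : EXEC_OK;   // succeeds on the 3rd try
}

static int run_with_retries(unsigned long slave_trans_retries)
{
  for (unsigned long attempt= 0; ; ++attempt)
  {
    ExecResult res= exec_event_group(static_cast<int>(attempt));
    if (res == EXEC_OK)
      return 0;
    if (res == EXEC_FATAL_ERROR || attempt >= slave_trans_retries)
    {
      std::fprintf(stderr, "giving up after %lu retries\n", attempt);
      return 1;                       // slave stops with an error
    }
    // roll back, rewind to the start of the event group, try again
  }
}

int main() { return run_with_retries(10); }
```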
-
- 08 May, 2014 1 commit
-
-
unknown authored
Start implementing that an event group can be re-tried in parallel replication if it fails with a temporary error (like a deadlock). The patch is very incomplete; just some very basic retry works. Stuff still missing (not a complete list):
- Handle moving to the next relay log file, if the event group to be retried spans multiple relay log files.
- Handle refcounting of relay log files, to ensure that we do not purge a relay log file and then later attempt to re-execute events out of it.
- Handle description_event_for_exec - we need to save this somehow for the possible retry - and use the correct one in case it differs between relay logs.
- Do another retry attempt in case the first retry also fails.
- Limit the max number of retries.
- Lots of testing will be needed for the various edge cases.
-
- 07 Jul, 2014 1 commit
-
-
Kristian Nielsen authored
Follow-up patch. The original patch added an extra argument to the rli->report() function, but a few calls were not adjusted accordingly. This patch updates the remaining calls as needed. In the files log_event_old.cc and rpl_record_old.cc, it just passes NULL, since these are only used for old event formats from ancient master servers, which would not have any GTID information to add to the error messages in any case.
-
- 04 Jul, 2014 2 commits
-
-
Jan Lindström authored
than with InnoDB plugin. Fix: os0file.h in XtraDB had OS_AIO_N_PENDING_IOS_PER_THREAD set to 256, while InnoDB has it set to 32. Changed XtraDB to also use 32.
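Rendered as a (simplified) header excerpt, the fix amounts to aligning XtraDB's constant with InnoDB's; the surrounding os0file.h content is omitted here.

```cpp
/* Simplified excerpt: XtraDB now matches InnoDB's value. */
#define OS_AIO_N_PENDING_IOS_PER_THREAD 32   /* was 256 in XtraDB's os0file.h */
```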
-
Jan Lindström authored
then can't ALTER TABLE any more. Fix for InnoDB storage engine.
-
- 03 Jul, 2014 1 commit
-
-
Jan Lindström authored
ALTER TABLE any more.
-
- 30 Jun, 2014 2 commits
-
-
Alexey Botchkov authored
Tests were merged. As the implementation is different, the 'internal debugging' part was not merged; only a stub for it was created.
-
Kristian Nielsen authored
These tests use search_pattern_in_file.inc to search the error log for expected output. However, search_pattern_in_file.inc by default searched only the first 50000 bytes, so if the error log grew too big the tests would fail. This patch extends search_pattern_in_file.inc with an option to specify how much of the file to search, and whether to search from the start of the file or from the end. Then the rpl.rpl_checksum and rpl.rpl_gtid_errorlog test cases are fixed to search the last 50000 bytes of the error log, which will work no matter how large prior tests have made it.
-
- 27 Jun, 2014 2 commits
-
-
Kristian Nielsen authored
MDEV-6386: Assertion `thd->transaction.stmt.is_empty() || thd->in_sub_stmt || (thd->state_flags & Open_tables_state::BACKUPS_AVAIL)' fails with parallel replication

The direct cause of the assertion was missing error handling in record_gtid(). If ha_commit_trans() fails for the statement commit, there was no code to catch the error and do ha_rollback_trans() in this case; this caused close_thread_tables() to assert.

Normally, this error case is not hit, but here it was triggered due to another bug: when a transaction T1 fails during parallel replication, the code would signal following transactions that they could start to run without properly marking the error condition. This caused subsequent transactions to incorrectly start replicating, only to get an error later during their own commit step. This was particularly serious if the subsequent transactions were DDL or MyISAM updates, which cannot be rolled back and would leave replication in an inconsistent state.

Fixed by 1) in case of error, only signalling following transactions to continue once the error has been properly marked, so those transactions know not to start; and 2) implementing proper error handling in record_gtid() for the case that the statement commit fails (a sketch of this follows below).
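A self-contained sketch of the record_gtid() error-handling shape described in point 2), with stub functions standing in for ha_commit_trans()/ha_rollback_trans()/close_thread_tables().

```cpp
#include <cstdio>

static bool ha_commit_stub(bool fail)   { return fail; }   // true = error
static void ha_rollback_stub()          { std::puts("rolled back"); }
static void close_thread_tables_stub()  { std::puts("tables closed"); }

static int record_gtid_sketch(bool commit_fails)
{
  int err= 0;
  // ... insert/update the row in mysql.gtid_slave_pos ...
  if (ha_commit_stub(commit_fails))
  {
    ha_rollback_stub();          // the missing step: keep the state consistent
    err= 1;
  }
  close_thread_tables_stub();    // now safe; the assertion no longer fires
  return err;
}

int main() { return record_gtid_sketch(false); }
```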
-
Sergei Golubchik authored
Use user's ip address when verifying privileges for SET ROLE (just like check_access() does)
-
- 25 Jun, 2014 2 commits
-
-
Kristian Nielsen authored
If replication breaks in GTID mode, it is not trivial to determine the GTID of the failing event group. This is a problem, as such a GTID is needed e.g. to explicitly set @@gtid_slave_pos to skip past that event group, or to compare errors on different servers, etc. Fix by ensuring that relevant slave errors logged to the error log include the GTID of the event group containing the problem event.
-
Kristian Nielsen authored
This is MySQL Bug#59123. The message string stored in an INCIDENT event was not zero-terminated. This caused any following checksum bytes (if enabled on the master) to be printed to the error log as trailing garbage after the message.

Backport the patch from MySQL 5.6:
revno: 2876.228.200
revision-id: zhenxing.he@sun.com-20110111051323-w2xnzvcjn46x6h6u
committer: He Zhenxing <zhenxing.he@sun.com>
timestamp: Tue 2011-01-11 13:13:23 +0800
message: BUG#59123 rpl_stm_binlog_max_cache_size fails sporadically with found warnings

Also add a test case.
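A hedged sketch of the idea behind the fix: treat the incident message as a length-counted string and terminate the local copy before printing, so trailing checksum bytes cannot leak into the log. The buffer layout and message text here are illustrative, not the event code itself.

```cpp
#include <cstring>
#include <cstdio>

static void print_incident_message(const char *buf, size_t msg_len)
{
  char msg[256];
  size_t n= msg_len < sizeof(msg) - 1 ? msg_len : sizeof(msg) - 1;
  std::memcpy(msg, buf, n);
  msg[n]= '\0';                       // explicit termination: no garbage tail
  std::fprintf(stderr, "Slave: incident %s on the master\n", msg);
}

int main()
{
  // Simulated event payload: message followed by 4 checksum bytes.
  const char payload[]= {'L','O','S','T','_','E','V','E','N','T','S',
                         (char)0xde,(char)0xad,(char)0xbe,(char)0xef};
  print_incident_message(payload, 11);
  return 0;
}
```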
-
- 24 Jun, 2014 3 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Kristian Nielsen authored
MySQL 5.6 implemented WL#344, which adds a MASTER_DELAY option to CHANGE MASTER. As part of this worklog, the format of the relay-log.info file was changed. The new format is not understood by earlier versions, nor by MariaDB 10.0, so changing the server to one of those versions would cause the slave to abort with an error due to reading incorrect data out of relay-log.info. Fix this by backporting from the WL#344 patch just the code that understands the new relay-log.info format (a small format-detection sketch follows this entry). We still write out the old format, and none of the MASTER_DELAY feature is backported with this commit.
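A small sketch of reading both layouts, assuming (as in the MySQL 5.6 format) that the new layout begins with a pure line-count header while the old one starts directly with the relay log file name; treat the exact layout details as an assumption of this sketch rather than a description of the backported code.

```cpp
#include <cctype>
#include <fstream>
#include <iostream>
#include <string>

static bool looks_like_line_count(const std::string &line)
{
  if (line.empty())
    return false;
  for (char c : line)
    if (!std::isdigit(static_cast<unsigned char>(c)))
      return false;
  return true;
}

int main(int argc, char **argv)
{
  std::ifstream f(argc > 1 ? argv[1] : "relay-log.info");
  std::string first;
  if (!std::getline(f, first))
    return 1;
  std::string relay_log_name;
  if (looks_like_line_count(first))        // new (5.6 / WL#344) layout
    std::getline(f, relay_log_name);       // file name is on the next line
  else                                     // old layout
    relay_log_name= first;
  std::cout << "relay log file: " << relay_log_name << "\n";
  return 0;
}
```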
-
- 23 Jun, 2014 1 commit
-
-
Sergei Golubchik authored
-
- 20 Jun, 2014 1 commit
-
-
Elena Stepanova authored
-
- 18 Jun, 2014 4 commits
-
-
Sergey Vojtovich authored
Stop spawning dummy threads on client library initialization. Let's revert the fix for Bug#24507. To quote Monty from 2006: "After 1/2 a year, when all glibc versions are updated, we can delete this code." Note: the upstream glibc bug was fixed in 2006.
-
Sergey Vojtovich authored
Preserve the CLIENT_REMEMBER_OPTIONS flag for compressed connections. Code cleanup: removed references to CLIENT_REMEMBER_OPTIONS from server code. This flag is ignored in MariaDB.
-
unknown authored
The INCIDENT_EVENT always caused a slave error and abort, without checking --slave-skip-errors. Now, if error 1590 (ER_SLAVE_INCIDENT) is included in the --slave-skip-errors list, incident events will be ignored.

This is a merge of this MySQL 5.6 patch:
revision-id: frazer@mysql.com-20110314170916-ypgin17otj3ucx95
committer: Frazer Clement <frazer@mysql.com>
timestamp: Mon 2011-03-14 17:09:16 +0000
message: Bug#11799671 NOT POSSIBLE TO SKIP INCIDENT ERRORS
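A short sketch of the resulting behaviour, with an illustrative set-based lookup standing in for the server's --slave-skip-errors handling.

```cpp
#include <cstdio>
#include <set>

static const int ER_SLAVE_INCIDENT= 1590;

static bool handle_incident_event(const std::set<int> &slave_skip_errors)
{
  if (slave_skip_errors.count(ER_SLAVE_INCIDENT))
  {
    std::puts("incident ignored due to --slave-skip-errors=1590");
    return true;                 // continue replication
  }
  std::puts("ER_SLAVE_INCIDENT: slave stopped");
  return false;                  // abort, as before the fix
}

int main()
{
  std::set<int> skip{ER_SLAVE_INCIDENT};
  return handle_incident_event(skip) ? 0 : 1;
}
```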
-
Sergey Vojtovich authored
Use single quotes for Perl paths, in case of special symbols. Double-quoted string literals are subject to backslash and variable substitution.
-
- 10 Jun, 2014 1 commit
-
-
Sergey Vojtovich authored
Fixed some compilation errors/warnings with ASan.
-
- 13 Jun, 2014 1 commit
-
-
Sergei Golubchik authored
-
- 12 Jun, 2014 1 commit
-
-
Sergei Golubchik authored
-
- 11 Jun, 2014 5 commits
-
-
Sergei Golubchik authored
-
Alexey Botchkov authored
Some variables weren't cleared properly, so consecutive embedded server start/stop failed. Cleanups added. Also, mysql_client_test.c was extended to test that (taken from Mattias Johnson's patch).
-
Sergei Golubchik authored
When plugin=mysql_native_password (or mysql_old_password), take the password from *either* password *or* authentication_string, whichever is set. This makes no sense, but alas, that's what MySQL-5.6 does.
-
Sergei Golubchik authored
fix for ranges like "indexed_datetime OP time" (test case is in the previous revision)
-
Sergei Golubchik authored
fix for ref like "indexed_time = datetime"
-
- 09 Jun, 2014 2 commits
-
-
Sergei Golubchik authored
(prep for MDEV-6065)
-
Sergei Golubchik authored
-
- 10 Jun, 2014 4 commits
-
-
Igor Babaev authored
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
Igor Babaev authored
-