1. 19 Dec, 2014 1 commit
  2. 18 Dec, 2014 1 commit
    • MDEV-7342: Test failure in perfschema.setup_instruments_defaults · 826d7c68
      Kristian Nielsen authored
      Fix a possible race in the test case when restarting the server.
      
      Make sure we have disconnected before waiting for the reconnect
      that signals that the server is back up. Otherwise, we may in
      rare cases continue the test while the old server is shutting
      down, eventually leading to a "connection lost" failure.
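      The two-phase wait described above can be sketched as follows. This is a minimal illustrative model, not the test's actual code (the real test uses mysqltest commands; `is_reachable` is a hypothetical probe):

```python
import time

def wait_for_restart(is_reachable, timeout=30.0, poll=0.05):
    # Two-phase wait for a server restart. Phase 1 confirms the old
    # server has actually gone away; phase 2 waits for the new one.
    # Skipping phase 1 is exactly the race the commit fixes: an early
    # successful "reconnect" may still be the old server, which has
    # not finished shutting down yet.
    deadline = time.monotonic() + timeout
    while is_reachable():                   # phase 1: wait for disconnect
        if time.monotonic() > deadline:
            raise TimeoutError("old server did not shut down")
        time.sleep(poll)
    while not is_reachable():               # phase 2: wait for reconnect
        if time.monotonic() > deadline:
            raise TimeoutError("new server did not come back up")
        time.sleep(poll)
```

      Only a successful connection observed after a confirmed disconnect proves the restart completed.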
  3. 12 Dec, 2014 3 commits
  4. 10 Dec, 2014 1 commit
  5. 07 Dec, 2014 1 commit
  6. 05 Dec, 2014 2 commits
  7. 04 Dec, 2014 1 commit
  8. 03 Dec, 2014 6 commits
  9. 02 Dec, 2014 4 commits
  10. 03 Dec, 2014 1 commit
    • MDEV-4393: show_explain.test times out randomly · d79cce86
      Kristian Nielsen authored
      The problem was a race between the debug code in the server and the SHOW
      EXPLAIN FOR in the test case.
      
      The test case would wait for a query to reach the first point of interest
      (inside dbug_serve_apcs()), then send it a SHOW EXPLAIN FOR, then wait for the
      query to reach the next point of interest. However, the second wait was
      insufficient. It was possible for the second wait to complete immediately,
      causing both the first and the second SHOW EXPLAIN FOR to hit the same
      invocation of dbug_serve_apcs(). Then a later invocation would miss its
      intended SHOW EXPLAIN FOR and hang, and the test case would eventually time
      out.
      
      Fix is to make sure that the second wait can not trigger during the first
      invocation of dbug_serve_apcs(). We do this by clearing the thd_proc_info
      (that the wait is looking for) before processing the SHOW EXPLAIN FOR; this
      way the second wait can not start until the thd_proc_info from the first
      invocation has been cleared.
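      The ordering the fix enforces can be modeled in a few lines. This is an illustrative sketch, not the server's code; `SyncPoint`, `proc_info`, and the function names here are assumed stand-ins:

```python
import threading
import time

class SyncPoint:
    # Minimal model of the fixed handshake: the worker publishes its
    # state in proc_info and clears it before acting on a request, so
    # each wait sees a fresh value rather than a stale one.
    def __init__(self):
        self.proc_info = None
        self.lock = threading.Lock()

    def enter(self, name):
        with self.lock:
            self.proc_info = name       # what the test's wait polls for

    def take_request(self):
        # The fix in miniature: clear the advertised state *before*
        # processing the request, so a later wait for the same state
        # cannot complete against this invocation.
        with self.lock:
            self.proc_info = None

def wait_for(sp, name, timeout=1.0):
    # Polling wait, as the test case's wait condition does.
    deadline = time.monotonic() + timeout
    while time.monotonic() <= deadline:
        with sp.lock:
            if sp.proc_info == name:
                return True
        time.sleep(0.01)
    return False
```

      With this model, a second `wait_for` issued after `take_request()` times out instead of matching the first invocation's leftover state.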
  11. 02 Dec, 2014 3 commits
  12. 01 Dec, 2014 5 commits
  13. 22 Nov, 2014 2 commits
  14. 01 Dec, 2014 1 commit
    • MDEV-7237: Parallel replication: incorrect relaylog position after stop/start the slave · 52b25934
      Kristian Nielsen authored
      The replication relay log position was sometimes updated incorrectly at the
      end of a transaction in parallel replication. This happened because the relay
      log file name was taken from the current Relay_log_info (SQL driver thread),
      not the correct value for the transaction in question.
      
      The result was that if a transaction was applied while the SQL driver thread
      was at least one relay log file ahead, _and_ the SQL thread was subsequently
      stopped before applying any events from the most recent relay log file, then
      the relay log position would be incorrect: the wrong relay log file name. Thus,
      when the slave was started again, usually a relay log read error would result,
      or in rare cases, if the position happened to be readable, the slave might
      even skip arbitrary amounts of events.
      
      In GTID mode, the relay log position is reset when both slave threads are
      restarted, so this bug would only be seen in non-GTID mode, or in GTID mode
      when only the SQL thread, not the IO thread, was stopped.
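      In miniature, the fix amounts to taking both parts of the position from the committing transaction's own state rather than from the shared SQL driver thread. The names below are illustrative, not the server's actual structures:

```python
from dataclasses import dataclass

@dataclass
class TxnState:
    # Position recorded when the transaction was read from the relay
    # log, carried along with the transaction to its worker thread.
    relay_log_file: str
    relay_log_pos: int

def update_position_on_commit(rli, txn):
    # The bug took the relay log file name from the driver thread's
    # current file; the fix takes file AND offset from the committing
    # transaction itself, which may be one or more files behind.
    rli["group_relay_log_file"] = txn.relay_log_file
    rli["group_relay_log_pos"] = txn.relay_log_pos
```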
  15. 28 Nov, 2014 1 commit
  16. 27 Nov, 2014 2 commits
    • MDEV-7037: MariaDB 10.0 does not build on Debian / kfreebsd-i386/amd64 due to... · 74e581b7
      Kristian Nielsen authored
      MDEV-7037: MariaDB 10.0 does not build on Debian / kfreebsd-i386/amd64 due to MTR failure: multi_source.gtid
      MDEV-7106: Sporadic test failure in multi_source.gtid
      MDEV-7153: Yet another sporadic failure of multi_source.gtid in buildbot
      
      This patch fixes three races in the multi_source.gtid test case that could
      cause sporadic failures:
      
      1. Do not put SHOW ALL SLAVES STATUS in the output, as its output is not stable.
      
      2. Ensure that slave1 has replicated as far as expected, before stopping its
      connection to master1 (otherwise the following wait will time out due to rows
      not replicated from master1).
      
      3. Ensure that slave2 has replicated far enough before connecting slave1 to it
      (otherwise we get an error during connect that slave1 is ahead of slave2).
      
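      Fixes 2 and 3 are both instances of the same pattern: wait until the slave's applied position has reached a known point before taking the next step. A sketch of that pattern (the real test uses mysqltest sync/wait primitives, not this hypothetical helper):

```python
import time

def wait_until_caught_up(get_applied_pos, target_pos, timeout=30.0):
    # Poll the slave's applied position until it reaches target_pos;
    # only then is it safe to stop the connection to the master
    # (fix 2) or to point another slave at this server (fix 3).
    deadline = time.monotonic() + timeout
    while get_applied_pos() < target_pos:
        if time.monotonic() > deadline:
            raise TimeoutError("slave did not reach %r" % (target_pos,))
        time.sleep(0.1)
```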
    • Backporting a cleanup in boolean functions from 10.1: · 5ae1639c
      Alexander Barkov authored
      Moving Item_bool_func2 and Item_func_opt_neg from Item_int_func to
      Item_bool_func. Now all functions that return is_bool_func()=true
      have a common root class Item_bool_func.
      This change is needed to fix MDEV-7149 properly.
  17. 26 Nov, 2014 3 commits
    • Better comments part 2 with proof and simplified implementation. · e15a83c0
      Jan Lindström authored
      Thanks to Daniel Black.
    • MDEV-7214: Test failure in main.partition_innodb · 43054872
      Jan Lindström authored
      The problem was that the test tried to verify that no files were
      left behind in the test database.
      
      Fix: there is no need to list other file types; the check need
      only look for *.par files.
    • MDEV-6582: DEBUG_SYNC does not reset mysys_var->current_mutex, causes... · 06d0d090
      Kristian Nielsen authored
      MDEV-6582: DEBUG_SYNC does not reset mysys_var->current_mutex, causes assertion "Trying to unlock mutex that wasn't locked"
      
      The bug was in DEBUG_SYNC. When waiting, debug_sync_execute() temporarily sets
      thd->mysys_var->current_mutex to a new value while waiting. However, if the
      old value of current_mutex was NULL, it was not restored, and current_mutex
      remained set to the temporary value (debug_sync_global.ds_mutex).
      
      This made possible the following race: Thread T1 goes to KILL thread T2. In
      THD::awake(), T1 loads T2->mysys_var->current_mutex, it is set to ds_mutex, T1
      locks this mutex.
      
      Now T2 runs, it does ENTER_COND, it sets T2->mysys_var->current_mutex to
      LOCK_wait_commit (for example).
      
      Then T1 resumes, it reloads mysys_var->current_mutex, now it is set to
      LOCK_wait_commit, T1 unlocks this mutex instead of the ds_mutex that it locked
      previously.
      
      This causes safe_mutex to assert with the message: "Trying to unlock mutex
      LOCK_wait_commit that wasn't locked".
      
      The fix is to ensure that DEBUG_SYNC also will restore
      mysys_var->current_mutex in the case where the original value was NULL.
      
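      The save/restore discipline of the fix can be sketched like this. The names (`MysysVar`, `debug_sync_wait`) are illustrative stand-ins, not the server's API:

```python
class MysysVar:
    def __init__(self):
        self.current_mutex = None    # may legitimately be None

def debug_sync_wait(var, ds_mutex, wait):
    # Sketch of the fixed save/restore: remember the old
    # current_mutex even when it is None, and restore it
    # unconditionally after the wait.
    old_mutex = var.current_mutex
    var.current_mutex = ds_mutex     # visible to a thread doing KILL
    try:
        wait()
    finally:
        # The bug skipped this restore when the old value was NULL,
        # leaving ds_mutex visible after the wait had finished.
        var.current_mutex = old_mutex
```

      With the unconditional restore, a KILLing thread that reloads `current_mutex` after the wait can no longer observe the stale ds_mutex value.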
  18. 25 Nov, 2014 2 commits
    • MDEV-7179: rpl.rpl_gtid_crash failed in buildbot with Warning: database page corruption or a failed · e79b7ca9
      Kristian Nielsen authored
      I saw two test failures in rpl.rpl_gtid_crash where we get this in the error
      log:
      
      141123 12:47:54 [Note] InnoDB: Restoring possible half-written data pages 
      141123 12:47:54 [Note] InnoDB: from the doublewrite buffer...
      InnoDB: Warning: database page corruption or a failed
      InnoDB: file read of space 6 page 3.
      InnoDB: Trying to recover it from the doublewrite buffer.
      141123 12:47:54 [Note] InnoDB: Recovered the page from the doublewrite buffer.
      
      This test case deliberately crashes the server, and if this crash happens
      right in the middle of writing a buffer pool page to disk, it is not
      unexpected that we can get a half-written page. The page is recovered
      correctly from the doublewrite buffer.
      
      So this patch adds a suppression for this warning in the error log for this
      test case.
    • MDEV-6903: gtid_slave_pos is incorrect after master crash · b7968590
      Kristian Nielsen authored
      When a master server restarts, it logs a special restart format description
      event in its binlog. When the slave sees this event, it knows it needs to roll
      back any active partial transaction, in case the master crashed previously in
      the middle of writing such transaction to its binlog.
      
      However, there was a bug where this rollback did not reset rgi->pending_gtid.
      This caused the @@gtid_slave_pos to be updated incorrectly with the GTID of
      the partial transaction that was rolled back.
      
      Fix this by always clearing rgi->pending_gtid in cleanup_context(), hopefully
      preventing similar bugs from turning up in other special cases where a
      transaction is rolled back during replication.
      
      Thanks to Pavel Ivanov for tracking down the issue and providing a test case.
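      The invariant the fix establishes can be modeled in a few lines. `Rgi`, `cleanup_context`, and `record_gtid` here are minimal illustrative stand-ins for the server's structures, under the assumption that gtid_slave_pos is only advanced for a GTID that is still pending:

```python
class Rgi:
    # Minimal stand-in for the per-transaction replication state.
    def __init__(self):
        self.pending_gtid = None

def cleanup_context(rgi):
    # The fix: clear pending_gtid on EVERY cleanup path, including
    # rollback of a partial transaction, so the rolled-back GTID can
    # never be recorded into gtid_slave_pos afterwards.
    rgi.pending_gtid = None

def record_gtid(slave_pos, rgi):
    # gtid_slave_pos advances only for a GTID that is still pending.
    if rgi.pending_gtid is not None:
        slave_pos.append(rgi.pending_gtid)
```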