  15 Nov, 2017 2 commits
    • MDEV-9510 Segmentation fault in binlog thread causes crash · c7e38076
      Andrei Elkin authored
      With the combination of --log-bin and Galera the server may crash,
      reporting one of two characteristic stacks:
      
        /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG13mark_xid_doneEmb+0xc7)[0x7f182a8e2cb7]
        /usr/sbin/mysqld(binlog_background_thread+0x2b5)[0x7f182a8e3275]
      
      or
      
        /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG21do_checkpoint_requestEm+0x9d)[0x7ff395b2dafd]
        /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG20checkpoint_and_purgeEm+0x11)[0x7ff395b2db91]
        /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG16rotate_and_purgeEb+0xc2)[0x7ff395b300b2]
      
      The cause of the failure appears to be unmatched decrements of
        `xid_count_per_binlog::xid_count`
      which can occur when a transaction is executed on a connection that has
      issued `SET @@sql_log_bin=0`. In that case the xid count is not
      incremented, but its decrement still runs, leaving
      `binlog_xid_count_list` in an inconsistent state which a subsequent
      FLUSH BINARY LOGS exposes through the crash.
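
      A minimal sketch of the reported trigger, on a Galera node started with
      --log-bin (the table and statements are illustrative, not taken from
      the actual galera.sql_log_bin test):

        SET @@sql_log_bin = 0;      -- xid count is not incremented for this trx
        INSERT INTO t1 VALUES (1);  -- ... yet its decrement still runs
        SET @@sql_log_bin = 1;
        FLUSH BINARY LOGS;          -- may hit the corrupted binlog_xid_count_list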
      
      *Note_1*: the regression test reuses the existing galera.sql_log_bin
      test, which does not run stably under mtr with --log-bin (even in its
      base form).
      
      *Note_2*: the 10.0-galera branch is free of this issue, as it never
      received the MDEV-7205 fixes.
    • MDEV-12012/MDEV-11969 Can't remove GTIDs for a stale GTID Domain ID · aae49327
      Andrei Elkin authored
      As reported in MDEV-11969, "there's no way to ditch knowledge" of a
      domain that is no longer updated on a server. Besides cluttering the
      output in the DBA console, stale domains can prevent a slave from
      connecting to the master, as MDEV-12012 witnesses.
      Which domains are obsolete must be evaluated by the user (DBA),
      according to whether the domain info is still relevant and whether the
      domain will ever receive any updates.
      
      This patch introduces a method to discard obsolete gtid domains from
      the server's binlog state. The removal requires that no event group
      from such a domain be present in the existing binlog files. If there
      are any, the containing logs must first be PURGEd in order for
      
        FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
      
      to succeed; otherwise the command returns an error.
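
      For illustration, a possible sequence on the master (the file name and
      domain ids are hypothetical):

        SHOW BINARY LOGS;                          -- locate files holding the domain's events
        PURGE BINARY LOGS TO 'master-bin.000042';  -- purge them first
        FLUSH BINARY LOGS DELETE_DOMAIN_ID=(1,3);  -- then the deletion succeeds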
      
      The list of obsolete domains can be computed by intersecting two sets -
      the earliest (first) binlog file's Gtid_list and the current value of
      @@global.gtid_binlog_state - and extracting the domain id components
      from the items of the intersection.
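
      For instance, the two sets can be inspected as follows (the binlog file
      name is hypothetical; the Gtid_list is normally the second event of a
      binlog file, right after its Format_description event):

        SHOW BINLOG EVENTS IN 'master-bin.000001' LIMIT 2;  -- 2nd row: Gtid_list
        SELECT @@global.gtid_binlog_state;                  -- current binlog state
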
      The new DELETE_DOMAIN_ID variant of FLUSH still rotates the binlog,
      omitting the deleted domains from the new active binlog file's
      Gtid_list. Note though that when the command is ineffective - none of
      the domains requested for deletion exists in the binlog state -
      rotation does not occur.
      
      Deleting obsolete domains is not harmful to connected slaves as long as
      the master-side binlog file *purge* is synchronized with
      FLUSH-DELETE_DOMAIN_ID. The slaves must have processed the last event
      from the purged files as usual, so that they do not later request a
      gtid from a file which is already gone.
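
      A hypothetical pre-purge check, run on each connected slave before the
      master purges its logs:

        SELECT @@global.gtid_slave_pos;  -- must already cover the files to purge
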
      While the command is not replicated (unlike ordinary FLUSH BINARY LOGS),
      slaves, even though they retain the extra domains, won't suffer from
      reconnection errors, thanks to the master-slave gtid connection
      protocol that allows the master to be ignorant of a gtid domain.
      Should such a slave be promoted to the master role at failover, it may
      run the ex-master's
      
       FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains)
      
      to clean its own binlog state.
      
      NOTES.
        suite/perfschema/r/start_server_low_digest.result
      is re-recorded as a consequence of internal parser code changes.