Fix error handling for GTID and domain-based parallel replication
This occurs when replication stops with an error, domain-based parallel replication is used, and the GTID position contains more than one domain. Furthermore, it relates to the case where the SQL thread is restarted without first stopping the IO thread.

In this case, the file/offset relay-log position does not correctly represent the slave's multi-dimensional position, because other domains may be far ahead of, or behind, the domain with the failing event. So the code reverts the relay-log position back to the start of a relay-log file that is known to be before all active domains.

There was a bug that when the SQL thread was restarted, rli->relay_log_state was incorrectly initialised from @@gtid_slave_pos. This position is likely to be too far ahead, because of the reverted relay-log position. Thus, if replication failed again after the SQL thread restart, rli->restart_gtid_pos could be updated incorrectly. This in turn would cause a second SQL thread restart to replicate from the wrong position, if the IO thread was still left running.

The fix is to initialise rli->relay_log_state from @@gtid_slave_pos only when we actually purge and re-fetch relay logs from the master, not at every SQL thread start.

A related problem is the use of sql_slave_skip_counter to resolve replication failures in this kind of scenario. Since the slave position is multi-dimensional, sql_slave_skip_counter cannot work reliably: it is indeterminate exactly which event will be skipped, and the result is unlikely to match the user's expectation. So make this an error in the case where domain-based parallel replication is used with multiple domains, suggesting instead that the user set @@gtid_slave_pos to reliably skip the desired event.
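For illustration, a minimal sketch of the recovery procedure this change steers users toward, assuming a slave replicating two GTID domains; the GTID values and the identity of the failing domain are hypothetical:

    -- Replication has stopped with an error on an event in domain 1.
    STOP SLAVE;

    -- The old workaround, SET GLOBAL sql_slave_skip_counter = 1, is now
    -- rejected with an error in this configuration, since it is
    -- indeterminate which event it would skip across multiple domains.

    -- Instead, skip the failing event explicitly by advancing the
    -- multi-dimensional GTID position in the affected domain only:
    SELECT @@gtid_slave_pos;                       -- e.g. '0-1-100,1-2-27'
    SET GLOBAL gtid_slave_pos = '0-1-100,1-2-28';  -- advance domain 1 past the failing event
    START SLAVE;

Because @@gtid_slave_pos states the position in every domain explicitly, there is no ambiguity about which event gets skipped, unlike a counter applied to a one-dimensional file/offset position.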