Commit 47edc4ca authored by Jon Olav Hauglid

Bug #50124 Rpl failure on DROP table with concurrent txn/non-txn
           DML flow and SAVEPOINT

The problem was that replication could break if a transaction involving
both transactional and non-transactional tables was rolled back to a
savepoint. It broke if a concurrent connection tried to drop a
transactional table that was first locked after the savepoint had been
set. The DROP TABLE completed as soon as ROLLBACK TO SAVEPOINT was
executed, because the transaction released its lock on the table at that
point. When the slave later tried to apply the binlog, it failed because
the table had already been dropped.

The reason for the problem is that a transaction involving both
transactional and non-transactional tables is written in full to the
binlog during ROLLBACK TO SAVEPOINT. At the same time, metadata locks
acquired after the savepoint was set were released during ROLLBACK TO
SAVEPOINT. This allowed a second connection to drop a table used only
between SAVEPOINT and ROLLBACK TO SAVEPOINT, which caused the binlogged
transaction to refer to a non-existing table when it was written during
ROLLBACK TO SAVEPOINT.

This patch fixes the problem by not releasing metadata locks during
ROLLBACK TO SAVEPOINT when binlogging is enabled.
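
For illustration, a minimal sketch of the failing interleaving, using the
same tables and statements as the test case below (the connection labels
and lock-behaviour comments are annotations describing the pre-fix server,
not server output):

# Connection 1: mixed transactional/non-transactional DML
CREATE TABLE tt (i INT) ENGINE = InnoDB;
CREATE TABLE nt (i INT) ENGINE = MyISAM;
START TRANSACTION;
INSERT INTO nt VALUES (1);        # non-transactional change
SAVEPOINT insert_statement;
INSERT INTO tt VALUES (1);        # metadata lock on tt acquired after the savepoint

# Connection 2: blocks waiting for the metadata lock held by connection 1
DROP TABLE tt;

# Connection 1: before the fix, this released the lock on tt and let the
# concurrent DROP TABLE complete, breaking replication on the slave
ROLLBACK TO SAVEPOINT insert_statement;
COMMIT;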
parent a5d9e0e0
stop slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
#
# Bug#50124 Rpl failure on DROP table with concurrent txn/non-txn
# DML flow and SAVEPOINT
#
# Connection master
DROP TABLE IF EXISTS tt, nt;
CREATE TABLE tt (i INT) ENGINE = InnoDB;
CREATE TABLE nt (i INT) ENGINE = MyISAM;
FLUSH LOGS;
START TRANSACTION;
INSERT INTO nt VALUES (1);
SAVEPOINT insert_statement;
INSERT INTO tt VALUES (1);
# Connection master1
# Sending:
DROP TABLE tt;
# Connection master
ROLLBACK TO SAVEPOINT insert_statement;
Warnings:
Warning 1196 Some non-transactional changed tables couldn't be rolled back
COMMIT;
# Connection master1
# Reaping: DROP TABLE tt
FLUSH LOGS;
# Connection master
DROP TABLE nt;
--source include/master-slave.inc
--source include/have_innodb.inc
--echo #
--echo # Bug#50124 Rpl failure on DROP table with concurrent txn/non-txn
--echo # DML flow and SAVEPOINT
--echo #
--echo # Connection master
connection master;
--disable_warnings
DROP TABLE IF EXISTS tt, nt;
--enable_warnings
CREATE TABLE tt (i INT) ENGINE = InnoDB;
CREATE TABLE nt (i INT) ENGINE = MyISAM;
FLUSH LOGS;
START TRANSACTION;
INSERT INTO nt VALUES (1);
SAVEPOINT insert_statement;
INSERT INTO tt VALUES (1);
--echo # Connection master1
connection master1;
--echo # Sending:
--send DROP TABLE tt
--echo # Connection master
connection master;
let $wait_condition=
SELECT COUNT(*) = 1 FROM information_schema.processlist
WHERE state = "Waiting for table" AND info = "DROP TABLE tt";
--source include/wait_condition.inc
ROLLBACK TO SAVEPOINT insert_statement;
COMMIT;
--echo # Connection master1
connection master1;
--echo # Reaping: DROP TABLE tt
--reap
FLUSH LOGS;
--echo # Connection master
connection master;
DROP TABLE nt;
--source include/master-slave-end.inc
@@ -419,9 +419,13 @@ bool trans_rollback_to_savepoint(THD *thd, LEX_STRING name)
   thd->transaction.savepoints= sv;
 
   /*
-    Release metadata locks that were acquired during this savepoint unit.
+    Release metadata locks that were acquired during this savepoint unit
+    unless binlogging is on. Releasing locks with binlogging on can break
+    replication as it allows other connections to drop these tables before
+    rollback to savepoint is written to the binlog.
   */
-  if (!res)
+  bool binlog_on= mysql_bin_log.is_open() && thd->variables.sql_log_bin;
+  if (!res && !binlog_on)
     thd->mdl_context.rollback_to_savepoint(sv->mdl_savepoint);
 
   DBUG_RETURN(test(res));