When flushing tables, there was a slight chance that the flush occurred between the processing of two table map events. Since the tables are opened one by one, this could leave some of the already-opened tables invalid, and the subsequent locking of the tables would then cause the slave to crash. The problem is solved by opening and locking all tables at once using simple_open_n_lock_tables(). The patch also contains a change to open_tables() so that pre-locking only takes place when the trg_event_map is non-zero, which was not the case before (pre-locking caused the lock to be placed in thd->locked_tables instead of thd->lock, since the assumption was that triggers would be called later and the tables should therefore be pre-locked).

mysql-test/suite/rpl/r/rpl_found_rows.result:
  Result change.
mysql-test/suite/rpl/r/rpl_row_inexist_tbl.result:
  Result change.
mysql-test/suite/rpl/t/rpl_found_rows.test:
  Adding drop of a table that was created in the test.
mysql-test/suite/rpl/t/rpl_slave_status.test:
  Adding waits for slave start and stop to ensure that the test works.
sql/log_event.cc:
  Replacing table-by-table open and lock with a single call to
  simple_open_n_lock_tables(), which in turn required some changes to
  other code.
sql/log_event_old.cc:
  Replacing table-by-table open and lock with a single call to
  simple_open_n_lock_tables(), which in turn required some changes to
  other code.
sql/sql_base.cc:
  Extending the check inside open_tables() so that pre-locking is only
  done if tables->trg_event_map is non-zero.
mysql-test/include/wait_for_slave_sql_to_start.inc:
  New BitKeeper file ``mysql-test/include/wait_for_slave_sql_to_start.inc''
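For reference, a minimal sketch of the before/after pattern described above. This is not the actual diff: the names simple_open_n_lock_tables(), open_tables(), TABLE_LIST::next_global and TABLE_LIST::trg_event_map are from the 5.1 server tree, while the local variable `tables` (the TABLE_LIST chain built from the table map events) is assumed purely for illustration.

    /*
      Old pattern: each table from the table map events was opened and
      locked separately.  A FLUSH TABLES slipping in between two table map
      events could invalidate tables that were already open, and the later
      lock step would then crash the slave.

        for (TABLE_LIST *ptr= tables; ptr; ptr= ptr->next_global)
        {
          uint counter;
          if (open_tables(thd, &ptr, &counter, 0))
            goto err;                      // open/lock done per table
        }
    */

    /* New pattern: open and lock the whole chain as a single unit. */
    if (simple_open_n_lock_tables(thd, tables))
    {
      /* report the error and stop applying the event */
    }

    /*
      In open_tables() (sql/sql_base.cc) the pre-locking check is extended
      so that trigger tables are only added when the event map says triggers
      can actually fire; roughly (illustrative, the real condition has more
      terms):

        if (!thd->lex->requires_prelocking() &&
            tables->trg_event_map && tables->table->triggers)
          ...   // add trigger tables to the pre-locking set
    */

Opening the whole chain in one call means either all tables are opened and locked under a consistent state, or the call fails as a unit; there is no window in which a flush can invalidate some tables after they were opened but before they were locked.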
499d7bec