1. 18 May, 2012 3 commits
    • Bug#14039955 RPAD FUNCTION LEADS TO UNINITIALIZED VALUES WARNING IN MY_STRTOD · 02f90402
      Tor Didriksen authored
      Rewrite the "parser" in my_strtod_int() to avoid
      reading past the end of the input string.
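      A minimal illustration of the idea (hypothetical code, not the actual
      my_strtod_int() rewrite): every dereference is guarded by a comparison
      against the end pointer, so the scan can never run past the input.

        /* bounds-checked digit scan; 'end' points one past the last byte */
        static double parse_digits(const char *str, const char *end)
        {
          double value= 0.0;
          while (str < end && *str >= '0' && *str <= '9')
            value= value * 10.0 + (*str++ - '0');
          return value;
        }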
    • upmerge from mysql-5.1 -> mysql-5.5. · b4ffce10
      Rohit Kalhans authored
    • BUG#14005409 - 64624 · 781137c0
      Rohit Kalhans authored
            
      Problem: After the fix for Bug#12589870, a new field that
      stores the length of the db name was added to the buffer that
      stores the query to be executed. Unlike for a plain user
      session, replication execution did not allocate the
      necessary chunk in the Query_log_event constructor. This caused
      an invalid read while accessing this field.

      Solution: We fix this problem by allocating the necessary chunk
      in the buffer created in Query_log_event::Query_log_event()
      and storing the length of the database name there.
      
      sql/log_event.cc:
        Added the new field to the buffer created in
        Query_log_event's constructor and stored the length
        of the database name.
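
      As a rough sketch of the layout fix (hypothetical helper, not the
      actual log_event.cc code), the constructor must size the buffer so
      that the db-length field exists before anything reads or writes it:

        #include <stdlib.h>
        #include <string.h>

        /* illustrative only: query buffer with a trailing db-length field */
        char *alloc_query_buf(const char *query, size_t query_len, size_t db_len)
        {
          /* +1 for the terminating NUL, +1 for the db-length field */
          char *buf= (char*) malloc(query_len + 2);
          if (buf == NULL)
            return NULL;
          memcpy(buf, query, query_len);
          buf[query_len]= '\0';
          buf[query_len + 1]= (char) db_len; /* the chunk the bug left unallocated */
          return buf;
        }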
  2. 17 May, 2012 7 commits
    • Bug#11766101 : 59140: LIKE CONCAT('%',@A,'%') DOESN'T MATCH WHEN @A CONTAINS LATIN1 STRING · 0581d1c4
      Mayank Prasad authored
      Issue/Cause:
      The issue is memory corruption. During the optimization phase, the pattern
      to be matched in the WHERE clause is prepared. This is done in
      Item_func_concat::val_str(), which builds the resultant string (tmp_value)
      and returns a pointer to it. In the caller, Item_func_like::fix_fields(),
      the pattern is made to point to this string (tmp_value). During further
      processing, tmp_value gets modified, which leaves the pattern holding
      changed/wrong values.

      Fix:
      Allocate a separate memory location in the caller, copy the value of the
      resultant string (tmp_value) into it, and make the pattern point to that
      copy. This makes sure no further changes to tmp_value affect the pattern.
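
      A sketch of the copy-instead-of-alias fix (simplified, assumed names;
      the real code uses the server's own allocator rather than std::string):

        #include <string>

        struct like_pattern
        {
          std::string storage;                  /* caller-owned copy */
          void set(const char *tmp_value, size_t len)
          {
            storage.assign(tmp_value, len);     /* deep copy, no aliasing */
          }
        };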
      
    • null merge for Bug#12636001 · 6053cb8c
      Gopal Shankar authored
    • Bug#12636001 : deadlock from thd_security_context · 21faded5
      Gopal Shankar authored
      PROBLEM:
      Threads end up in a deadlock due to locks acquired as described
      below.
      
      con1: Run a query on a table.
        It is important that this SELECT backs off while
        trying to open t1 and enters wait_for_condition().
        The SELECT is then blocked trying to lock mysys_var->mutex,
        which is held by con3. The very significant fact here is
        that mysys_var->current_mutex will still point to LOCK_open,
        even though LOCK_open is no longer held by con1 at this point.
      
      con2: Try dropping the table used in con1, or query some table.
        It will hold LOCK_open and be blocked trying to lock
        kernel_mutex, which is held by con4.
      
      con3: Try killing the query run by con1.
        It will hold THD::LOCK_thd_data belonging to con1 while
        trying to lock mysys_var->current_mutex, also belonging to con1.
        But current_mutex points to LOCK_open, which is held
        by con2.
      
      con4: Get the InnoDB engine status.
        It will hold kernel_mutex while trying to lock THD::LOCK_thd_data
        belonging to con1, which is held by con3.
      
      So while technically only con2, con3 and con4 participate in the
      deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
      is a vital component of the deadlock.
      
      CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
               kernel_mutex -> THD::LOCK_thd_data)
      
      FIX:
      LOCK_thd_data has the responsibility of protecting:
      1) thd->query and thd->query_length
      2) the VIO
      3) thd->mysys_var (used by the KILL statement and shutdown)
      4) the THD during thread delete.

      Among the above responsibilities, 1), 2) and (3,4) are three
      independent groups. If a different lock owns the responsibility
      for (3,4), the above-mentioned deadlock cycle can be avoided.
      This fix introduces LOCK_thd_kill to handle
      responsibility (3,4), which eliminates the deadlock issue.
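
      A minimal sketch of the split (simplified names, not the real THD
      declaration): the kill path takes only the new mutex, so it can no
      longer form a cycle through LOCK_thd_data:

        #include <mutex>

        struct thd_like
        {
          std::mutex LOCK_thd_data;  /* still guards query text and the VIO */
          std::mutex LOCK_thd_kill;  /* now guards mysys_var and thread delete */
        };

        /* KILL path: only LOCK_thd_kill is needed to reach the victim's
           mysys_var, so the killer never waits behind LOCK_thd_data */
        void awake(thd_like &victim)
        {
          std::lock_guard<std::mutex> guard(victim.LOCK_thd_kill);
          /* signal the victim's current_cond here */
        }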
      
      Note: The problem is not found in 5.5. The introduction of the MDL
      subsystem moved the metadata-locking responsibility from the TDC/TC
      to the MDL subsystem. Due to this, the responsibility of LOCK_open is
      reduced. As the use of LOCK_open was removed in open_table() and
      mysql_rm_table(), the above-mentioned CYCLE does not form.
      Revision IDs for the changes:
      open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
      mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
      
    • No commit message · b45c7502
      unknown authored
    • No commit message · 3e1758d3
      unknown authored
    • No commit message · 1bfd04ea
      unknown authored
    • No commit message · 932376f9
      unknown authored
  3. 16 May, 2012 12 commits
    • Annamalai Gurusami · dfff1ff5
    • Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER · ce9e6b5a
      Annamalai Gurusami authored
      The following scenario crashes our mysql server:
      
      1.  set global innodb_file_per_table=1;
      2.  create table t1(c1 int) engine=innodb;
      3.  alter table t1 discard tablespace;
      4.  alter table t1 add unique index(c1);
      
      Step 4 crashes the server.  This patch introduces a check for a
      discarded tablespace to avoid the crash.
      
      rb://1041 approved by Marko Makela
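
      A sketch of the kind of guard added (hypothetical names, not the
      actual InnoDB code): DDL on a discarded tablespace fails cleanly
      instead of touching tablespace data that is no longer there:

        #include <stdio.h>

        struct table_like
        {
          int tablespace_discarded;
        };

        /* return 0 and report an error instead of crashing */
        int check_before_add_index(const struct table_like *t)
        {
          if (t->tablespace_discarded)
          {
            fprintf(stderr, "tablespace discarded; import it before ALTER TABLE\n");
            return 0;
          }
          return 1;
        }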
    • Merge from mysql-5.1 to mysql-5.5. · 31a6bcbe
      Annamalai Gurusami authored
    • Null merge from 5.1 to 5.5 · 79af7db5
      Venkata Sidagam authored
    • Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH, · 6b05e434
      Venkata Sidagam authored
                     FULLTEXT INDEX AND CONCURRENT DML.
      
      Problem Statement:
      ------------------
      1) Create a table with a FULLTEXT index.
      2) Enable concurrent inserts.
      3) In multiple threads, do the operations below repeatedly:
         a) truncate table
         b) insert into table ....
         c) select ... match .. against .. in non-boolean/boolean mode

      After some time we could observe two different assert core dumps.
      
      Analysis:
      --------
      1) Assert core dump at key_read_cache():
      Two select threads operate in parallel on the same key
      root block.
      The first select thread's block->status is set to BLOCK_ERROR
      because my_pread() in read_block() returns '0'.
      The truncate made the index file size 1024, and pread
      was asked to get a block of count bytes (1024 bytes)
      from offset 1024, which it cannot read since that is
      "end of file"; it returns '0' and sets
      "my_errno= HA_ERR_FILE_TOO_SHORT", while key_file_length and
      key_root[0] are the same, i.e. 1024. Since the block status has
      BLOCK_ERROR, the first select thread enters free_block(), sets the
      status to BLOCK_REASSIGNED, and waits on the condition mutex in
      wait_on_readers(). The other select thread works on the same
      block, sees the status as BLOCK_ERROR, enters free_block(),
      checks for BLOCK_REASSIGNED, and asserts the server.
      
      2) Assert core dump at key_write_cache():
      One select thread and one insert thread.
      The select thread unlocks 'keycache->cache_lock',
      which allows other threads to continue; it gets a pread()
      return value of '0' (please see the explanation above),
      then tries to reacquire 'keycache->cache_lock' and waits
      for the lock.
      The insert thread requests the block; the block is assigned
      from the hash list, page_status is set to
      'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting
      in the queue since some other threads are performing
      reads on the same block.
      The select thread that was waiting for the 'keycache->cache_lock'
      mutex in read_block() continues after getting the my_pread()
      value of '0', sets the block status to BLOCK_ERROR, and goes to
      free_block() and then wait_for_readers().
      Now the insert thread wakes up and continues, checks that
      block->status is not BLOCK_READ, and asserts.
      
      Fix:
      ---
      In the full-text code, multiple readers of the index file are not
      guarded. Hence the code below was added in _ft2_search() and
      walk_and_match().

      To lock the key_root, the following is used in _ft2_search():
       if (info->s->concurrent_insert)
          mysql_rwlock_rdlock(&share->key_root_lock[0]);

      and to unlock:
       if (info->s->concurrent_insert)
         mysql_rwlock_unlock(&share->key_root_lock[0]);
      
      storage/myisam/ft_boolean_search.c:
        Since it is a recursive function, to avoid confusion in taking and
        releasing the locks, _ft2_search() was renamed to
        _ft2_search_internal(). _ft2_search() now takes the lock, calls
        _ft2_search_internal(), and releases the lock in the case of
        concurrent inserts.
      storage/myisam/ft_nlq_search.c:
        Added the read-lock code in walk_and_match().
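
      The wrapper pattern described above, as a self-contained sketch
      (assumed types; the real code uses MyISAM's MI_INFO and
      mysql_rwlock_t):

        #include <pthread.h>

        struct ft_info
        {
          int concurrent_insert;
          pthread_rwlock_t key_root_lock;
        };

        /* recursive worker: no locking here, so recursion cannot re-lock */
        static int ft2_search_internal(struct ft_info *info)
        {
          /* ... search the FT index tree, possibly calling itself ... */
          return 0;
        }

        /* public entry point: lock once, recurse, unlock once */
        int ft2_search(struct ft_info *info)
        {
          int res;
          if (info->concurrent_insert)
            pthread_rwlock_rdlock(&info->key_root_lock);
          res= ft2_search_internal(info);
          if (info->concurrent_insert)
            pthread_rwlock_unlock(&info->key_root_lock);
          return res;
        }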
    • Venkata Sidagam · 7244e84e
    • Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER · 7a6c449c
      Annamalai Gurusami authored
      The following scenario crashes our mysql server:
      
      1.  set global innodb_file_per_table=1;
      2.  create table t1(c1 int) engine=innodb;
      3.  alter table t1 discard tablespace;
      4.  alter table t1 add unique index(c1);
      
      Step 4 crashes the server.  This patch introduces a check for a
      discarded tablespace to avoid the crash.
      
      rb://1041 approved by Marko Makela
    • Merge from mysql-5.1 to mysql-5.5. · 453fcc0c
      Annamalai Gurusami authored
    • Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH, · 0e9c1e9f
      Venkata Sidagam authored
                     FULLTEXT INDEX AND CONCURRENT DML.
      
      Problem Statement:
      ------------------
      1) Create a table with a FULLTEXT index.
      2) Enable concurrent inserts.
      3) In multiple threads, do the operations below repeatedly:
         a) truncate table
         b) insert into table ....
         c) select ... match .. against .. in non-boolean/boolean mode

      After some time we could observe two different assert core dumps.
      
      Analysis:
      --------
      1) Assert core dump at key_read_cache():
      Two select threads operate in parallel on the same key
      root block.
      The first select thread's block->status is set to BLOCK_ERROR
      because my_pread() in read_block() returns '0'.
      The truncate made the index file size 1024, and pread
      was asked to get a block of count bytes (1024 bytes)
      from offset 1024, which it cannot read since that is
      "end of file"; it returns '0' and sets
      "my_errno= HA_ERR_FILE_TOO_SHORT", while key_file_length and
      key_root[0] are the same, i.e. 1024. Since the block status has
      BLOCK_ERROR, the first select thread enters free_block(), sets the
      status to BLOCK_REASSIGNED, and waits on the condition mutex in
      wait_on_readers(). The other select thread works on the same
      block, sees the status as BLOCK_ERROR, enters free_block(),
      checks for BLOCK_REASSIGNED, and asserts the server.
      
      2) Assert core dump at key_write_cache():
      One select thread and one insert thread.
      The select thread unlocks 'keycache->cache_lock',
      which allows other threads to continue; it gets a pread()
      return value of '0' (please see the explanation above),
      then tries to reacquire 'keycache->cache_lock' and waits
      for the lock.
      The insert thread requests the block; the block is assigned
      from the hash list, page_status is set to
      'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting
      in the queue since some other threads are performing
      reads on the same block.
      The select thread that was waiting for the 'keycache->cache_lock'
      mutex in read_block() continues after getting the my_pread()
      value of '0', sets the block status to BLOCK_ERROR, and goes to
      free_block() and then wait_for_readers().
      Now the insert thread wakes up and continues, checks that
      block->status is not BLOCK_READ, and asserts.
      
      Fix:
      ---
      In the full-text code, multiple readers of the index file are not
      guarded. Hence the code below was added in _ft2_search() and
      walk_and_match().

      To lock the key_root, the following is used in _ft2_search():
       if (info->s->concurrent_insert)
          mysql_rwlock_rdlock(&share->key_root_lock[0]);

      and to unlock:
       if (info->s->concurrent_insert)
         mysql_rwlock_unlock(&share->key_root_lock[0]);
      
      storage/myisam/ft_boolean_search.c:
        Since it is a recursive function, to avoid confusion in taking and
        releasing the locks, _ft2_search() was renamed to
        _ft2_search_internal(). _ft2_search() now takes the lock, calls
        _ft2_search_internal(), and releases the lock in the case of
        concurrent inserts.
      storage/myisam/ft_nlq_search.c:
        Added the read-lock code in walk_and_match().
    • Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT · c5dc1ea5
      Olav Sandstaa authored
                           RESULTS ON IN() & NOT IN() COMP #3
      
      This bug causes a wrong result in mysql-trunk when ICP is used
      and bad performance in mysql-5.5 and mysql-trunk.
      
      Using the query from the bug report to explain what happens and what
      causes the wrong result from the query when ICP is enabled:
      
      1. The t3 table contains four records. The outer query will read
         these and for each of these it will execute the subquery.
      
      2. Before the first execution of the subquery, it is optimized. In
         this case the important part is what happens to the first table, t1:
         -make_join_select() will call the range optimizer, which decides
          that t1 should be accessed using a range scan on the k1 index,
          and creates a QUICK_RANGE_SELECT object for this.
         -As the last part of optimization, the ICP code pushes the
          condition down to the storage engine for table t1 on the k1 index.
      
         This produces the following information in the explain for this table:
      
           2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
      
         Note the use of filesort.
      
      3. The first execution of the subquery does (among other things) the
         following, due to the need for sorting:
         a. Call create_sort_index() which again will call find_all_keys():
         b. find_all_keys() will read the required keys for all qualifying
            rows from the storage engine. To do this it checks if it has a
            quick-select for the table. It will use the quick-select for
            reading records. In this case it will read four records from the
            storage engine (based on the range criteria). The storage engine
            will evaluate the pushed index condition for each record.
         c. At the end of create_sort_index() there is code that cleans up a
            lot of stuff on the join tab. One of the things that is cleaned
            is the select object. The result of this is that the
            quick-select object created in make_join_select is deleted.
      
      4. The second execution of the subquery does the same as the first but
         the result is different:
         a. Call create_sort_index() which again will call find_all_keys()
            (same as for the first execution)
         b. find_all_keys() will read the keys from the storage engine. To
            do this it checks if it has a quick-select for the table. Now
            there is NO quick-select object(!) (since it was deleted in
            step 3c). So find_all_keys() defaults to reading the table using
            a table scan instead. So instead of reading the four relevant
            records in the range, it reads the entire table (6 records). It
            then evaluates the table's condition (and here it goes wrong).
            Since the entire condition has been pushed down to the storage
            engine using ICP, all 6 records qualify. (Note that the storage
            engine will not evaluate the pushed index condition in this
            case, since it was pushed for the k1 index and now we do a
            table scan without any index being used.)
            The result is that here we return six qualifying key values
            instead of four, due to not evaluating the table's condition.
         c. As above.
      
      5. The last two executions of the subquery will also produce wrong
         results for the same reason.
      
      Summary: The problem occurs because all but the first execution of the
      subquery is done as a table scan without evaluating the table's
      condition (which is pushed to the storage engine on a different
      index). This is caused by the create_sort_index() function deleting
      the quick-select object that should have been used for executing the
      subquery as a range scan.
      
      Note that this bug, in addition to causing wrong results, can also
      result in bad performance due to executing the subquery using a table
      scan instead of a range scan. This is an issue in MySQL 5.5.
      
      The fix for this problem is to prevent the quick-select object that
      the optimizer created from being deleted when create_sort_index() is
      doing clean-up of the join-tab. This ensures that the quick-select
      object and the corresponding pushed index condition remain available
      and are used by all following executions of the subquery.
      
      
      sql/sql_select.cc:
        Fix for Bug#12667154: Change how create_sort_index() cleans up the 
        join_tab's select and quick-select objects in order to avoid that a 
        quick-select object created outside of create_sort_index() is deleted.
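
      A sketch of the ownership rule the fix enforces (assumed fields, not
      the real JOIN_TAB): sort clean-up only deletes what it created itself:

        #include <cstddef>

        struct quick_select_like { /* range-scan state */ };

        struct join_tab_like
        {
          quick_select_like *quick;
          bool quick_created_by_sort; /* assumed flag for locally created objects */
        };

        /* end of create_sort_index(): leave optimizer-created quick-selects
           alone so later subquery executions can still do a range scan */
        void cleanup_after_sort(join_tab_like *tab)
        {
          if (tab->quick_created_by_sort)
          {
            delete tab->quick;
            tab->quick= NULL;
            tab->quick_created_by_sort= false;
          }
        }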
    • No commit message · 3780f473
      unknown authored
    • Bug #12752572 61579: REPLICATION FAILURE WHILE · bcb5d737
      Annamalai Gurusami authored
      INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
      
      When an insert statement like "insert into t values (1),(2),(3)" is
      executed, the autoincrement values assigned to these three rows are
      expected to be contiguous.  In the given lock mode
      (innodb_autoinc_lock_mode=1), the AUTO-INC lock will be released
      before the end of the statement.  So to make the autoincrement values
      contiguous for a given statement, we need to reserve the autoincrement
      values at the beginning of the statement.
      
      rb://1074 approved by Alexander Nozdrin
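
      The reservation idea, as a minimal sketch (simplified, assumed names;
      InnoDB's real counter lives in its data dictionary objects):

        #include <pthread.h>

        struct autoinc_counter
        {
          pthread_mutex_t mutex;  /* assumed to be initialized elsewhere */
          unsigned long long next_value;
        };

        /* claim a contiguous block of 'rows' values in one step, so the
           values stay contiguous even though the lock is released before
           the end of the statement */
        unsigned long long reserve_autoinc(struct autoinc_counter *c,
                                           unsigned long long rows)
        {
          unsigned long long first;
          pthread_mutex_lock(&c->mutex);
          first= c->next_value;
          c->next_value+= rows;
          pthread_mutex_unlock(&c->mutex);
          return first;
        }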
  4. 15 May, 2012 8 commits
  5. 14 May, 2012 1 commit
  6. 10 May, 2012 2 commits
    • Annamalai Gurusami · 326b40c9
    • Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED · 391ea219
      Annamalai Gurusami authored
      BY A CONCURRENT TRANSACTION
      
      The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
      a table handler clone. InnoDB does not provide a clone operation:
      ha_innobase::clone() is not there, and handler::clone() does not
      take care of ha_innobase->prebuilt->select_lock_type.  Because of
      this, what happens is that for one index we do a locking read, while
      for the other index we were doing a non-locking (consistent) read.
      The patch introduces the ha_innobase::clone() member function.
      It is implemented similarly to ha_myisam::clone(): it calls the
      base class handler::clone() and then does any additional operations
      required, setting ha_innobase->prebuilt->select_lock_type
      correctly.
      
      rb://1060 approved by Marko
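
      The override pattern described, as a rough sketch (assumed classes,
      not the real handler API):

        struct handler_like
        {
          virtual handler_like *clone() const = 0;
          virtual ~handler_like() {}
        };

        struct innobase_like : handler_like
        {
          int select_lock_type; /* stands in for prebuilt->select_lock_type */

          innobase_like() : select_lock_type(0) {}

          handler_like *clone() const
          {
            /* generic construction first (the base-class part of the clone),
               then copy the engine state the generic code knows nothing
               about; this last step is what the bug was missing */
            innobase_like *copy= new innobase_like();
            copy->select_lock_type= select_lock_type;
            return copy;
          }
        };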
  7. 09 May, 2012 1 commit
  8. 08 May, 2012 1 commit
  9. 07 May, 2012 3 commits
    • Merge 5.5.24 back into main 5.5. · 5be07cea
      Joerg Bruehe authored
      This is a weave merge, but without any conflicts.
      In 14 source files, the copyright year needed to be updated to 2012.
    • Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY · 1d47bbe3
      Venkata Sidagam authored
                           CAUSES RESTORE PROBLEM
      
      Merging the fix from mysql-5.1 to mysql-5.5
      
      mysql-test/t/mysqldump.test:
        The test case added as part of this fix differs from the one in
        mysql-5.1: in mysql-5.5 and mysql-5.6, dropping the mysql database
        fails when logging is enabled, hence those lines were removed.
    • Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY · e7364ec2
      Venkata Sidagam authored
                           CAUSES RESTORE PROBLEM
      Problem Statement:
      ------------------
      mysqldump does not produce the dump statements for the general_log and
      slow_log tables. That is because of the fix for Bug#26121. Hence, after
      dropping the mysql database and applying the dump with logging
      enabled, "'general_log' table not found" errors are logged into the
      server log file.
      
      Analysis:
      ---------
      As part of the fix for Bug#26121, we skipped dumping the general_log
      and slow_log tables, because the data dump of those tables takes
      LOCKS, which is not allowed for log tables.
      
      Fix:
      ----
      We came up with an approach where, instead of taking both the meta
      data and the data dump for those tables, we take only the meta data
      dump, which doesn't need LOCKS.
      As part of fixing the issue we came up with the algorithm below.
      Design before the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the table list.
      3) Take the TL_READ lock on the tables which are present in the table
         list and do 'show create table'.
      4) Release the lock.

      Design with the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the table list.
      3) Explicitly call 'show create table' for general_log and
         slow_log.
      4) Take the TL_READ lock on the tables which are present in the table
         list and do 'show create table'.
      5) Release the lock.
      
      While taking the meta data dump for general_log and slow_log, the
      "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
      This is because we skipped "DROP TABLE" for those tables:
      "DROP TABLE" fails for these tables if logging is enabled.
      The customer applies the dump with logging enabled, so if the dump
      had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
      statements for those tables were removed.

      After the fix we could initially observe "Table 'mysql.general_log'
      doesn't exist" errors; that is because in the customer
      scenario they drop the mysql database with logging
      enabled, hence those errors are expected. Once we apply the
      dump which was taken before the "drop database mysql", the errors
      are no longer there.
      
      client/mysqldump.c:
        In get_table_structure(), added code to skip the DROP TABLE
        statements for the general_log and slow_log tables, because when
        logging is enabled those statements will fail. Also replaced CREATE
        TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make
        sure the CREATE statement for those tables doesn't fail, since the
        DROP statements were removed.
        In dump_all_tables_in_db(), added code to call get_table_structure()
        for the general_log and slow_log tables.
      mysql-test/r/mysqldump.result:
        Added a test as part of fix for Bug #11754178
      mysql-test/t/mysqldump.test:
        Added a test as part of fix for Bug #11754178
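
      The special-casing described, as a hypothetical sketch (assumed
      helper, not the real mysqldump code):

        #include <stdio.h>
        #include <string.h>

        /* meta-data-only handling for the two log tables */
        void dump_structure(const char *table, FILE *out)
        {
          int is_log_table= !strcmp(table, "general_log") ||
                            !strcmp(table, "slow_log");
          if (!is_log_table)
            fprintf(out, "DROP TABLE IF EXISTS `%s`;\n", table);
          fprintf(out, "CREATE TABLE %s`%s` ( /* ... */ );\n",
                  is_log_table ? "IF NOT EXISTS " : "", table);
          /* row data for the log tables is intentionally not dumped: the
             data dump would need LOCKS, which log tables do not allow */
        }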
  10. 04 May, 2012 2 commits
    • Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY · daafaa0f
      Venkata Sidagam authored
                           CAUSES RESTORE PROBLEM
      Problem Statement:
      ------------------
      mysqldump does not produce the dump statements for the general_log and
      slow_log tables. That is because of the fix for Bug#26121. Hence, after
      dropping the mysql database and applying the dump with logging
      enabled, "'general_log' table not found" errors are logged into the
      server log file.
      
      Analysis:
      ---------
      As part of the fix for Bug#26121, we skipped dumping the general_log
      and slow_log tables, because the data dump of those tables takes
      LOCKS, which is not allowed for log tables.
      
      Fix:
      ----
      We came up with an approach where, instead of taking both the meta
      data and the data dump for those tables, we take only the meta data
      dump, which doesn't need LOCKS.
      As part of fixing the issue we came up with the algorithm below.
      Design before the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the table list.
      3) Take the TL_READ lock on the tables which are present in the table
         list and do 'show create table'.
      4) Release the lock.

      Design with the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the table list.
      3) Explicitly call 'show create table' for general_log and
         slow_log.
      4) Take the TL_READ lock on the tables which are present in the table
         list and do 'show create table'.
      5) Release the lock.
      
      While taking the meta data dump for general_log and slow_log, the
      "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
      This is because we skipped "DROP TABLE" for those tables:
      "DROP TABLE" fails for these tables if logging is enabled.
      The customer applies the dump with logging enabled, so if the dump
      had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
      statements for those tables were removed.

      After the fix we could initially observe "Table 'mysql.general_log'
      doesn't exist" errors; that is because in the customer
      scenario they drop the mysql database with logging
      enabled, hence those errors are expected. Once we apply the
      dump which was taken before the "drop database mysql", the errors
      are no longer there.
      
      client/mysqldump.c:
        In get_table_structure(), added code to skip the DROP TABLE
        statements for the general_log and slow_log tables, because when
        logging is enabled those statements will fail. Also replaced CREATE
        TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make
        sure the CREATE statement for those tables doesn't fail, since the
        DROP statements were removed.
        In dump_all_tables_in_db(), added code to call get_table_structure()
        for the general_log and slow_log tables.
      mysql-test/r/mysqldump.result:
        Added a test as part of fix for Bug #11754178
      mysql-test/t/mysqldump.test:
        Added a test as part of fix for Bug #11754178
    • In perl, to break out of a foreach loop we need to use · b757c130
      Annamalai Gurusami authored
      the keyword "last" and not "break".  Fixing the failing
      test case. 