1. 21 Aug, 2013 6 commits
    • (Null) merge mysql-5.1 to mysql-5.5. · 4c81d5b3
      Marko Mäkelä authored
      4c81d5b3
    • Merge mysql-5.1 to working copy. · e7263062
      Marko Mäkelä authored
      e7263062
    • Merge mysql-5.1 to mysql-5.5. · 1676823e
      Marko Mäkelä authored
      1676823e
    • Bug#12560151 61132: infinite loop in buf_page_get_gen() when handling
      compressed pages · ec2389dc
      Marko Mäkelä authored
      
      After loading a compressed-only page in buf_page_get_gen(), we allocate a new
      block for decompression. The problem is that the compressed page is neither
      buffer-fixed nor I/O-fixed by the time we call buf_LRU_get_free_block(),
      so it may end up being evicted and handed back to us as the new block.
      
      buf_page_get_gen(): Temporarily buffer-fix the compressed-only block
      while allocating memory for an uncompressed page frame.
      This should prevent this form of the infinite loop, which is more likely
      with a small innodb_buffer_pool_size.
      
      rb#2511 approved by Jimmy Yang, Sunny Bains
      ec2389dc
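      A compact, self-contained model of the pinning pattern described above
      (hypothetical names, not the actual buf0buf.cc code): the compressed-only
      block is buffer-fixed across the allocation so the eviction path cannot
      reclaim it and hand it back as the "free" block.

          #include <cassert>
          #include <cstdint>

          struct zip_block {
            uint32_t buf_fix_count = 0;   // > 0: the block must not be evicted
            bool io_fixed = false;
          };

          // Stand-in for buf_LRU_get_free_block(): it may evict any block that is
          // neither buffer-fixed nor I/O-fixed while hunting for free memory.
          char* allocate_uncompressed_frame(const zip_block& keep) {
            assert(keep.buf_fix_count > 0 || keep.io_fixed);  // the fix guarantees this
            return new char[16384];                           // pretend 16 KiB page frame
          }

          char* prepare_decompression(zip_block& zip) {
            ++zip.buf_fix_count;                  // the fix: pin the compressed-only block
            char* frame = allocate_uncompressed_frame(zip);
            --zip.buf_fix_count;                  // unpin after the frame is attached
            return frame;                         // caller owns the frame in this model
          }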
    • Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
      "SHOW PROCESSLIST" · 7fffec87
      Praveenkumar Hulakund authored
      
      Merging from 5.1 to 5.5
      7fffec87
    • Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND
      "SHOW PROCESSLIST" · 3b1e98d2
      Praveenkumar Hulakund authored
      
      Analysis:
      ----------
      If one connection changes its default database while another
      connection executes "SHOW PROCESSLIST", the latter may access
      invalid memory when it reads the first connection's database name.

      The db name stored in THD is not guarded while the owner thread
      changes its default database, nor while "SHOW PROCESSLIST" reads it.
      So if THD.db is freed by the "owner" thread at the same time as
      another thread executing "SHOW PROCESSLIST" tries to read and copy
      THD.db, we may end up with the issue reported here.
      
      Fix:
      ----------
      Used mutex "LOCK_thd_data" to guard THD.db while freeing it
      and while copying it to processlist.
      3b1e98d2
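      A minimal sketch of the locking pattern the fix describes, using
      std::mutex in place of the server's mutex type; the struct and member
      names here are simplified stand-ins for THD, not the real class.

          #include <cstring>
          #include <mutex>
          #include <string>

          struct thd_model {
            std::mutex LOCK_thd_data;   // guards db below
            char* db = nullptr;

            // Owner thread: frees/replaces the default database under the lock.
            void set_db(const char* new_db) {
              std::lock_guard<std::mutex> guard(LOCK_thd_data);
              delete[] db;
              db = nullptr;
              if (new_db) {
                db = new char[std::strlen(new_db) + 1];
                std::strcpy(db, new_db);
              }
            }

            // SHOW PROCESSLIST thread: copies the name under the same lock.
            std::string copy_db_for_processlist() {
              std::lock_guard<std::mutex> guard(LOCK_thd_data);
              return db ? std::string(db) : std::string();
            }
          };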
  2. 20 Aug, 2013 3 commits
    • Reverted Release version · fcc00114
      Balasubramanian Kandasamy authored
      fcc00114
    • No commit message · bebc9ae5
      Balasubramanian Kandasamy authored
    • Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR
      STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
      IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK". · fc2c6692
      Dmitry Lenev authored
      
      The problem in the first bug report was that although a deadlock
      involving metadata locks was reported using the same error code and
      message as an InnoDB deadlock, it did not roll back the transaction
      the way the latter does. This confused users, because in some cases
      after ER_LOCK_DEADLOCK the transaction could be restarted immediately
      and in other cases an explicit rollback was required.

      The problem in the second bug report was that although an InnoDB
      deadlock caused transaction rollback in all storage engines, it did
      not cause the release of metadata locks. So concurrent DDL on the
      tables used in the transaction was blocked until an implicit or
      explicit COMMIT or ROLLBACK was issued in the connection which got
      the InnoDB deadlock.

      The former issue stemmed from the fact that, when support for
      detecting and reporting metadata lock deadlocks was added, we
      erroneously assumed that on a deadlock InnoDB rolls back only the
      last statement and not the whole transaction (which is actually what
      happens on an InnoDB lock timeout), and so we did not implement
      rollback of transactions on MDL deadlocks.

      The latter issue was caused by the fact that rollback of a
      transaction due to deadlock is carried out by setting the
      THD::transaction_rollback_request flag at the point where the
      deadlock is detected and performing the rollback inside the
      trans_rollback_stmt() call when this flag is set. Since
      trans_rollback_stmt() is not aware of MDL locks, no MDL locks are
      released.

      This patch solves these two problems in the following way:

      - When an MDL deadlock is detected, transaction rollback is requested
        by setting the THD::transaction_rollback_request flag.

      - The code that performs the rollback when
        THD::transaction_rollback_request is set is moved out of
        trans_rollback_stmt(). We now handle the rollback request at the
        same level where we call trans_rollback_stmt() and release
        statement/transaction MDL locks.
      fc2c6692
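      A simplified sketch of the control flow described above (hypothetical
      function names, not the real trans_* API): deadlock detection only
      requests a rollback, and the request is honoured at the level that also
      knows how to release MDL locks.

          struct session {
            bool transaction_rollback_request = false;
          };

          void report_mdl_deadlock(session& s) {
            s.transaction_rollback_request = true;   // request only; no rollback here
          }

          void rollback_statement(session&) { /* statement-level rollback only */ }
          void rollback_transaction(session&) { /* full rollback in all engines */ }
          void release_transactional_mdl(session&) { /* MDL released at this level */ }

          void end_of_statement(session& s, bool statement_failed) {
            if (statement_failed) rollback_statement(s);
            if (s.transaction_rollback_request) {    // handled outside trans_rollback_stmt()
              rollback_transaction(s);
              release_transactional_mdl(s);
              s.transaction_rollback_request = false;
            }
          }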
  3. 19 Aug, 2013 1 commit
  4. 16 Aug, 2013 4 commits
    • dummy commit · 1e904e36
      Balasubramanian Kandasamy authored
      1e904e36
    • No commit message · f1e23d73
      Balasubramanian Kandasamy authored
    • Merge mysql-5.1 to mysql-5.5. · 553988a2
      Marko Mäkelä authored
      553988a2
    • Bug#17312846 CHECK TABLE ASSERTION FAILURE
      DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1 · fb2a2d25
      Marko Mäkelä authored
      
      The function row_sel_sec_rec_is_for_clust_rec() was incorrectly
      preparing to compare a NULL column prefix in a secondary index with a
      non-NULL column in a clustered index.
      
      This can trigger an assertion failure in the 5.1 InnoDB Plugin and
      later. In the built-in InnoDB of MySQL 5.1 and earlier, we would
      apparently only do some extra work, trimming the clustered index
      field for the comparison.

      The code might actually have worked properly apart from this debug
      assertion failure. It is merely doing some extra work in fetching a
      BLOB column and then comparing it to NULL (which returns the same
      result no matter what the BLOB contents are).
      
      While the test case involves CHECK TABLE, this could theoretically
      occur during any read that uses a secondary index on a column prefix
      of a column that can be NULL.
      
      rb#3101 approved by Mattias Jonsson
      fb2a2d25
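      An illustrative sketch, with hypothetical types, of the guard the fix
      adds: when the secondary-index column prefix is SQL NULL, only NULL-ness
      is compared and the clustered (possibly externally stored) column is
      never fetched or trimmed.

          #include <cstddef>

          struct field_ref {
            const unsigned char* data;   // nullptr represents SQL NULL
            size_t len;
          };

          bool sec_prefix_matches_clust(const field_ref& sec_prefix,
                                        const field_ref& clust) {
            if (sec_prefix.data == nullptr || clust.data == nullptr) {
              // Compare NULL-ness only; skip the BLOB fetch and prefix trimming.
              return (sec_prefix.data == nullptr) == (clust.data == nullptr);
            }
            if (sec_prefix.len > clust.len) return false;
            for (size_t i = 0; i < sec_prefix.len; i++)
              if (sec_prefix.data[i] != clust.data[i]) return false;
            return true;   // byte-wise prefix match (real code also honours collations)
          }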
  5. 15 Aug, 2013 2 commits
    • Merge mysql-5.1 to mysql-5.5. · 49ffda09
      Marko Mäkelä authored
      49ffda09
    • Bug#17302896 DOUBLE PURGE ON ROLLBACK OF UPDATING A DELETE-MARKED RECORD · 31809607
      Marko Mäkelä authored
      There was a race condition in the rollback of TRX_UNDO_UPD_DEL_REC.
      
      Once row_undo_mod_clust() has rolled back the changes by the rolling-back
      transaction, it attempts to purge the delete-marked record, if possible, in a
      separate mini-transaction.
      
      However, row_undo_mod_remove_clust_low() fails to check whether the
      DB_TRX_ID of the record that it finds after repositioning the cursor
      is still the same. If it is not, the record was purged and another
      record was inserted in its place.
      
      So, the rollback would have performed an incorrect purge, breaking the
      locking rules and causing corruption.
      
      The problem was found by creating a table that contains a unique
      secondary index and a primary key, and two threads running REPLACE
      with only one value for the unique column, so that the uniqueness
      constraint would be violated all the time, leading to statement
      rollback.
      
      This bug exists in all InnoDB versions (I checked MySQL 3.23.53).
      It has become easier to repeat in 5.5 and 5.6 thanks to scalability
      improvements and a dedicated purge thread.
      
      rb#3085 approved by Jimmy Yang
      31809607
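      A sketch of the missing check, with hypothetical names: after
      repositioning the cursor in the new mini-transaction, the rollback
      verifies that the record still carries the rolling-back transaction's
      DB_TRX_ID before purging it.

          #include <cstdint>

          struct clust_rec { uint64_t db_trx_id; };

          enum class undo_result { purged, skipped };

          undo_result remove_after_rollback(const clust_rec& repositioned_rec,
                                            uint64_t rolling_back_trx_id) {
            if (repositioned_rec.db_trx_id != rolling_back_trx_id) {
              // The record was already purged and another record took its place;
              // purging it here would break the locking rules and corrupt the index.
              return undo_result::skipped;
            }
            // ... safe to purge the delete-marked record in this mini-transaction ...
            return undo_result::purged;
          }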
  6. 14 Aug, 2013 2 commits
  7. 12 Aug, 2013 5 commits
    • Bug #16776528 RACE CONDITION CAN CAUSE MYSQLD TO REMOVE SOCKET FILE ERRANTLY · 793b5835
      Anirudh Mangipudi authored
      Problem Description:
      A mysqld_safe instance is started, and InnoDB crash recovery begins,
      which takes a few seconds to complete. While this crash recovery is
      in progress, another mysqld_safe instance is started with the same
      server startup parameters. Since mysqld's pid file is absent during
      crash recovery, the second instance assumes there is no other process
      and tries to acquire a lock on the ibdata files in the datadir. This
      step fails, and the second instance keeps retrying 100 times, each
      with a delay of 1 second. After the 100 attempts the server goes
      down, but on its way down it hits the mysqld_safe script's cleanup
      section, which blindly deletes the socket and pid files without any
      check. Since no lock is placed on the socket file, it gets deleted.

      Solution:
      We create a mysqld_safe.pid file in the datadir, which protects the
      running server instance's resources by storing mysqld_safe's process
      id in it. We check whether a mysqld_safe.pid file exists in the
      datadir; if it does, we check whether the pid it contains belongs to
      an active process. If it does, the script logs an error saying
      "A mysqld_safe instance is already running". Otherwise it logs the
      present mysqld_safe's pid into the mysqld_safe.pid file.
      793b5835
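      The actual fix lives in the mysqld_safe shell script; the following is
      only a C++ sketch of the check it performs, with a hypothetical pid-file
      path. kill(pid, 0) merely probes whether a process with that pid exists.

          #include <signal.h>
          #include <sys/types.h>

          #include <fstream>
          #include <iostream>

          bool another_mysqld_safe_running(const char* pidfile) {
            std::ifstream in(pidfile);
            pid_t pid = 0;
            if (!(in >> pid) || pid <= 0) return false;   // no usable pid file yet
            return kill(pid, 0) == 0;                     // 0: a live process owns that pid
          }

          int main() {
            const char* pidfile = "/var/lib/mysql/mysqld_safe.pid";  // hypothetical datadir
            if (another_mysqld_safe_running(pidfile)) {
              std::cerr << "A mysqld_safe instance is already running\n";
              return 1;
            }
            // ... otherwise record our own pid in mysqld_safe.pid and continue ...
            return 0;
          }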
    • Bug #16776528 RACE CONDITION CAN CAUSE MYSQLD TO REMOVE SOCKET FILE ERRANTLY · 8757f395
      Anirudh Mangipudi authored
      Problem Description:
      A mysqld_safe instance is started, and InnoDB crash recovery begins,
      which takes a few seconds to complete. While this crash recovery is
      in progress, another mysqld_safe instance is started with the same
      server startup parameters. Since mysqld's pid file is absent during
      crash recovery, the second instance assumes there is no other process
      and tries to acquire a lock on the ibdata files in the datadir. This
      step fails, and the second instance keeps retrying 100 times, each
      with a delay of 1 second. After the 100 attempts the server goes
      down, but on its way down it hits the mysqld_safe script's cleanup
      section, which blindly deletes the socket and pid files without any
      check. Since no lock is placed on the socket file, it gets deleted.

      Solution:
      We create a mysqld_safe.pid file in the datadir, which protects the
      running server instance's resources by storing mysqld_safe's process
      id in it. We check whether a mysqld_safe.pid file exists in the
      datadir; if it does, we check whether the pid it contains belongs to
      an active process. If it does, the script logs an error saying
      "A mysqld_safe instance is already running". Otherwise it logs the
      present mysqld_safe's pid into the mysqld_safe.pid file.
      8757f395
    • Bug#16860588: CRASH WITH CREATE TABLE ... LIKE ..
      AND PARTITION VALUES IN (NULL) · c08f20d5
      Mattias Jonsson authored
      
      The code assumed there was at least one list element
      in a LIST partitioned table.
      
      Fixed by checking the number of list elements.
      c08f20d5
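      A tiny sketch of the guard, with hypothetical structures: code that used
      to dereference the first element of the LIST definition now checks that
      there is at least one element (presumably a partition defined only as
      VALUES IN (NULL) can leave the stored value list empty).

          #include <cstdint>
          #include <vector>

          struct list_part_def {
            std::vector<int64_t> list_values;   // non-NULL list values of one partition
          };

          // Returns false instead of reading list_values[0] when the list is empty.
          bool first_list_value(const list_part_def& def, int64_t* out) {
            if (def.list_values.empty()) return false;
            *out = def.list_values.front();
            return true;
          }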
    • Bug#17228383: VALGRIND WARNING IN IBUF_DELETE_REC · 8808c6b3
      Mattias Jonsson authored
      Since the mtr_t struct is marked as invalid in DEBUG_VALGRIND builds
      during mtr_commit, checking mtr->inside_ibuf will cause this warning.
      Also, since mtr->inside_ibuf cannot be set in mtr_commit (there is an
      assert for this) and mtr->state is set to MTR_COMMITTED, the
      'ut_ad(!ibuf_inside(&mtr))' check is not needed when
      'ut_ad(mtr.state == MTR_COMMITTED)' is also checked.
      8808c6b3
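      A sketch of the assertion change, using a hypothetical mtr model: after
      commit only the state field is asserted, and the inside-ibuf assertion is
      dropped because it would read memory that valgrind-instrumented builds
      have already marked invalid.

          #include <cassert>

          enum mtr_state { MTR_ACTIVE, MTR_COMMITTED };

          struct mtr_model {
            mtr_state state = MTR_ACTIVE;
            bool inside_ibuf = false;
          };

          void checks_after_commit(const mtr_model& mtr) {
            assert(mtr.state == MTR_COMMITTED);   // sufficient: inside_ibuf cannot be
                                                  // set once the mtr has committed
            // assert(!mtr.inside_ibuf);          // removed: reads invalidated memory
          }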
    • Bug#16614004 - CRASH AFTER READING FREED MEMORY AFTER DOING DDL
      IN STORED ROUTINE · 7b099fd9
      Neeraj Bisht authored
      
      Inside a loop in a stored procedure, we create a partitioned
      table. The CREATE statement is thus treated as a prepared statement:
      it is prepared once, and then executed by each iteration. Thus its Lex
      is reused many times. This Lex contains a part_info member, which
      describes how the partitions should be laid out, including the
      partitioning function. Each execution of the CREATE does this, in
      open_table_from_share():

          tmp= mysql_unpack_partition(thd, share->partition_info_str,
                                      share->partition_info_str_len,
                                      outparam, is_create_table,
                                      share->default_part_db_type,
                                      &work_part_info_used);
          ...
          tmp= fix_partition_func(thd, outparam, is_create_table);

      The first line calls init_lex_with_single_table() which creates
      a TABLE_LIST, necessary for the "field fixing" which will be
      done by the second line; this is how it is created:

          if ((!(table_ident= new Table_ident(thd,
                                              table->s->db,
                                              table->s->table_name, TRUE))) ||
              (!(table_list= select_lex->add_table_to_list(thd,
                                                           table_ident,
                                                           NULL,
                                                           0))))
            return TRUE;

      It is allocated in the execution memory root.
      Then the partitioning function ("id", stored in Lex->part_info)
      is fixed, which calls Item_ident::fix_fields(), which resolves
      "id" to the table_list above, and stores in the item's
      cached_table a pointer to this table_list.
      The table is created, later it is dropped by another statement,
      then we execute again the prepared CREATE. This reuses the Lex,
      thus also its part_info, thus also the item representing the
      partitioning function (part_info is cloned but it's a shallow
      cloning); CREATE wants to fix the item again (which is
      normal, every execution fixes items again), fix_fields ()
      sees that the cached_table pointer is set and picks up the
      pointed table_list. But this last object does not exist
      anymore (it was allocated in the execution memory root of
      the previous execution, so it has been freed), so we access
      invalid memory.
      
      The solution: when creating the table_list, mark that it
      cannot be cached.
      
      7b099fd9
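      A simplified model of the fix (hypothetical types, not the real
      TABLE_LIST/Item classes): the single-table list built for partition
      fixing lives on the per-execution memory root, so it is flagged as
      non-cacheable and fix_fields() re-resolves the column on every execution
      instead of reusing a dangling cached pointer.

          struct table_list_model {
            bool cacheable = true;   // may fix_fields() keep a pointer to this entry?
          };

          struct item_field_model {
            table_list_model* cached_table = nullptr;

            void fix_fields(table_list_model& tables_of_this_execution) {
              if (cached_table != nullptr) {
                // fast path of the earlier code: reuse the previously resolved table;
                // dangerous if that table list was freed with its execution mem-root
                return;
              }
              // resolve against the tables of the current execution, and cache the
              // result only if the table list entry is allowed to be cached
              if (tables_of_this_execution.cacheable)
                cached_table = &tables_of_this_execution;
            }
          };

          // The fix, applied where the single-table list is created for partitioning:
          void init_single_table_list(table_list_model& tl) {
            tl.cacheable = false;   // allocated on the execution mem-root: never cache
          }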
  8. 08 Aug, 2013 1 commit
  9. 07 Aug, 2013 2 commits
    • No commit message · 77cb5952
      unknown authored
      77cb5952
    • Bug#16416302 - CRASH WITH LOSSY RBR REPLICATION
      OF OLD STYLE DECIMALS · e55d4a88
      Venkatesh Duggirala authored
      
      Problem: In RBR, the slave is unable to read the row buffer
      properly when the row event contains a MYSQL_TYPE_DECIMAL
      (old-style decimal) column.

      Analysis: In RBR, the slave assumes that the master sends
      metadata for all column types (text, blob, varchar, old
      decimal, new decimal, float and a few other types) along with
      the row buffer event. But the master does not send this
      metadata for old-style decimal columns, so the slave crashes
      because the precision of these columns is unknown. The master
      cannot start sending this precision value to the slave, as that
      would break cross-version replication compatibility.

      Fix: To fix the crash, the slave now throws an error if it
      receives the old-style decimal data type. Users should consider
      changing old-style decimal columns to the new decimal data type
      with an "ALTER TABLE ... MODIFY COLUMN" statement, as described in
      http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-previous-series.html.
      e55d4a88
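      A hedged sketch of the new slave-side behaviour (hypothetical enum and
      error handling, not the real log_event code): an old-style DECIMAL
      column in a row event is rejected with an error instead of being decoded
      with a guessed precision.

          #include <iostream>

          enum column_type { TYPE_OLD_DECIMAL, TYPE_NEWDECIMAL, TYPE_FLOAT /* ... */ };

          // Returns false (a replication error) for the unsupported old-style type.
          bool column_type_supported_in_row_event(column_type t) {
            if (t == TYPE_OLD_DECIMAL) {
              std::cerr << "Row event contains an old-style DECIMAL column; "
                           "ALTER the column to the new DECIMAL type on the master\n";
              return false;
            }
            return true;
          }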
  10. 31 Jul, 2013 3 commits
    • Merge from mysql-5.5.33-release · a5aa74c3
      unknown authored
      a5aa74c3
    • Bug#16997513 MY_STRTOLL10 ACCEPTING OVERFLOWED UNSIGNED LONG LONG VALUES AS NORMAL ONES · b79864ae
      Joao Gramacho authored
      Merge from mysql-5.1 into mysql-5.5
      b79864ae
    • Bug#16997513 MY_STRTOLL10 ACCEPTING OVERFLOWED UNSIGNED LONG LONG VALUES AS NORMAL ONES · e5a1966b
      Joao Gramacho authored
      Problem:
      =======
      my_strtoll10() behaved incorrectly when converting strings with
      numbers in the following format:
      "184467440XXXXXXXXXYY"

      where XXXXXXXXX > 737095516 and YY <= 15.

      Samples of problematic numbers:
      "18446744073709551915"
      "18446744073709552001"

      Instead of returning the largest unsigned long long value and setting
      overflow in the returned error code, my_strtoll10() returned the lower
      64 bits of the evaluated number and did not set overflow in the
      returned error code.

      Analysis:
      ========
      While trying to fix bug 16820156, I found this bug in the overflow
      check of my_strtoll10().

      This function, when it receives a string with an integer larger than
      18446744073709551615 (the largest unsigned long long number), should
      return the largest unsigned long long number and set overflow in the
      returned error code.

      Because of a wrong overflow evaluation, the function did not catch the
      overflow cases where (i == cutoff) && (j > cutoff2) && (k <= cutoff3).
      When the overflow evaluation fails, the function returns the lower
      64 bits of the evaluated number and does not set overflow in the
      returned error code.

      Fix:
      ===
      Corrected the overflow evaluation in my_strtoll10().
      e5a1966b
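      A sketch of the corrected three-part overflow test, using the constants
      implied by the description above (this is not the real my_strtoll10()
      code): a 20-digit candidate is split into i = first 9 digits, j = next
      9 digits, k = last 2 digits, and 18446744073709551615 splits into
      184467440 / 737095516 / 15.

          #include <cstdint>

          static const uint64_t cutoff  = 184467440ULL;   // first 9 digits of ULLONG_MAX
          static const uint64_t cutoff2 = 737095516ULL;   // next 9 digits
          static const uint64_t cutoff3 = 15ULL;          // final 2 digits

          // True when the 20-digit value (i, j, k) exceeds 18446744073709551615.
          // The broken check missed i == cutoff && j > cutoff2 && k <= cutoff3,
          // e.g. "18446744073709551915" (j = 737095519, k = 15).
          bool overflows_ulonglong(uint64_t i, uint64_t j, uint64_t k) {
            if (i != cutoff) return i > cutoff;
            if (j != cutoff2) return j > cutoff2;
            return k > cutoff3;
          }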
  11. 30 Jul, 2013 2 commits
    • Bug#17083851 BACKPORT BUG#11765744 TO 5.1, 5.5 AND 5.6 · 8412ac00
      prabakaran thirumalai authored
            
      Description:
      The original fix for Bug#11765744 changed a mutex to a read-write
      lock to avoid recursive acquisition of the LOCK_status mutex.
      On Windows, locking a read-write lock recursively is not safe.
      Slim read-write locks, which MySQL uses when the Windows version
      supports them, do not support recursion according to their
      documentation. For our own implementation of read-write locks,
      used when the Windows version does not support SRW locks,
      recursive locking can easily lead to deadlock if there are
      concurrent lock requests.

      Fix:
      This patch reverts the previous fix for bug#11765744 that used
      read-write locks. Instead, the problem of recursive locking of the
      LOCK_status mutex is solved by tracking the recursion level with a
      counter in the THD object and acquiring the lock only once, when we
      enter the fill_status() function for the first time.
      8412ac00
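      A simplified model of the fix (hypothetical names): fill_status() can be
      re-entered on the same connection, so a per-THD recursion counter makes
      sure the global LOCK_status mutex is locked only on the outermost entry
      and unlocked only on the matching exit.

          #include <mutex>

          std::mutex LOCK_status_model;   // stands in for the global LOCK_status

          struct session {
            int fill_status_recursion_level = 0;
          };

          void fill_status_model(session& thd) {
            if (thd.fill_status_recursion_level++ == 0)
              LOCK_status_model.lock();              // only the first entry locks

            // ... gather and copy the status variables; may call itself again ...

            if (--thd.fill_status_recursion_level == 0)
              LOCK_status_model.unlock();            // only the outermost exit unlocks
          }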
    • Bug#17083851 BACKPORT BUG#11765744 TO 5.1, 5.5 AND 5.6 · 592a2b2a
      prabakaran thirumalai authored
      Description:
      The original fix for Bug#11765744 changed a mutex to a read-write
      lock to avoid recursive acquisition of the LOCK_status mutex.
      On Windows, locking a read-write lock recursively is not safe.
      Slim read-write locks, which MySQL uses when the Windows version
      supports them, do not support recursion according to their
      documentation. For our own implementation of read-write locks,
      used when the Windows version does not support SRW locks,
      recursive locking can easily lead to deadlock if there are
      concurrent lock requests.

      Fix:
      This patch reverts the previous fix for bug#11765744 that used
      read-write locks. Instead, the problem of recursive locking of the
      LOCK_status mutex is solved by tracking the recursion level with a
      counter in the THD object and acquiring the lock only once, when we
      enter the fill_status() function for the first time.
      592a2b2a
  12. 29 Jul, 2013 3 commits
    • Bug#13417564 SKIP SLEEP IN SRV_MASTER_THREAD WHEN
      SHUTDOWN IS IN PROGRESS · ef2e43ce
      Aditya A authored
      
      [ Null Merge from mysql-5.1]
      ef2e43ce
    • Bug#13417564 SKIP SLEEP IN SRV_MASTER_THREAD WHEN
      SHUTDOWN IS IN PROGRESS · 1c4a3c52
      Aditya A authored
      
      PROBLEM
      -------
      In the background thread srv_master_thread() we have a
      one-second delay loop which continuously monitors server
      activity. If the server is inactive (without any user activity)
      or in a shutdown state, we do some background work such as
      flushing changes. In the current code we do not check whether
      the server is in a shutdown state before sleeping for one second.

      FIX
      ---
      If the server is in a shutdown state, do not go into the
      one-second sleep.
      1c4a3c52
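      An illustrative model of the change (hypothetical names, not the real
      srv0srv.cc code): the master thread's once-per-second wait is skipped as
      soon as shutdown has been signalled.

          #include <atomic>
          #include <chrono>
          #include <thread>

          enum shutdown_state { SHUTDOWN_NONE, SHUTDOWN_IN_PROGRESS };
          std::atomic<shutdown_state> srv_shutdown{SHUTDOWN_NONE};

          void master_thread_pause() {
            if (srv_shutdown.load() != SHUTDOWN_NONE)
              return;                                    // do not sleep during shutdown
            std::this_thread::sleep_for(std::chrono::seconds(1));
          }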
    • Bug #11766851 QUERYING I_S.PARTITIONS CHANGES THE CARDINALITY OF THE
      PARTITIONS. · 21b42b81
      Aditya A authored
      
      ANALYSIS
      --------
      Whenever we query I_S.partitions,
      ha_partition::get_dynamic_partition_info()
      is called, which resets the cardinality
      according to the number of rows in the last
      partition.

      Fix
      ---
      When we call get_dynamic_partition_info(),
      avoid passing the flag HA_STATUS_CONST
      to info(), since HA_STATUS_CONST should
      ideally not be requested per partition.

      [Approved by mattiasj rb#2830]
      
      21b42b81
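      A sketch of the change, with hypothetical flag values and a stub handler
      (the real flags live in the handler API): the per-partition statistics
      refresh no longer passes HA_STATUS_CONST to info(), so reading
      I_S.partitions does not rewrite the stored cardinality.

          enum : unsigned {
            HA_STATUS_CONST_F    = 1u << 0,   // "constant" stats, e.g. index cardinality
            HA_STATUS_VARIABLE_F = 1u << 1,   // per-call stats such as row counts
            HA_STATUS_TIME_F     = 1u << 2
          };

          struct partition_handler {
            void info(unsigned /*flags*/) { /* refresh the selected statistics */ }
          };

          void get_dynamic_partition_info_model(partition_handler& part) {
            // before: part.info(HA_STATUS_CONST_F | HA_STATUS_VARIABLE_F | HA_STATUS_TIME_F);
            part.info(HA_STATUS_VARIABLE_F | HA_STATUS_TIME_F);   // HA_STATUS_CONST dropped
          }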
  13. 27 Jul, 2013 1 commit
    • BUG#16290902 DROP TEMP TABLE IF EXISTS CAN CAUSE POINT
      IN TIME RECOVERY FAILURE ON SLAVES · eb152f86
      Venkatesh Duggirala authored
      
      Problem:
      DROP TEMP TABLE IF EXISTS commands can cause point-in-time
      recovery (re-applying the binlog) failures.

      Analysis:
      In RBR, 'DROP TEMPORARY TABLE' commands are always binlogged
      with an 'IF EXISTS' clause added. Also, the slave SQL thread does
      not check replicate.* filter rules for "DROP TEMPORARY TABLE IF
      EXISTS" queries. If log-slave-updates is enabled on the slave,
      these queries are binlogged in the form "USE `db`;
      DROP TEMPORARY TABLE IF EXISTS `t1`;" irrespective of the
      filtering rules and irrespective of whether `db` exists.
      When users try to recover a slave from its own binlog, the
      USE `db` command may fail if `db` is not present on the slave.

      Fix:
      When writing the 'DROP TEMPORARY TABLE IF EXISTS' query to the
      binlog, 'USE `db`' is no longer written and the table name in the
      query is a fully qualified table name.
      Eg:
      'USE `db`; DROP TEMPORARY TABLE IF EXISTS `t1`;'
      will be logged as
      'DROP TEMPORARY TABLE IF EXISTS `db`.`t1`;'.
      eb152f86
  14. 25 Jul, 2013 2 commits
  15. 24 Jul, 2013 2 commits
    • Fix for Bug#16614004 CRASH AFTER READING FREED MEMORY AFTER DOING DDL IN STORED ROUTINE · 992a6630
      Guilhem Bichot authored
      Inside a loop in a stored procedure, we create a partitioned
      table. The CREATE statement is thus treated as a prepared statement:
      it is prepared once, and then executed by each iteration. Thus its Lex
      is reused many times. This Lex contains a part_info member, which
      describes how the partitions should be laid out, including the
      partitioning function. Each execution of the CREATE does this, in
      open_table_from_share():

          tmp= mysql_unpack_partition(thd, share->partition_info_str,
                                      share->partition_info_str_len,
                                      outparam, is_create_table,
                                      share->default_part_db_type,
                                      &work_part_info_used);
          ...
          tmp= fix_partition_func(thd, outparam, is_create_table);

      The first line calls init_lex_with_single_table() which creates
      a TABLE_LIST, necessary for the "field fixing" which will be
      done by the second line; this is how it is created:

          if ((!(table_ident= new Table_ident(thd,
                                              table->s->db,
                                              table->s->table_name, TRUE))) ||
              (!(table_list= select_lex->add_table_to_list(thd,
                                                           table_ident,
                                                           NULL,
                                                           0))))
            return TRUE;

      It is allocated in the execution memory root.
      Then the partitioning function ("id", stored in Lex->part_info)
      is fixed, which calls Item_ident::fix_fields(), which resolves
      "id" to the table_list above, and stores in the item's
      cached_table a pointer to this table_list.
      The table is created, later it is dropped by another statement,
      then we execute again the prepared CREATE. This reuses the Lex,
      thus also its part_info, thus also the item representing the
      partitioning function (part_info is cloned but it's a shallow
      cloning); CREATE wants to fix the item again (which is
      normal, every execution fixes items again), fix_fields ()
      sees that the cached_table pointer is set and picks up the
      pointed table_list. But this last object does not exist
      anymore (it was allocated in the execution memory root of
      the previous execution, so it has been freed), so we access
      invalid memory.
      The solution: when creating the table_list, mark that it
      cannot be cached.
      992a6630
    • Bug#16865959 - PLEASE BACKPORT BUG 14749800. · 03940a7b
      Praveenkumar Hulakund authored
      Since log_throttle is not available in 5.5, the logging of an error
      message when a thread fails to create a new connection in
      "create_thread_to_handle_connection" is not backported.

      Since the function "my_plugin_log_message" is not available in the
      5.5 version, and since there is an incompatibility between the
      sql_print_XXX functions compiled with g++ and the audit log plugin
      files compiled with gcc in using sql_print_error, the changes
      related to the audit log plugin are not backported.
      03940a7b
  16. 23 Jul, 2013 1 commit