1. 23 Aug, 2013 7 commits
    • Praveenkumar Hulakund's avatar
    • unknown's avatar
      No commit message · 7ab5bcc1
      unknown authored
      No commit message
      7ab5bcc1
    • Neeraj Bisht's avatar
      Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS · 6773c60f
      Neeraj Bisht authored
      Problem:-
      In a stored procedure, when the value of a SELECT query is compared
      with an IN subquery and the two have different collations, the first
      execution raises an error and the second execution hits an assertion.
      The procedure contains a query like:
      set @x = ((select a from t1) in (select d from t2));<---proc1
                    sel1                   sel2
      
      Analysis:-
      When we execute proc1 for the first time, while resolving the fields
      of the user variable we call Item_in_subselect::fix_fields(), which
      resolves sel2. There, in Item_in_subselect::select_transformer(), we
      evaluate the left expression (sel1) and store it in an Item_cache_*
      object (to avoid re-evaluating it many times during subquery
      execution) by creating an Item_in_optimizer object.
      While evaluating the left expression we prepare sel1. After that,
      Item_in_subselect::select_transformer() injects a new condition into
      sel2 which compares t2.d with sel1 (cached in the Item_in_optimizer).
      
      Later, while checking collations in agg_item_collations(), we get an
      error and clean up the item. This cleanup also destroys the cached
      value in the Item_in_optimizer object.
      
      When we execute the procedure a second time, the injected condition
      for sel2 is still present, but setup_cond() cannot find the referenced
      item because it was destroyed during the earlier cleanup, so we hit
      the assertion.
      
      
      Solution:-
      Do not clean up the cached value in the Item_in_optimizer object if
      the condition has already been injected into the subselect.
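      
      A minimal repro sketch of the scenario described above (table, column,
      and collation choices are hypothetical; any pair of incompatible
      collations will do):
      
          CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_general_ci);
          CREATE TABLE t2 (d VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_unicode_ci);
          INSERT INTO t1 VALUES ('x');
          INSERT INTO t2 VALUES ('x');
          DELIMITER //
          CREATE PROCEDURE proc1()
          BEGIN
            SET @x = ((SELECT a FROM t1) IN (SELECT d FROM t2));
          END//
          DELIMITER ;
          CALL proc1();   -- first call: "Illegal mix of collations" error
          CALL proc1();   -- second call: hit the assertion before this fix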
      
      6773c60f
    • Neeraj Bisht's avatar
      Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS · 356b6414
      Neeraj Bisht authored
      Problem:-
      In a stored procedure, when the value of a SELECT query is compared
      with an IN subquery and the two have different collations, the first
      execution raises an error and the second execution hits an assertion.
      The procedure contains a query like:
      set @x = ((select a from t1) in (select d from t2));<---proc1
                    sel1                   sel2
      
      Analysis:-
      When we execute proc1 for the first time, while resolving the fields
      of the user variable we call Item_in_subselect::fix_fields(), which
      resolves sel2. There, in Item_in_subselect::select_transformer(), we
      evaluate the left expression (sel1) and store it in an Item_cache_*
      object (to avoid re-evaluating it many times during subquery
      execution) by creating an Item_in_optimizer object.
      While evaluating the left expression we prepare sel1. After that,
      Item_in_subselect::select_transformer() injects a new condition into
      sel2 which compares t2.d with sel1 (cached in the Item_in_optimizer).
      
      Later, while checking collations in agg_item_collations(), we get an
      error and clean up the item. This cleanup also destroys the cached
      value in the Item_in_optimizer object.
      
      When we execute the procedure a second time, the injected condition
      for sel2 is still present, but setup_cond() cannot find the referenced
      item because it was destroyed during the earlier cleanup, so we hit
      the assertion.
      
      
      Solution:-
      Do not clean up the cached value in the Item_in_optimizer object if
      the condition has already been injected into the subselect.
      
      
      356b6414
    • unknown's avatar
      No commit message · 5d75a4e6
      unknown authored
      No commit message
      5d75a4e6
    • unknown's avatar
      No commit message · d6825f49
      unknown authored
      No commit message
      d6825f49
    • Ashish Agarwal's avatar
      WL#7076: Backporting wl6715 to support both formats · 292aa926
      Ashish Agarwal authored
               in 5.5, 5.6, 5.7.
      292aa926
  2. 22 Aug, 2013 2 commits
  3. 21 Aug, 2013 9 commits
  4. 20 Aug, 2013 3 commits
    • Balasubramanian Kandasamy's avatar
      Reverted Release version · fcc00114
      Balasubramanian Kandasamy authored
      fcc00114
    • Balasubramanian Kandasamy's avatar
      bebc9ae5
    • Dmitry Lenev's avatar
      Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR · fc2c6692
      Dmitry Lenev authored
      STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION
      IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK".
      
      The problem in the first bug report was that, although a deadlock
      involving metadata locks was reported using the same error code and
      message as an InnoDB deadlock, it did not roll back the transaction
      the way the latter does. This confused users, since after
      ER_LOCK_DEADLOCK the transaction could sometimes be restarted
      immediately, while in other cases an explicit rollback was required.
      
      The problem in the second bug report was that, although an InnoDB
      deadlock caused the transaction to be rolled back in all storage
      engines, it did not release metadata locks. As a result, concurrent
      DDL on the tables used in the transaction was blocked until an
      implicit or explicit COMMIT or ROLLBACK was issued in the connection
      that hit the InnoDB deadlock.
      
      The former issue stemmed from the fact that, when support for
      detecting and reporting metadata lock deadlocks was added, we
      erroneously assumed that InnoDB rolls back only the last statement on
      deadlock rather than the whole transaction (which is actually what
      happens on an InnoDB lock wait timeout), and so we did not implement
      rollback of transactions on MDL deadlocks.
      
      The latter issue was caused by the fact that rollback of a transaction
      due to deadlock is carried out by setting the
      THD::transaction_rollback_request flag at the point where the deadlock
      is detected and then performing the rollback inside the
      trans_rollback_stmt() call when this flag is set. Since
      trans_rollback_stmt() is not aware of MDL locks, no MDL locks were
      released.
      
      This patch solves these two problems in the following way:
      
      - When an MDL deadlock is detected, transaction rollback is requested
        by setting the THD::transaction_rollback_request flag.
      
      - The code performing rollback of the transaction when
        THD::transaction_rollback_request is set has been moved out of
        trans_rollback_stmt(). We now handle the rollback request at the
        same level at which we call trans_rollback_stmt() and release
        statement/transaction MDL locks.
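      
      One way to observe the new behaviour, sketched with hypothetical
      tables t1 and t2 and two client connections (which connection is
      chosen as the deadlock victim may vary):
      
          -- connection 1
          BEGIN;
          SELECT * FROM t2;                -- holds a shared metadata lock on t2
      
          -- connection 2
          LOCK TABLES t1 WRITE, t2 WRITE;  -- locks t1, then blocks waiting for t2
      
          -- connection 1
          SELECT * FROM t1;                -- waits for connection 2, closing the
                                           -- cycle; fails with ER_LOCK_DEADLOCK
      
          -- With this patch the whole transaction in connection 1 is rolled
          -- back and its metadata locks are released, so connection 2's
          -- LOCK TABLES can proceed.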
      fc2c6692
  5. 19 Aug, 2013 1 commit
  6. 16 Aug, 2013 4 commits
    • Balasubramanian Kandasamy's avatar
      dummy commit · 1e904e36
      Balasubramanian Kandasamy authored
      1e904e36
    • Balasubramanian Kandasamy's avatar
      f1e23d73
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 553988a2
      Marko Mäkelä authored
      553988a2
    • Marko Mäkelä's avatar
      Bug#17312846 CHECK TABLE ASSERTION FAILURE · fb2a2d25
      Marko Mäkelä authored
      DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1
      
      The function row_sel_sec_rec_is_for_clust_rec() was incorrectly
      preparing to compare a NULL column prefix in a secondary index with a
      non-NULL column in a clustered index.
      
      This can trigger an assertion failure in the InnoDB Plugin for 5.1 and
      later. In the built-in InnoDB of MySQL 5.1 and earlier, we would
      apparently only do some extra work, trimming the clustered index field
      for the comparison.
      
      The code might actually have worked properly apart from this debug
      assertion failure. It merely does some extra work, fetching a BLOB
      column and then comparing it to NULL (which returns the same result
      no matter what the BLOB contains).
      
      While the test case involves CHECK TABLE, this could theoretically
      occur during any read that uses a secondary index on a column prefix
      of a column that can be NULL.
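      
      A sketch of the kind of schema involved (names and prefix length are
      hypothetical): a secondary index on a prefix of a nullable column,
      with at least one row where that column is NULL:
      
          CREATE TABLE t1 (
            id INT PRIMARY KEY,
            b  BLOB,                 -- nullable column
            KEY k_b (b(10))          -- secondary index on a column prefix
          ) ENGINE=InnoDB;
          INSERT INTO t1 VALUES (1, NULL), (2, REPEAT('x', 100));
          CHECK TABLE t1;            -- compares secondary index entries against
                                     -- the clustered index, exercising this path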
      
      rb#3101 approved by Mattias Jonsson
      fb2a2d25
  7. 15 Aug, 2013 2 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 49ffda09
      Marko Mäkelä authored
      49ffda09
    • Marko Mäkelä's avatar
      Bug#17302896 DOUBLE PURGE ON ROLLBACK OF UPDATING A DELETE-MARKED RECORD · 31809607
      Marko Mäkelä authored
      There was a race condition in the rollback of TRX_UNDO_UPD_DEL_REC.
      
      Once row_undo_mod_clust() has rolled back the changes by the rolling-back
      transaction, it attempts to purge the delete-marked record, if possible, in a
      separate mini-transaction.
      
      However, row_undo_mod_remove_clust_low() fails to check whether the
      DB_TRX_ID of the record it finds after repositioning the cursor is
      still the same. If it is not, the record was already purged and
      another record was inserted in its place.
      
      So, the rollback would have performed an incorrect purge, breaking the
      locking rules and causing corruption.
      
      The problem was found by creating a table that contains a unique
      secondary index and a primary key, and two threads running REPLACE
      with only one value for the unique column, so that the uniqueness
      constraint would be violated all the time, leading to statement
      rollback.
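      
      A sketch of that setup (names are hypothetical); the REPLACE statement
      is run in a loop from two connections at once so that duplicate-key
      errors and statement rollback keep racing with purge:
      
          CREATE TABLE t1 (
            id INT AUTO_INCREMENT PRIMARY KEY,
            u  INT NOT NULL,
            UNIQUE KEY uk (u)
          ) ENGINE=InnoDB;
      
          -- run concurrently from two connections
          REPLACE INTO t1 (u) VALUES (1);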
      
      This bug exists in all InnoDB versions (I checked MySQL 3.23.53).
      It has become easier to repeat in 5.5 and 5.6 thanks to scalability
      improvements and a dedicated purge thread.
      
      rb#3085 approved by Jimmy Yang
      31809607
  8. 14 Aug, 2013 2 commits
  9. 12 Aug, 2013 5 commits
    • Anirudh Mangipudi's avatar
      Bug #16776528 RACE CONDITION CAN CAUSE MYSQLD TO REMOVE SOCKET FILE ERRANTLY · 793b5835
      Anirudh Mangipudi authored
      Problem Description:
      A mysqld_safe instance is started and InnoDB crash recovery begins,
      taking a few seconds to complete. While this crash recovery is in
      progress, another mysqld_safe instance is started with the same server
      startup parameters. Since mysqld's pid file is absent during crash
      recovery, the second instance assumes there is no other server process
      and tries to acquire a lock on the ibdata files in the datadir. This
      step fails, and the second instance retries 100 times with a delay of
      one second between attempts. After the 100 attempts the server goes
      down, but on the way down it reaches the mysqld_safe script's cleanup
      section, which blindly deletes the socket and pid files without any
      check. Since no lock is placed on the socket file, it gets deleted.
      
      Solution:
      We create a mysqld_safe.pid file in the datadir, which protects the
      running server instance's resources by storing mysqld_safe's process
      id in it. We check whether a mysqld_safe.pid file already exists in
      the datadir. If it does, we check whether the pid it contains belongs
      to an active process. If so, the script logs an error saying
      "A mysqld_safe instance is already running". Otherwise it writes the
      current mysqld_safe's pid into the mysqld_safe.pid file.
      793b5835
    • Anirudh Mangipudi's avatar
      Bug #16776528 RACE CONDITION CAN CAUSE MYSQLD TO REMOVE SOCKET FILE ERRANTLY · 8757f395
      Anirudh Mangipudi authored
      Problem Description:
      A mysqld_safe instance is started and InnoDB crash recovery begins,
      taking a few seconds to complete. While this crash recovery is in
      progress, another mysqld_safe instance is started with the same server
      startup parameters. Since mysqld's pid file is absent during crash
      recovery, the second instance assumes there is no other server process
      and tries to acquire a lock on the ibdata files in the datadir. This
      step fails, and the second instance retries 100 times with a delay of
      one second between attempts. After the 100 attempts the server goes
      down, but on the way down it reaches the mysqld_safe script's cleanup
      section, which blindly deletes the socket and pid files without any
      check. Since no lock is placed on the socket file, it gets deleted.
      
      Solution:
      We create a mysqld_safe.pid file in the datadir, which protects the
      running server instance's resources by storing mysqld_safe's process
      id in it. We check whether a mysqld_safe.pid file already exists in
      the datadir. If it does, we check whether the pid it contains belongs
      to an active process. If so, the script logs an error saying
      "A mysqld_safe instance is already running". Otherwise it writes the
      current mysqld_safe's pid into the mysqld_safe.pid file.
      8757f395
    • Mattias Jonsson's avatar
      Bug#16860588:CRASH WITH CREATE TABLE ... LIKE .. · c08f20d5
      Mattias Jonsson authored
      AND PARTITION VALUES IN (NULL)
      
      The code assumed there was at least one list element in a LIST
      partitioned table.
      
      Fixed by checking the number of list elements.
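      
      A sketch of the crashing statement pair described in the title (names
      are hypothetical):
      
          CREATE TABLE t1 (a INT)
            PARTITION BY LIST (a)
            (PARTITION p0 VALUES IN (NULL));   -- the only list "value" is NULL
          CREATE TABLE t2 LIKE t1;             -- crashed here before the fix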
      c08f20d5
    • Mattias Jonsson's avatar
      Bug#17228383: VALGRIND WARNING IN IBUF_DELETE_REC · 8808c6b3
      Mattias Jonsson authored
      Since the mtr_t struct is marked as invalid in a DEBUG_VALGRIND build
      during mtr_commit(), checking mtr->inside_ibuf causes this warning.
      Also, since mtr->inside_ibuf cannot be set in mtr_commit() (there is
      an assertion for this) and mtr->state is set to MTR_COMMITTED, the
      ut_ad(!ibuf_inside(&mtr)) check is not needed when
      ut_ad(mtr.state == MTR_COMMITTED) is also checked.
      8808c6b3
    • Neeraj Bisht's avatar
      Bug#16614004 - CRASH AFTER READING FREED MEMORY AFTER DOING DDL · 7b099fd9
      Neeraj Bisht authored
              	IN STORED ROUTINE
      
      Inside a loop in a stored procedure, we create a partitioned table.
      The CREATE statement is thus treated as a prepared statement: it is
      prepared once and then executed on each iteration, so its Lex is
      reused many times. This Lex contains a part_info member, which
      describes how the partitions should be laid out, including the
      partitioning function. Each execution of the CREATE does this, in
      open_table_from_share():
          
             tmp= mysql_unpack_partition(thd, share->partition_info_str,
                                         share->partition_info_str_len,
                                         outparam, is_create_table,
                                         share->default_part_db_type,
                                         &work_part_info_used);
          ...
             tmp= fix_partition_func(thd, outparam, is_create_table);
      The first line calls init_lex_with_single_table() which creates
      a TABLE_LIST, necessary for the "field fixing" which will be
      done by the second line; this is how it is created:
           if ((!(table_ident= new Table_ident(thd,
                                               table->s->db,
                                               table->s->table_name, TRUE))) ||
               (!(table_list= select_lex->add_table_to_list(thd,
                                                            table_ident,
                                                            NULL,
                                                             0))))
                return TRUE;
        it is allocated in the execution memory root.
      Then the partitioning function ("id", stored in Lex->part_info) is
      fixed, which calls Item_ident::fix_fields(); this resolves "id"
      against the table_list above and stores a pointer to that table_list
      in the item's cached_table.
      The table is created, later it is dropped by another statement, and
      then we execute the prepared CREATE again. This reuses the Lex, thus
      also its part_info, and thus also the item representing the
      partitioning function (part_info is cloned, but it is a shallow
      clone). CREATE wants to fix the item again (which is normal, every
      execution fixes items again); fix_fields() sees that the cached_table
      pointer is set and picks up the pointed-to table_list. But that object
      no longer exists (it was allocated in the execution memory root of the
      previous execution, so it has been freed), so we access invalid
      memory.
      
      The solution: when creating the table_list, mark that it
      cannot be cached.
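      
      A sketch of the scenario (names are hypothetical): the CREATE inside
      the loop is prepared once and re-executed, while the table it refers
      to is dropped in between:
      
          DELIMITER //
          CREATE PROCEDURE p1()
          BEGIN
            DECLARE i INT DEFAULT 0;
            WHILE i < 3 DO
              CREATE TABLE t1 (id INT)
                PARTITION BY HASH (id) PARTITIONS 4;  -- prepared once; its Lex
                                                      -- and part_info are reused
              DROP TABLE t1;                          -- table is gone before the
                                                      -- next execution of CREATE
              SET i = i + 1;
            END WHILE;
          END//
          DELIMITER ;
          CALL p1();   -- the second iteration read freed memory before the fix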
      
      7b099fd9
  10. 08 Aug, 2013 1 commit
  11. 07 Aug, 2013 2 commits
    • unknown's avatar
      No commit message · 77cb5952
      unknown authored
      No commit message
      77cb5952
    • Venkatesh Duggirala's avatar
      Bug#16416302 - CRASH WITH LOSSY RBR REPLICATION · e55d4a88
      Venkatesh Duggirala authored
      OF OLD STYLE DECIMALS
      
      Problem: In RBR, the slave is unable to read the row buffer properly
      when the row event contains a MYSQL_TYPE_DECIMAL (old-style decimal)
      column.
      
      Analysis: In RBR, the slave assumes that the master sends metadata for
      column types such as TEXT, BLOB, VARCHAR, old decimal, new decimal,
      FLOAT, and a few others along with the row event. But the master does
      not send this metadata for old-style decimal columns, so the slave
      crashes because the precision of these columns is unknown. The master
      cannot start sending this precision value to the slave, as that would
      break cross-version replication compatibility.
      
      Fix: To avoid the crash, the slave now throws an error if it receives
      an old-style decimal data type. Users should consider converting
      old-style decimal columns to the new decimal data type with an
      ALTER TABLE ... MODIFY COLUMN statement, as described in
      http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-previous-series.html.
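      
      A sketch of the suggested conversion (table name, column name,
      precision, and scale are hypothetical and must be chosen to fit the
      existing data):
      
          ALTER TABLE t1 MODIFY COLUMN amount DECIMAL(10,2);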
      e55d4a88
  12. 31 Jul, 2013 2 commits