1. 28 Aug, 2013 3 commits
    • Raghav Kapoor's avatar
      BUG#17294150 - POTENTIAL CRASH DUE TO BUFFER OVERRUN IN SSL ERROR HANDLING CODE · efb6a1d0
      Raghav Kapoor authored
      
      BACKGROUND:
      A missing comma in the ssl_error_string[] array in
      viosslfactories.c can cause a buffer overrun, and potentially a
      crash, in the SSL error handling code.
      
      ANALYSIS:
      Found by code inspection.
      
      FIX:
      Added the missing comma to the ssl_error_string[] array in
      viosslfactories.c.
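      
      The failure mode is generic C/C++ behavior: without the comma, two
      adjacent string literals are concatenated into a single array
      element, so ssl_error_string[] ends up one entry shorter than the
      range of error codes used to index it. A minimal sketch of the
      pattern (the strings below are illustrative, not the actual
      contents of viosslfactories.c):
      
        #include <cstdio>
      
        // Hypothetical error-string table, NOT the real ssl_error_string[].
        static const char *error_strings[] = {
          "No error",
          "Unable to get certificate"   // <-- missing comma: this literal and
                                        //     the next one are merged by the
                                        //     compiler into ONE array element
          "Unable to get private key",
          "Private key does not match the certificate public key",
        };
      
        int main() {
          // The table is silently one element shorter than intended
          // (3 entries instead of 4), so indexing with the highest error
          // code reads past the end of the array.
          std::printf("entries: %zu\n",
                      sizeof(error_strings) / sizeof(error_strings[0]));
          return 0;
        }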
      efb6a1d0
    • Raghav Kapoor's avatar
      BUG#17294150 - POTENTIAL CRASH DUE TO BUFFER OVERRUN IN SSL ERROR HANDLING CODE · c53cad81
      Raghav Kapoor authored
      
      BACKGROUND:
      A missing comma in the ssl_error_string[] array in
      viosslfactories.c can cause a buffer overrun, and potentially a
      crash, in the SSL error handling code.
      
      ANALYSIS:
      Found by code inspection.
      
      FIX:
      Added the missing comma to the ssl_error_string[] array in
      viosslfactories.c.
      c53cad81
    • Neeraj Bisht's avatar
      Bug#16346241 - SERVER CRASH IN ITEM_PARAM::QUERY_VAL_STR · d4b4c827
      Neeraj Bisht authored
      Problem:-
      The second execution of a prepared statement for a query with a
      parameter in the LIMIT clause causes an assert when using
      connectors (e.g., Connector C).
      
      
      Analysis:-
      In a prepared statement, LIMIT parameters can be specified using
      '?' markers, and the values for these parameters can be supplied
      when the prepared statement is executed.
      
      Passing string, float, or double values for the LIMIT clause works
      fine from the command-line client. That is because, when the LIMIT
      parameter value is set from a user variable, the value is converted
      to an integer.
      
      However, when a prepared statement is executed from other
      interfaces, such as the Java connectors or C applications, the
      parameter values are sent to the server with the execute command.
      Each item in the command carries a value and a data TYPE. So, while
      setting the parameter values from this command, each parameter is
      set with the same data type as was passed.
      Here we have logic that converts the value and changes the state
      and item_type if the parameter is part of a LIMIT clause and its
      item_type is not INT.
      But when we reset this parameter we keep the item_type and only
      change the state. So on the second execution we have the old
      item_type while the state has been changed, which makes us use a
      string-type value in Item_param::query_val_str(). This causes an
      assert.
      
      Fix:
      Instead of checking the item_type of the parameter, check the
      state of the parameter, since the state value is reset every time
      the statement is executed.
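      
      For reference, this is roughly how a connector or C application
      reaches the code path described above: the parameter value is sent
      with the execute command together with its data type, here
      deliberately a string. A minimal sketch using the MySQL C API
      (assumes an already-connected MYSQL handle `conn` and an existing
      table t1 with column c1; error handling omitted):
      
        #include <mysql.h>
        #include <cstring>
      
        // Executes "SELECT ... LIMIT ?" twice with a string-typed parameter,
        // the scenario that triggered the assert on the second execution.
        void run_twice(MYSQL *conn) {
          MYSQL_STMT *stmt = mysql_stmt_init(conn);
          const char *query = "SELECT c1 FROM t1 LIMIT ?";
          mysql_stmt_prepare(stmt, query, std::strlen(query));
      
          char value[] = "2";                    // LIMIT value sent as a string
          unsigned long length = std::strlen(value);
      
          MYSQL_BIND bind;
          std::memset(&bind, 0, sizeof(bind));
          bind.buffer_type = MYSQL_TYPE_STRING;  // the data type travels with the value
          bind.buffer = value;
          bind.buffer_length = sizeof(value);
          bind.length = &length;
          mysql_stmt_bind_param(stmt, &bind);
      
          mysql_stmt_execute(stmt);  // first execution: value converted, state changed
          mysql_stmt_execute(stmt);  // second execution: the one that used to assert
          mysql_stmt_close(stmt);
        }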
      d4b4c827
  2. 27 Aug, 2013 1 commit
  3. 26 Aug, 2013 3 commits
    • Hery Ramilison's avatar
      Empty version change upmerge · c8cc4fc7
      Hery Ramilison authored
      c8cc4fc7
    • hery.ramilison@oracle.com's avatar
      242c82b6
    • Dmitry Lenev's avatar
      Fix for bug #17356954 "CANNOT USE SAVEPOINTS AFTER ER_LOCK_DEADLOCK OR ER_LOCK_WAIT_TIMEOUT". · 45824782
      Dmitry Lenev authored
      
      The problem was that, after the changes from the fixes for
      bug 14188793 "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR STATUS
      OF ROLLBACKED TRANSACTION" and bug 17054007 "TRANSACTION IS NOT
      FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK", the implicit
      rollback of a transaction that occurred on ER_LOCK_DEADLOCK
      (and on ER_LOCK_WAIT_TIMEOUT if the innodb_rollback_on_timeout
      option was set) did not start a new transaction in @@autocommit=1
      mode.
      
      Although this behavior is consistent with that of an explicit
      ROLLBACK, it broke user expectations and backward compatibility
      assumptions.
      
      This patch fixes the problem by reverting to starting a new
      transaction in 5.5/5.6.
      
      The plan is to keep the new behavior in trunk, so the code change
      from this patch is to be null-merged there.
      45824782
  4. 23 Aug, 2013 8 commits
    • Praveenkumar Hulakund's avatar
      Bug#11765252 - READ OF FREED MEMORY WHEN "USE DB" AND "SHOW PROCESSLIST" · 45daf55a
      Praveenkumar Hulakund authored
      
      Follow-up patch, addressing a pb2 test failure.
      45daf55a
    • mysql-builder@oracle.com's avatar
      No commit message · d8d64f90
      mysql-builder@oracle.com authored
      No commit message
      d8d64f90
    • Neeraj Bisht's avatar
      Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS · 0cf9f5e7
      Neeraj Bisht authored
      Problem:-
      In a procedure, when we compare the value of a select query with an
      IN clause and the two have different collations, we get an error on
      the first execution and an assert on the second.
      The procedure contains a query like
      set @x = ((select a from t1) in (select d from t2));    <--- proc1
      where (select a from t1) is sel1 and (select d from t2) is sel2.
      
      Analysis:-
      When we execute proc1 the first time, while resolving the fields of
      the user variable we call Item_in_subselect::fix_fields, which
      resolves sel2. There, in Item_in_subselect::select_transformer, we
      evaluate the left expression (sel1) and store it in an Item_cache_*
      object (to avoid re-evaluating it many times during subquery
      execution) via an Item_in_optimizer object.
      While evaluating the left expression we prepare sel1.
      After that, Item_in_subselect::select_transformer() injects a new
      condition into sel2 which compares t2.d with sel1 (which is cached
      in the Item_in_optimizer).
      
      Later, while checking the collations in agg_item_collations(), we
      get an error and clean up the item. During this cleanup we also
      cleaned up the cached value in the Item_in_optimizer object.
      
      When we execute the procedure the second time, the injected
      condition for sel2 is still there, but in setup_cond() we cannot
      find the referenced item, because it was destroyed during item
      cleanup. So we assert.
      
      
      Solution:-
      We should not clean up the cached value in the Item_in_optimizer
      object if the condition has already been injected into the
      subselect.
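      
      A deliberately simplified sketch of that cleanup rule (the names
      below only mirror this description and are hypothetical, not the
      actual server classes): once the transformed condition in sel2
      references the cached left expression, that cache must survive
      per-execution cleanup.
      
        // Hypothetical simplification of the cleanup rule described above.
        struct LeftExprCache {            // stands in for the Item_cache_* value
          bool valid = false;
        };
      
        struct InOptimizer {              // stands in for Item_in_optimizer
          LeftExprCache *cache = nullptr;
          bool condition_injected = false;  // set once select_transformer()
                                            // has rewritten sel2
          void cleanup() {
            // The old behavior cleared the cache unconditionally; the injected
            // condition in sel2 then referenced a destroyed value on the next
            // execution. Only clear it while nothing else refers to it.
            if (!condition_injected && cache)
              cache->valid = false;
          }
        };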
      0cf9f5e7
    • Neeraj Bisht's avatar
      Bug#17029399 - CRASH IN ITEM_REF::FIX_FIELDS WITH TRIGGER ERRORS · 4f0e7c03
      Neeraj Bisht authored
      Problem:-
      In a procedure, when we compare the value of a select query with an
      IN clause and the two have different collations, we get an error on
      the first execution and an assert on the second.
      The procedure contains a query like
      set @x = ((select a from t1) in (select d from t2));    <--- proc1
      where (select a from t1) is sel1 and (select d from t2) is sel2.
      
      Analysis:-
      When we execute proc1 the first time, while resolving the fields of
      the user variable we call Item_in_subselect::fix_fields, which
      resolves sel2. There, in Item_in_subselect::select_transformer, we
      evaluate the left expression (sel1) and store it in an Item_cache_*
      object (to avoid re-evaluating it many times during subquery
      execution) via an Item_in_optimizer object.
      While evaluating the left expression we prepare sel1.
      After that, Item_in_subselect::select_transformer() injects a new
      condition into sel2 which compares t2.d with sel1 (which is cached
      in the Item_in_optimizer).
      
      Later, while checking the collations in agg_item_collations(), we
      get an error and clean up the item. During this cleanup we also
      cleaned up the cached value in the Item_in_optimizer object.
      
      When we execute the procedure the second time, the injected
      condition for sel2 is still there, but in setup_cond() we cannot
      find the referenced item, because it was destroyed during item
      cleanup. So we assert.
      
      
      Solution:-
      We should not clean up the cached value in the Item_in_optimizer
      object if the condition has already been injected into the
      subselect.
      
      4f0e7c03
    • mysql-builder@oracle.com's avatar
      No commit message · 29342a3c
      mysql-builder@oracle.com authored
      No commit message
      29342a3c
    • mysql-builder@oracle.com's avatar
      No commit message · 5e2694ea
      mysql-builder@oracle.com authored
      No commit message
      5e2694ea
    • Ashish Agarwal's avatar
      WL#7076: Backporting wl6715 to support both formats in 5.5, 5.6, 5.7. · d75c58e1
      Ashish Agarwal authored
      d75c58e1
  5. 22 Aug, 2013 2 commits
  6. 21 Aug, 2013 9 commits
  7. 20 Aug, 2013 3 commits
    • Balasubramanian Kandasamy's avatar
      Reverted Release version · 198f3b46
      Balasubramanian Kandasamy authored
      198f3b46
    • Balasubramanian Kandasamy's avatar
      9f4b580d
    • Dmitry Lenev's avatar
      Fix for bug#14188793 - "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR STATUS OF ROLLBACKED TRANSACTION" and bug #17054007 - "TRANSACTION IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK". · b07ec61f
      Dmitry Lenev authored
      
      The problem in the first bug report was that, although a deadlock
      involving metadata locks was reported using the same error code and
      message as an InnoDB deadlock, it did not roll back the transaction
      the way the latter does. This confused users, because in some cases
      the transaction could be restarted immediately after
      ER_LOCK_DEADLOCK, while in other cases a rollback was required.
      
      The problem in the second bug report was that, although an InnoDB
      deadlock caused a transaction rollback in all storage engines, it
      did not release metadata locks. So concurrent DDL on the tables
      used in the transaction was blocked until an implicit or explicit
      COMMIT or ROLLBACK was issued in the connection that hit the InnoDB
      deadlock.
      
      The former issue stemmed from the fact that, when support for
      detecting and reporting metadata lock deadlocks was added, we
      erroneously assumed that InnoDB rolls back only the last statement
      on a deadlock rather than the whole transaction (which is actually
      what happens on an InnoDB lock wait timeout), and so we did not
      implement rollback of transactions on MDL deadlocks.
      
      The latter issue was caused by the fact that rollback of a
      transaction due to deadlock is carried out by setting the
      THD::transaction_rollback_request flag at the point where the
      deadlock is detected and then performing the rollback inside the
      trans_rollback_stmt() call when this flag is set. Since
      trans_rollback_stmt() is not aware of MDL locks, no MDL locks are
      released.
      
      This patch solves these two problems in the following way:
      
      - When an MDL deadlock is detected, transaction rollback is
        requested by setting the THD::transaction_rollback_request flag.
      
      - The code that performs rollback of the transaction when
        THD::transaction_rollback_request is set is moved out of
        trans_rollback_stmt(). Now we handle the rollback request at the
        same level where we call trans_rollback_stmt() and release
        statement/transaction MDL locks (see the sketch below).
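      
      Schematically, the statement-end handling after this patch can be
      pictured as follows. This is a hypothetical, heavily simplified
      sketch; apart from trans_rollback_stmt() and
      THD::transaction_rollback_request, the names are illustrative only
      and are not the actual server functions:
      
        // Hypothetical sketch of the statement-end path described above.
        struct THD_sketch {
          bool transaction_rollback_request = false;  // set where the deadlock
                                                      // is detected
        };
      
        void trans_rollback_stmt_sketch(THD_sketch *) {
          // Rolls back the statement transaction only; knows nothing about
          // MDL, and no longer handles transaction_rollback_request.
        }
      
        void rollback_whole_transaction(THD_sketch *) { /* engine rollback */ }
        void release_transactional_mdl_locks(THD_sketch *) { /* MDL release */ }
      
        void end_of_statement(THD_sketch *thd) {
          trans_rollback_stmt_sketch(thd);
          // Handled at the same level where statement/transaction MDL locks
          // are released:
          if (thd->transaction_rollback_request) {
            rollback_whole_transaction(thd);
            release_transactional_mdl_locks(thd);
            thd->transaction_rollback_request = false;
          }
        }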
      b07ec61f
  8. 19 Aug, 2013 1 commit
  9. 16 Aug, 2013 4 commits
    • Balasubramanian Kandasamy's avatar
      dummy commit · a645ecc9
      Balasubramanian Kandasamy authored
      a645ecc9
    • Balasubramanian Kandasamy's avatar
      7e003829
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 780babc0
      Marko Mäkelä authored
      780babc0
    • Marko Mäkelä's avatar
      Bug#17312846 CHECK TABLE ASSERTION FAILURE DICT_TABLE_GET_FORMAT(CLUST_INDEX->TABLE) >= 1 · 55129f67
      Marko Mäkelä authored
      
      The function row_sel_sec_rec_is_for_clust_rec() was incorrectly
      preparing to compare a NULL column prefix in a secondary index with a
      non-NULL column in a clustered index.
      
      This can trigger an assertion failure in the InnoDB Plugin of 5.1
      and later. In the built-in InnoDB of MySQL 5.1 and earlier, we
      would apparently only do some extra work, trimming the clustered
      index field for the comparison.
      
      The code might actually have worked properly apart from this debug
      assertion failure. It is merely doing some extra work in fetching a
      BLOB column and then comparing it to NULL (which returns the same
      result no matter what the BLOB contents are).
      
      While the test case involves CHECK TABLE, this could theoretically
      occur during any read that uses a secondary index on a column prefix
      of a column that can be NULL.
      
      rb#3101 approved by Mattias Jonsson
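      
      Conceptually, the fix amounts to a guard like the following before
      the comparison. This is a hypothetical, heavily simplified
      illustration; none of the names below are the actual InnoDB
      identifiers, and prefix trimming and collation handling are
      omitted:
      
        #include <cstddef>
        #include <cstring>
      
        // Hypothetical stand-in for an index field value; not a real InnoDB type.
        struct FieldValue {
          const unsigned char *data = nullptr;  // nullptr means SQL NULL
          std::size_t len = 0;
        };
      
        // Sketch of the comparison step described above: if the secondary-index
        // column prefix is NULL, do not fetch or trim the clustered column
        // (which may be an externally stored BLOB).
        bool sec_prefix_matches_clust(const FieldValue &sec_prefix,
                                      const FieldValue &clust_col) {
          if (sec_prefix.data == nullptr)
            return clust_col.data == nullptr;   // nothing to fetch or compare
          return clust_col.data != nullptr && sec_prefix.len <= clust_col.len &&
                 std::memcmp(sec_prefix.data, clust_col.data, sec_prefix.len) == 0;
        }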
      55129f67
  10. 15 Aug, 2013 2 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · dfed175c
      Marko Mäkelä authored
      dfed175c
    • Marko Mäkelä's avatar
      Bug#17302896 DOUBLE PURGE ON ROLLBACK OF UPDATING A DELETE-MARKED RECORD · 5163c4a1
      Marko Mäkelä authored
      There was a race condition in the rollback of TRX_UNDO_UPD_DEL_REC.
      
      Once row_undo_mod_clust() has rolled back the changes by the rolling-back
      transaction, it attempts to purge the delete-marked record, if possible, in a
      separate mini-transaction.
      
      However, row_undo_mod_remove_clust_low() fails to check whether the
      DB_TRX_ID of the record that it found after repositioning the
      cursor is still the same.
      If it is not, it means that the record was purged and another record was
      inserted in its place.
      
      So, the rollback would have performed an incorrect purge, breaking the
      locking rules and causing corruption.
      
      The problem was found by creating a table that contains a unique
      secondary index and a primary key, and two threads running REPLACE
      with only one value for the unique column, so that the uniqueness
      constraint would be violated all the time, leading to statement
      rollback.
      
      This bug exists in all InnoDB versions (I checked MySQL 3.23.53).
      It has become easier to repeat in 5.5 and 5.6 thanks to scalability
      improvements and a dedicated purge thread.
      
      rb#3085 approved by Jimmy Yang
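      
      The essence of the fix is to re-validate the record after
      repositioning the cursor in the separate mini-transaction and to
      purge it only if its DB_TRX_ID still belongs to the rolling-back
      transaction. A hypothetical, simplified sketch of that check (not
      the actual row_undo_mod_remove_clust_low() code):
      
        #include <cstdint>
      
        // Hypothetical stand-in for the clustered-index record found after
        // repositioning the persistent cursor; not a real InnoDB structure.
        struct ClustRecord {
          std::uint64_t db_trx_id;   // DB_TRX_ID system column
          bool delete_marked;
        };
      
        // Sketch: only remove the delete-marked record if it is still the one
        // written by the rolling-back transaction; otherwise it was purged and
        // another record was inserted in its place.
        bool may_remove_after_rollback(const ClustRecord &rec,
                                       std::uint64_t rolling_back_trx_id) {
          if (!rec.delete_marked)
            return false;
          if (rec.db_trx_id != rolling_back_trx_id)
            return false;            // a different record took its place: skip
          return true;
        }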
      5163c4a1
  11. 14 Aug, 2013 2 commits
  12. 12 Aug, 2013 2 commits
    • Anirudh Mangipudi's avatar
      Bug #16776528 RACE CONDITION CAN CAUSE MYSQLD TO REMOVE SOCKET FILE ERRANTLY · 638dcdc3
      Anirudh Mangipudi authored
      Problem Description:
      A mysqld_safe instance is started, and InnoDB crash recovery
      begins, which takes a few seconds to complete. While this crash
      recovery is in progress, another mysqld_safe instance is started
      with the same server startup parameters. Since mysqld's pid file is
      absent during crash recovery, the second instance assumes there is
      no other process and tries to acquire a lock on the ibdata files in
      the datadir. This step fails, and the second instance keeps
      retrying 100 times, each with a delay of 1 second. After the 100
      attempts the server goes down, but on the way down it hits the
      mysqld_safe script's cleanup section and, without any check,
      blindly deletes the socket and pid files. Since no lock is placed
      on the socket file, it gets deleted.
      
      Solution:
      We create a mysqld_safe.pid file in the datadir, which protects the
      present server instance's resources by storing mysqld_safe's
      process id in it. We check whether a mysqld_safe.pid file already
      exists in the datadir. If it does, we check whether the pid it
      contains is an active pid. If it is, the script logs an error
      saying "A mysqld_safe instance is already running". Otherwise it
      logs the present mysqld_safe's pid into the mysqld_safe.pid file.
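      
      The actual change lives in the mysqld_safe shell script, but the
      check it performs is simple enough to sketch in C++ for clarity.
      The POSIX kill(pid, 0) call probes whether a process with the given
      pid exists without sending a signal; the file name mysqld_safe.pid
      is as described above, everything else here is illustrative:
      
        #include <cerrno>
        #include <fstream>
        #include <string>
        #include <signal.h>      // kill()
        #include <sys/types.h>   // pid_t
        #include <unistd.h>      // getpid()
      
        // Sketch of the mysqld_safe.pid logic described above (the real
        // implementation is in the mysqld_safe shell script).
        bool another_instance_running(const std::string &pidfile) {
          std::ifstream in(pidfile);
          pid_t pid = 0;
          if (!(in >> pid) || pid <= 0)
            return false;                      // no usable pid file
          // kill(pid, 0) sends no signal; it only checks whether the pid exists.
          return kill(pid, 0) == 0 || errno == EPERM;
        }
      
        void claim_instance(const std::string &pidfile) {
          if (another_instance_running(pidfile)) {
            // log "A mysqld_safe instance is already running" and exit
            return;
          }
          std::ofstream out(pidfile, std::ios::trunc);
          out << getpid() << '\n';             // record this instance's pid
        }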
      638dcdc3
    • Anirudh Mangipudi's avatar
      Bug #16776528 RACE CONDITION CAN CAUSE MYSQLD TO REMOVE SOCKET FILE ERRANTLY · 8977c8fa
      Anirudh Mangipudi authored
      Problem Description:
      A mysqld_safe instance is started, and InnoDB crash recovery
      begins, which takes a few seconds to complete. While this crash
      recovery is in progress, another mysqld_safe instance is started
      with the same server startup parameters. Since mysqld's pid file is
      absent during crash recovery, the second instance assumes there is
      no other process and tries to acquire a lock on the ibdata files in
      the datadir. This step fails, and the second instance keeps
      retrying 100 times, each with a delay of 1 second. After the 100
      attempts the server goes down, but on the way down it hits the
      mysqld_safe script's cleanup section and, without any check,
      blindly deletes the socket and pid files. Since no lock is placed
      on the socket file, it gets deleted.
      
      Solution:
      We create a mysqld_safe.pid file in the datadir, which protects the
      present server instance's resources by storing mysqld_safe's
      process id in it. We check whether a mysqld_safe.pid file already
      exists in the datadir. If it does, we check whether the pid it
      contains is an active pid. If it is, the script logs an error
      saying "A mysqld_safe instance is already running". Otherwise it
      logs the present mysqld_safe's pid into the mysqld_safe.pid file.
      8977c8fa