1. 18 Oct, 2012 4 commits
    • Neeraj Bisht's avatar
      Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB · 28f3153a
      Neeraj Bisht authored
      Problem:-
      Executing a query that has a subquery with GROUP BY, ORDER BY and a
      BLOB column results in a memory leak.
      
      Analysis:-
      For a subquery that has GROUP BY on a BLOB, ORDER BY on another field,
      and where the BLOB is not a key, we allocate a tmp buffer in copy_field
      to hold the BLOB value. This copy_field can be shared by two JOIN
      objects, so while freeing it we have to make sure it is not deleted twice.
      The double deletion of tmp_table_param.copy_field is handled by two patches.
      
      One by Kostja:
      revid:sp1r-konstantin@mysql.com-20050627101056-55153
      Fix the broken test suite in -debug build.
      
      and the other by Oleksandr:
      revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
      Excluded possibility of tmp_table_param.copy_field double deletion (BUG#14851).
      
      Both of these patches were committed in different branches, and during the
      merge both ended up in the code. The Kostja patch is not needed, however,
      because the Oleksandr patch already handles the double deletion.
      
      
      sql/sql_select.cc:
        Bug#13726751: cleaning up tmp_join here is not necessary, as the code later takes care of cleaning up tmp_join's copy_field.
      28f3153a
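      Illustrative sketch only (this is not the actual sql_select.cc code; the
      types Tmp_param and Join_like are made up): it shows the double-deletion
      hazard described above in miniature. When two JOIN-like objects share one
      copy_field buffer, whoever frees it must also reset both pointers, which
      is why the extra cleanup from the Kostja patch is redundant once the
      Oleksandr patch is in place.

        #include <cstddef>

        // Hypothetical stand-ins for TMP_TABLE_PARAM and JOIN.
        struct Tmp_param {
          int *copy_field = nullptr;   // tmp buffer holding e.g. a BLOB copy

          void cleanup() {
            delete[] copy_field;       // free once ...
            copy_field = nullptr;      // ... and forget the pointer so that a
                                       // second cleanup() is a harmless no-op
          }
        };

        struct Join_like {
          Tmp_param tmp_table_param;
        };

        int main() {
          Join_like join, tmp_join;

          join.tmp_table_param.copy_field = new int[8];
          // A shallow copy: both objects now point at the same buffer.
          tmp_join.tmp_table_param = join.tmp_table_param;

          // Correct teardown: free through one owner, then clear the other
          // copy so its cleanup() cannot delete the same buffer again.
          join.tmp_table_param.cleanup();
          tmp_join.tmp_table_param.copy_field = nullptr;
          tmp_join.tmp_table_param.cleanup();  // safe: pointer already null
          return 0;
        }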
    • Neeraj Bisht's avatar
      Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB · 68df7278
      Neeraj Bisht authored
      Problem:-
      Executing a query that has a subquery with GROUP BY, ORDER BY and a
      BLOB column results in a memory leak.
      
      Analysis:-
      For a subquery that has GROUP BY on a BLOB, ORDER BY on another field,
      and where the BLOB is not a key, we allocate a tmp buffer in copy_field
      to hold the BLOB value. This copy_field can be shared by two JOIN
      objects, so while freeing it we have to make sure it is not deleted twice.
      The double deletion of tmp_table_param.copy_field is handled by two patches.
      
      One by Kostja:
      revid:sp1r-konstantin@mysql.com-20050627101056-55153
      Fix the broken test suite in -debug build.
      
      and the other by Oleksandr:
      revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
      Excluded possibility of tmp_table_param.copy_field double deletion (BUG#14851).
      
      Both of these patches were committed in different branches, and during the
      merge both ended up in the code. The Kostja patch is not needed, however,
      because the Oleksandr patch already handles the double deletion.
      
      
      sql/sql_select.cc:
        Bug#13726751: cleaning up tmp_join here is not necessary, as the code later takes care of cleaning up tmp_join's copy_field.
      68df7278
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 7713fe64
      Marko Mäkelä authored
      7713fe64
    • Marko Mäkelä's avatar
      Bug#14758405: ALTER TABLE: ADDING SERIAL NULL DATATYPE: ASSERTION: · dd0610e1
      Marko Mäkelä authored
      LEN <= SIZEOF(ULONGLONG)
      
      This bug was caught during the WL#6255 ALTER TABLE...ADD COLUMN work in
      MySQL 5.6, but the bug is present in all InnoDB versions that support
      auto-increment columns.
      
      row_search_autoinc_read_column(): When reading the maximum value of
      the auto-increment column, and the column only contains NULL values,
      return 0. This corresponds to the case when the table is empty in
      row_search_max_autoinc().
      
      rb:1415 approved by Sunny Bains
      dd0610e1
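      A minimal sketch of the rule this fix establishes, using made-up names
      (read_max_autoinc, AutoIncCol) rather than the real
      row_search_autoinc_read_column()/row_search_max_autoinc() code: if every
      row stores NULL in the auto-increment column, the computed maximum falls
      back to 0, the same answer an empty table gives.

        #include <cassert>
        #include <cstdint>
        #include <optional>
        #include <vector>

        // Hypothetical column value: the auto-increment column may be NULL.
        using AutoIncCol = std::optional<uint64_t>;

        // Return the maximum auto-increment value seen, or 0 if the table is
        // empty or the column contains only NULLs (the case the fix covers).
        uint64_t read_max_autoinc(const std::vector<AutoIncCol> &rows) {
          uint64_t max_val = 0;
          for (const AutoIncCol &col : rows)
            if (col.has_value() && *col > max_val) max_val = *col;
          return max_val;
        }

        int main() {
          assert(read_max_autoinc({}) == 0);                            // empty table
          assert(read_max_autoinc({std::nullopt, std::nullopt}) == 0);  // only NULLs
          assert(read_max_autoinc({std::nullopt, 41, 7}) == 41);        // normal case
          return 0;
        }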
  2. 17 Oct, 2012 9 commits
  3. 16 Oct, 2012 5 commits
    • Neeraj Bisht's avatar
      Bug#11745891 - LAST_INSERT(ID) DOES NOT SUPPORT BIGINT UNSIGNED · fb60e4b0
      Neeraj Bisht authored
      Problem:-
      Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does
      not work for values greater than the maximum signed BIGINT.
      
      Analysis:-
      last_insert_id() returns the first auto-incremented value for a column,
      and an auto-incremented value can only be positive.
      
      In our code, when initializing the last_insert_id object we treat it as a
      signed BIGINT, so once the auto-incremented value exceeds the maximum
      signed BIGINT, last_insert_id() returns a negative result.
      
      Solution:
      When fetching the value from last_insert_id(), set the unsigned_flag so
      that it only takes unsigned BIGINT values.
      
      sql/item_func.cc:
        Here the unsigned value was being converted to a signed value.
      sql/item_func.h:
        last_insert_id() returns an auto-incremented value, which can only be
        positive, so define it as an unsigned longlong and set the
        unsigned_flag to 1.
      fb60e4b0
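      A hedged illustration of why the unsigned_flag matters (the helper
      format_last_insert_id is hypothetical, not the Item_func_last_insert_id
      code): the same 64-bit value prints as a large negative number when it is
      interpreted as signed, which is exactly the symptom reported above.

        #include <cstdint>
        #include <iostream>

        // Hypothetical helper: render a stored auto-increment value under the
        // old (signed) interpretation or the fixed (unsigned) one.
        void format_last_insert_id(uint64_t stored, bool unsigned_flag) {
          if (unsigned_flag)
            std::cout << stored << '\n';                        // fixed behaviour
          else
            std::cout << static_cast<int64_t>(stored) << '\n';  // old behaviour
        }

        int main() {
          // An auto-increment value above the signed BIGINT maximum (2^63 - 1).
          uint64_t value = 9223372036854775808ULL;  // 2^63
          format_last_insert_id(value, false);      // prints -9223372036854775808
          format_last_insert_id(value, true);       // prints  9223372036854775808
          return 0;
        }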
    • Neeraj Bisht's avatar
      Bug#11745891 - LAST_INSERT(ID) DOES NOT SUPPORT BIGINT UNSIGNED · d29fb392
      Neeraj Bisht authored
      Problem:-
      Using last_insert_id() on an auto-incremented BIGINT UNSIGNED column does
      not work for values greater than the maximum signed BIGINT.
      
      Analysis:-
      last_insert_id() returns the first auto-incremented value for a column,
      and an auto-incremented value can only be positive.
      
      In our code, when initializing the last_insert_id object we treat it as a
      signed BIGINT, so once the auto-incremented value exceeds the maximum
      signed BIGINT, last_insert_id() returns a negative result.
      
      Solution:
      When fetching the value from last_insert_id(), set the unsigned_flag so
      that it only takes unsigned BIGINT values.
      
      sql/item_func.cc:
        Here the unsigned value was being converted to a signed value.
      sql/item_func.h:
        last_insert_id() returns an auto-incremented value, which can only be
        positive, so define it as an unsigned longlong and set the
        unsigned_flag to 1.
      d29fb392
    • unknown's avatar
      No commit message · 260bc4ba
      unknown authored
      No commit message
      260bc4ba
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · a04f290b
      Marko Mäkelä authored
      a04f290b
    • Marko Mäkelä's avatar
      Bug#14729221 IN-PLACE ALTER TABLE REPORTS '' INSTEAD OF · 20e1d3c6
      Marko Mäkelä authored
      REAL DUPLICATE VALUE FOR PREFIX KEYS
      
      innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
      instead of dict_index_get_nth_col_pos() to find the column.
      20e1d3c6
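      A rough model of the lookup change, with made-up types and functions
      (IndexField, nth_col_pos, nth_col_or_prefix_pos), not the actual
      dict_index_* code: when the duplicate key is defined on a column prefix,
      a lookup that only accepts whole-column fields misses the column, so the
      duplicate-key message ends up with an empty value; a prefix-aware lookup
      finds it.

        #include <cassert>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Hypothetical index field: which table column it covers and how many
        // bytes of a prefix it stores (0 = the whole column).
        struct IndexField {
          size_t col_no;
          size_t prefix_len;
        };

        // Old-style lookup: only matches fields covering the whole column.
        size_t nth_col_pos(const std::vector<IndexField> &fields, size_t col_no) {
          for (size_t i = 0; i < fields.size(); i++)
            if (fields[i].col_no == col_no && fields[i].prefix_len == 0) return i;
          return SIZE_MAX;  // "not found" -> caller reports an empty value
        }

        // Fixed-style lookup: a prefix field is an acceptable match too.
        size_t nth_col_or_prefix_pos(const std::vector<IndexField> &fields,
                                     size_t col_no) {
          for (size_t i = 0; i < fields.size(); i++)
            if (fields[i].col_no == col_no) return i;
          return SIZE_MAX;
        }

        int main() {
          std::vector<IndexField> idx = {{2, 10}};     // key on a 10-byte prefix of column 2
          assert(nth_col_pos(idx, 2) == SIZE_MAX);     // old lookup misses it
          assert(nth_col_or_prefix_pos(idx, 2) == 0);  // prefix-aware lookup finds it
          return 0;
        }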
  4. 15 Oct, 2012 3 commits
    • Krunal Bauskar krunal.bauskar@oracle.com's avatar
      removed warning message as they have changed in mysql-5.6 and mysql-trunk and... · dba286c9
      Removed warning messages, as they have changed in mysql-5.6 and mysql-trunk; this is left over from changes that got up-merged.
      dba286c9
    • Krunal Bauskar krunal.bauskar@oracle.com's avatar
      · 9f41245a
      bug#14704286
      SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
      LOOKUPS (honoring kill query while accessing sec_index)
      
      If a secondary index is being used for SELECT query evaluation and the
      query is operating with a consistent read snapshot, it might take a long
      time for the secondary index to return control to MySQL, as MVCC kicks in.
      
      If the user issues "kill query <id>" while the query is actively accessing
      the secondary index, it is not honored, because there is no hook that
      checks for this condition. Added a hook for this check.
      
      -----
      In parallel, the case of a secondary index taking too long to evaluate
      under a consistent read snapshot is being examined for performance
      improvement under WL#6540.
      9f41245a
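      A standalone sketch, not the InnoDB patch itself, of what "added hook"
      means here (query_killed and scan_secondary_index are invented names):
      the long row-by-row loop over the secondary index periodically consults a
      kill flag and bails out early, so KILL QUERY takes effect even while each
      fetch is made expensive by the consistent-read/MVCC work.

        #include <atomic>
        #include <chrono>
        #include <cstdint>
        #include <iostream>
        #include <thread>

        // Hypothetical kill flag; in the server this would come from the THD.
        std::atomic<bool> query_killed{false};

        // Scan `rows` index entries, checking the kill flag every `check_every`
        // iterations -- the hook the fix adds to the secondary-index path.
        uint64_t scan_secondary_index(uint64_t rows, uint64_t check_every) {
          uint64_t visited = 0;
          for (uint64_t i = 0; i < rows; i++) {
            if (i % check_every == 0 && query_killed.load()) break;  // honor KILL QUERY
            // ... expensive consistent-read / undo-page work would happen here ...
            std::this_thread::sleep_for(std::chrono::microseconds(10));
            visited++;
          }
          return visited;
        }

        int main() {
          std::thread killer([] {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            query_killed = true;                 // simulate "kill query <id>"
          });
          uint64_t n = scan_secondary_index(1000000, 128);
          killer.join();
          std::cout << "stopped after " << n << " rows\n";  // far fewer than 1000000
          return 0;
        }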
    • Krunal Bauskar krunal.bauskar@oracle.com's avatar
      · 5156605e
      bug#14704286
      SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
      LOOKUPS (honoring kill query while accessing sec_index)
      
      If a secondary index is being used for SELECT query evaluation and the
      query is operating with a consistent read snapshot, it might take a long
      time for the secondary index to return control to MySQL, as MVCC kicks in.
      
      If the user issues "kill query <id>" while the query is actively accessing
      the secondary index, it is not honored, because there is no hook that
      checks for this condition. Added a hook for this check.
      
      -----
      In parallel, the case of a secondary index taking too long to evaluate
      under a consistent read snapshot is being examined for performance
      improvement under WL#6540.
      5156605e
  5. 12 Oct, 2012 2 commits
  6. 10 Oct, 2012 2 commits
  7. 09 Oct, 2012 6 commits
  8. 08 Oct, 2012 4 commits
    • Praveenkumar Hulakund's avatar
      Bug#11756600 - SLAVE THREAD CAN CRASH IF EVENT SCHEDULER · b238ee9a
      Praveenkumar Hulakund authored
                     FAILS TO READ EVENT TABLE AT STARTUP.
      
      This issue is fixed in 5.5+ versions. This patch adds a test
      case for this scenario.
      b238ee9a
    • Annamalai Gurusami's avatar
      Bug #14036214 MYSQLD CRASHES WHEN EXECUTING UPDATE IN TRX WITH · 97591cf1
      Annamalai Gurusami authored
      CONSISTENT SNAPSHOT OPTION
      
      A transaction is started with a consistent snapshot.  After
      the transaction is started, new indexes are added to the
      table.  When we then issue an UPDATE statement, the optimizer
      chooses an index.  When the index scan is being initialized
      via ha_innobase::change_active_index(), InnoDB reports
      the error code HA_ERR_TABLE_DEF_CHANGED, with a message
      stating "insufficient history for index".
      
      This error message is propagated up to the SQL layer, but
      the my_error() API is never called.  The statement-level
      diagnostics area is not updated with the correct error
      status (it remains in Diagnostics_area::DA_EMPTY).
      
      Hence the following check in Protocol::end_statement()
      fails.
      
       516   case Diagnostics_area::DA_EMPTY:
       517   default:
       518     DBUG_ASSERT(0);
       519     error= send_ok(thd->server_status, 0, 0, 0, NULL);
       520     break;
      
      The fix is to backport the fix of bugs 14365043, 11761652 
      and 11746399. 
      
      14365043 PROTOCOL::END_STATEMENT(): ASSERTION `0' FAILED
      11761652 HA_RND_INIT() RESULT CODE NOT CHECKED
      11746399 RETURN VALUES OF HA_INDEX_INIT() AND INDEX_INIT() IGNORED
      
      rb://1227 approved by guilhem and mattiasj.
      97591cf1
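      A simplified, self-contained sketch of the failure mode and the shape of
      the fix (Diag, run_update and change_active_index are stand-ins, not the
      real Diagnostics_area or handler code): if a storage-engine error code is
      propagated without ever being recorded, the end-of-statement check sees an
      empty diagnostics area and asserts; recording every non-zero return, as
      the backported fixes do, keeps the check satisfied.

        #include <cassert>
        #include <cstdio>
        #include <string>

        // Hypothetical, cut-down diagnostics area: empty, OK, or error.
        struct Diag {
          enum State { DA_EMPTY, DA_OK, DA_ERROR } state = DA_EMPTY;
          std::string message;
          void set_error(const std::string &msg) { state = DA_ERROR; message = msg; }
        };

        // Stand-in for ha_innobase::change_active_index() failing.
        int change_active_index() { return 1; /* HA_ERR_TABLE_DEF_CHANGED-like */ }

        // With record_errors == false this mimics the bug: the error code is
        // propagated but never recorded.  With true, every non-zero return is
        // turned into a recorded error (the my_error() equivalent).
        void run_update(Diag &da, bool record_errors) {
          int err = change_active_index();
          if (err != 0 && record_errors)
            da.set_error("insufficient history for index");
        }

        void end_statement(const Diag &da) {
          // Mirrors the Protocol::end_statement() check quoted above.
          assert(da.state != Diag::DA_EMPTY && "statement ended with empty DA");
          std::printf("statement ended, state %d: %s\n",
                      (int) da.state, da.message.c_str());
        }

        int main() {
          Diag da;
          run_update(da, /*record_errors=*/true);  // the fixed behaviour
          end_statement(da);
          return 0;
        }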
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 2460c07b
      Marko Mäkelä authored
      Also, add a debug check for trx_id sanity to row_upd_rec_sys_fields().
      2460c07b
    • Marko Mäkelä's avatar
      Bug#14731482 UPDATE OR DELETE CORRUPTS A RECORD WITH A LONG PRIMARY KEY · 0f762b48
      Marko Mäkelä authored
      We did not allocate enough bits for index->trx_id_offset, causing an
      UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
      to corrupt the PRIMARY KEY.
      
      dict_index_t: Allocate enough bits.
      
      dict_index_build_internal_clust(): Check for overflow of
      index->trx_id_offset. Trip a debug assertion when overflow occurs.
      
      rb:1380 approved by Jimmy Yang
      0f762b48
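      To make "did not allocate enough bits" concrete, here is a small
      self-contained sketch; the widths 10 and 12 are illustrative, not
      necessarily the real dict_index_t values. A too-narrow bit-field silently
      wraps the trx_id offset of a long PRIMARY KEY, while a wide-enough field
      plus a debug assertion matches the shape of the fix.

        #include <cassert>
        #include <cstdio>

        // Hypothetical cut-down index descriptors.
        struct IndexNarrow { unsigned trx_id_offset : 10; };  // too few bits
        struct IndexWide   { unsigned trx_id_offset : 12; };  // "enough bits"

        int main() {
          unsigned pk_len = 1200;              // PRIMARY KEY longer than 1024 bytes

          IndexNarrow bad{};
          bad.trx_id_offset = pk_len & 0x3FF;  // what the unchecked store keeps
          std::printf("narrow field stores %u instead of %u\n",
                      (unsigned) bad.trx_id_offset, pk_len);  // 176: corruption

          IndexWide good{};
          assert(pk_len < (1u << 12));         // debug check for overflow, as in the fix
          good.trx_id_offset = pk_len;
          std::printf("wide field stores %u\n", (unsigned) good.trx_id_offset);
          return 0;
        }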
  9. 04 Oct, 2012 1 commit
    • Jon Olav Hauglid's avatar
      Bug#14640599 MEMORY LEAK WHEN EXECUTING STORED ROUTINE EXCEPTION HANDLER · a17ea605
      Jon Olav Hauglid authored
      When a SP handler is activated, memory is allocated to hold the
      MESSAGE_TEXT for the condition that caused the activation.
      
      The problem was that this memory was allocated on the MEM_ROOT belonging
      to the stored program. Since this MEM_ROOT is not freed until the
      stored program ends, a stored program that causes lots of handler
      activations can start using lots of memory. In 5.1 and earlier the
      problem did not exist as no MESSAGE_TEXT was allocated if a condition
      was raised with a handler present. However, this behavior led to
      a number of other issues such as Bug#23032.
      
      This patch fixes the problem by allocating enough memory for the
      necessary MESSAGE_TEXTs in the SP MEM_ROOT when the SP starts and
      then re-using this memory each time a handler is activated.
            
      This is the 5.5 version of the patch.
      a17ea605
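      A minimal sketch of the allocation strategy described above, with made-up
      types (Arena, HandlerTexts) instead of the real MEM_ROOT and stored
      program runtime: the per-handler MESSAGE_TEXT buffers are carved out once
      when the routine starts and reused on every activation, so repeated
      activations no longer grow memory that is only released when the routine
      ends.

        #include <cstddef>
        #include <cstdio>
        #include <string>
        #include <vector>

        // Hypothetical arena standing in for the stored program's MEM_ROOT:
        // everything allocated from it lives until the routine ends.
        struct Arena {
          std::vector<std::string> blocks;
          char *alloc(size_t n) { blocks.emplace_back(n, '\0'); return blocks.back().data(); }
        };

        // Pre-allocate one MESSAGE_TEXT buffer per declared handler when the
        // routine starts, then reuse it on every activation.
        struct HandlerTexts {
          static constexpr size_t MSG_LEN = 512;   // illustrative message size
          std::vector<char *> slots;

          HandlerTexts(Arena &arena, size_t handler_count) {
            for (size_t i = 0; i < handler_count; i++) slots.push_back(arena.alloc(MSG_LEN));
          }
          void activate(size_t handler, const char *message_text) {
            std::snprintf(slots[handler], MSG_LEN, "%s", message_text);  // reuse, no growth
          }
        };

        int main() {
          Arena sp_mem_root;
          HandlerTexts texts(sp_mem_root, /*handler_count=*/1);
          for (int i = 0; i < 100000; i++)         // many handler activations
            texts.activate(0, "Duplicate entry '42' for key 'PRIMARY'");
          std::printf("arena blocks allocated: %zu\n", sp_mem_root.blocks.size());  // stays 1
          return 0;
        }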
  10. 03 Oct, 2012 2 commits
    • Tor Didriksen's avatar
      Bug#13713525 CREATE_INITIAL_DB.CMAKE IS FAILING ON WINDOWS, STILL "DEVENV" RETURNS 0 · 004e51df
      Tor Didriksen authored
      This bug depends on cmake version.
      
      For cmake 2.6 (which is still in use for some pushbuild trees)
      the main build would succeed, even if create_initial_db failed.
      
      The problem was the chaining of commands in the CUSTOM_COMMAND
      to produce 'initdb.dep'. It first invokes cmake to run mysqld,
      then invokes 'touch' to create the file. Moving the 'touch'
      command makes the error propagate properly for both cmake 2.6 and 2.8.
      
      004e51df
    • Jon Olav Hauglid's avatar
      Bug#14495351: CRASH IN HA_PARTITION::HANDLE_UNORDERED_NEXT · 99eb2ac4
      Jon Olav Hauglid authored
      Follow-up patch - Fix broken build:
      error: format ‘%u’ expects argument of type ‘unsigned int’,
      but argument 2 has type ‘key_part_map {aka long unsigned int}’
      [-Werror=format]
      99eb2ac4
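      A tiny illustration of the class of warning being fixed; the typedef and
      the message below are stand-ins, not the actual ha_partition code. Passing
      a long unsigned int to %u is what -Werror=format rejects; either the
      matching %lu specifier or an explicit cast fixes it.

        #include <cstdio>

        typedef unsigned long key_part_map;  // "aka long unsigned int" in the error

        int main() {
          key_part_map keypart_map = 0x3;
          // std::printf("keypart_map: %u\n", keypart_map);         // rejected by -Werror=format
          std::printf("keypart_map: %lu\n", keypart_map);            // matching specifier
          std::printf("keypart_map: %u\n", (unsigned) keypart_map);  // or an explicit cast
          return 0;
        }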
  11. 01 Oct, 2012 2 commits