1. 15 Nov, 2012 2 commits
    • Marko Mäkelä's avatar
      Bug#15872736 FAILING ASSERTION · e882efe6
      Marko Mäkelä authored
      Remove a bogus debug assertion.
      e882efe6
    • Marko Mäkelä's avatar
      Bug#15874001 CREATE INDEX ON A UTF8 CHAR COLUMN FAILS WITH ROW_FORMAT=REDUNDANT · e5ad4171
      Marko Mäkelä authored
      CHAR(n) in ROW_FORMAT=REDUNDANT tables is always fixed-length
      (n*mbmaxlen bytes), but in the temporary file it is variable-length
      (n*mbminlen to n*mbmaxlen bytes) for variable-length character sets,
      such as UTF-8.
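
      As a rough illustration of the size difference (a standalone sketch with
      example values, not InnoDB code; MySQL's utf8 has mbminlen = 1 and
      mbmaxlen = 3):

        #include <cstdio>

        int main() {
            // Hypothetical example: a CHAR(10) column in a utf8-like charset.
            const unsigned n = 10, mbminlen = 1, mbmaxlen = 3;

            // ROW_FORMAT=REDUNDANT always stores the column at full width.
            unsigned redundant_len = n * mbmaxlen;        // 30 bytes, fixed

            // The ROW_FORMAT=COMPACT-based temporary file stores it
            // variable-length for variable-length character sets.
            unsigned temp_min = n * mbminlen;             // 10 bytes
            unsigned temp_max = n * mbmaxlen;             // 30 bytes

            printf("redundant: %u bytes, temp file: %u..%u bytes\n",
                   redundant_len, temp_min, temp_max);
            return 0;
        }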
      
      The temporary file format used during index creation and online ALTER
      TABLE is based on ROW_FORMAT=COMPACT. Thus, it should use the
      variable-length encoding even if the base table is in
      ROW_FORMAT=REDUNDANT.
      
      dtype_get_fixed_size_low(): Replace an assertion-like check with a
      debug assertion.
      
      rec_init_offsets_comp_ordinary(), rec_convert_dtuple_to_rec_comp():
      Make these inline functions.  Replace the parameter 'ulint extra' with 'bool temp'.
      
      rec_get_converted_size_comp_prefix_low(): Renamed from
      rec_get_converted_size_comp_prefix(), and made inline. Add the
      parameter 'bool temp'. If temp=true, do not add REC_N_NEW_EXTRA_BYTES.
      
      rec_get_converted_size_comp_prefix(): Remove the comment about
      dict_table_is_comp(). This function is only to be called for records
      in formats other than ROW_FORMAT=REDUNDANT.
      
      rec_get_converted_size_temp(): New function for computing temporary
      file record size. Omit REC_N_NEW_EXTRA_BYTES from the sizes.
      
      rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): New functions,
      for operating on temporary file records.
      
      rb:1559 approved by Jimmy Yang
      e5ad4171
  2. 14 Nov, 2012 2 commits
  3. 12 Nov, 2012 1 commit
  4. 09 Nov, 2012 2 commits
    • Annamalai Gurusami's avatar
      Bug #14669848 CRASH DURING ALTER MAKES ORIGINAL TABLE INACCESSIBLE · 2ad007df
      Annamalai Gurusami authored
      When a new primary key is added to an InnoDB table, the InnoDB plugin
      takes the following steps:
      
      .  let t1 be the original table.
      .  a temporary table t1@00231 will be created by cloning t1.
      .  all data will be copied from t1 to t1@00231.
      .  rename t1 to t1@00232.
      .  rename t1@00231 to t1.
      .  drop t1@00232.
      
      The rename and drop operations involve file operations, but file operations
      cannot be rolled back.  So in row_merge_rename_tables(), just after updating
      the data dictionary and before doing any file operations, generate redo logs
      for the file operations and commit the transaction.  This ensures that after
      any crash following this commit, the table is still recoverable by moving the
      .ibd and .frm files.  Manual recovery is required.
      
      During recovery, the rename file operation redo logs are processed.
      Previously they were ignored.
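
      A simplified sketch of the ordering described above (hypothetical helper
      names standing in for the real dictionary, redo log and file calls; this
      is not the actual row_merge_rename_tables() code):

        #include <cstdio>

        // Stubs standing in for the real dictionary, log and file operations.
        static void update_data_dictionary() { printf("dictionary updated\n"); }
        static void log_file_rename(const char *from, const char *to) {
            printf("redo: rename %s -> %s\n", from, to);
        }
        static void commit_transaction() { printf("commit\n"); }
        static void rename_file(const char *from, const char *to) {
            printf("rename %s -> %s\n", from, to);
        }
        static void drop_file(const char *name) { printf("drop %s\n", name); }

        int main() {
            update_data_dictionary();
            // Generate redo logs for the file operations and commit *before*
            // touching any files; file operations cannot be rolled back, but
            // after this commit a crash leaves the table recoverable.
            log_file_rename("t1", "t1@00232");
            log_file_rename("t1@00231", "t1");
            commit_transaction();
            rename_file("t1", "t1@00232");
            rename_file("t1@00231", "t1");
            drop_file("t1@00232");
            return 0;
        }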
      
      rb://1460 approved by Marko Makela.
      2ad007df
    • Anirudh Mangipudi's avatar
      BUG#11762933: MYSQLDUMP WILL SILENTLY SKIP THE `EVENT` · 14dfe6fc
      Anirudh Mangipudi authored
                    TABLE DATA IF DUMPS MYSQL DATABA
      Problem: If mysqldump is run without --events (or with --skip-events),
      it will not dump the mysql.event table's data. This behaviour is inconsistent
      with that of the --routines option, which does not affect the dumping of
      the mysql.proc table. According to the manual, --events (--skip-events)
      determines whether the Event Scheduler events for the dumped databases should
      be included in the mysqldump output, and this has nothing to do with the
      mysql.event table itself.
      Solution: A warning has been added when mysqldump is used without --events 
      (or with --skip-events) and a separate patch with the behavioral change 
      will be prepared for 5.6/trunk.
      14dfe6fc
  5. 08 Nov, 2012 2 commits
    • Aditya A's avatar
      Bug#14234028 - CRASH DURING SHUTDOWN WITH BACKGROUND PURGE THREAD · 7a8c93e6
      Aditya A authored
       
       Analysis
       --------- 
       
       my_stat() calls stat(), and if the stat() call fails we try to set
       the variable my_errno, which is actually thread-specific data.
       We try to get the address of this thread-specific data using
       my_pthread_getspecific(), but for the purge thread we have not defined
       any thread-specific data, so it returns NULL, and when dereferencing
       NULL we get a segmentation fault.

       init_available_charsets(), seen in the core stack, is invoked
       through pthread_once(), which is used for one-time initialization.
       Since free_charsets() is called before the InnoDB plugin shutdown,
       the purge thread calls init_available_charsets(), which leads
       to the crash.
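
       A standalone sketch of the failure mode (plain pthread calls, not the
       my_* wrappers): a thread that never stored a value for the key gets NULL
       back, and writing through that pointer would crash.

        #include <pthread.h>
        #include <cstdio>

        static pthread_key_t errno_key;
        static pthread_once_t once_control = PTHREAD_ONCE_INIT;

        static void make_key() { pthread_key_create(&errno_key, nullptr); }

        // Plays the role of the purge thread: it never initialized its
        // thread-specific data, so pthread_getspecific() returns NULL.
        static void *purge_like_thread(void *) {
            pthread_once(&once_control, make_key);   // one-time initialization
            int *my_errno_slot =
                static_cast<int *>(pthread_getspecific(errno_key));
            printf("slot for this thread: %p\n",
                   static_cast<void *>(my_errno_slot));
            // *my_errno_slot = 13;   // dereferencing the NULL slot would crash
            return nullptr;
        }

        int main() {
            pthread_t t;
            pthread_create(&t, nullptr, purge_like_thread, nullptr);
            pthread_join(t, nullptr);
            return 0;
        }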
      
       Fix
       ---
       Call free_charsets() after the InnoDB plugin shutdown, since the purge
       threads are still using the charsets.
      7a8c93e6
    • Aditya A's avatar
      Bug#11751825 - OPTIMIZE PARTITION RECREATES FULL TABLE INSTEAD JUST PARTITION · 078d7a87
      Aditya A authored
      Follow-up patch to address the PB2 failures.
      078d7a87
  6. 07 Nov, 2012 1 commit
    • Venkata Sidagam's avatar
      Bug #11759445: CAN'T DELETE ROWS FROM MEMORY TABLE WITH HASH KEY. · 2226b108
      Venkata Sidagam authored
      Brief description: After inserting some rows into a MEMORY table with a HASH
      key, some rows can't be deleted in one step.
      
      Problem analysis/solution: info->current_ptr holds the current hash pointer,
      from which we can traverse the list to get all the remaining tuples.

      In hp_delete_key() we update info->current_ptr with last_pos based on
      the flag parameter (which indicates that the keydef and the last index
      are the same). As part of the fix, we set it to zero only when control
      reaches the end of hp_delete_key(). This means that the next record to be
      deleted will be at the start of the list, so the next call to read a
      record (heap_rnext()) will take the line 100 path instead of the line 102
      path; see the code below from file hp_rnext.c, function heap_rnext().
       99       else if (!info->current_ptr)              /* Deleted or first call */
      100         pos= hp_search(info, keyinfo, info->lastkey, 0);
      101       else  
      102         pos= hp_search(info, keyinfo, info->lastkey, 1);
      
      With that change, hp_search() will update info->current_ptr with the
      record that needs to be deleted.
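
      A standalone sketch of the cursor rule (hypothetical structures, not the
      actual heap engine code): once current_ptr is cleared, the next read
      re-searches the hash chain from its head instead of continuing from a
      stale position.

        struct Node   { int key; Node *next_same_key; };
        struct Cursor { Node *current_ptr = nullptr; };

        // Mimics the two hp_search() paths quoted above: a cleared cursor
        // ("deleted or first call") restarts at the head of the chain,
        // otherwise the scan continues after the current record.
        Node *next_record(Node *chain_head, Cursor &c) {
            Node *pos = (c.current_ptr == nullptr)
                            ? chain_head                      // line 100 path
                            : c.current_ptr->next_same_key;   // line 102 path
            c.current_ptr = pos;
            return pos;
        }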
      
      storage/heap/hp_delete.c:
        In the hp_delete_key() function we set info->current_ptr to 0 if
        the flag is enabled.
      2226b108
  7. 06 Nov, 2012 1 commit
    • Aditya A's avatar
      Bug#11751825 - OPTIMIZE PARTITION RECREATES FULL TABLE INSTEAD JUST PARTITION · b6d33629
      Aditya A authored
      PROBLEM 
      -------
      
      Optimize on a partition will recreate the whole table
      instead of just the partition.
      
      ANALYSIS
      --------
      
      At present InnoDB doesn't support the optimize option, so we do a rebuild of
      the whole table and then call analyze() on the table. Presently, for any
      optimize() option (on a table or a partition) we display the following info
      to the user:
      
      "Table does not support optimize, doing recreate + analyze instead".
      
      FIX
      ---
      
      It was decided that for GA versions (5.1 and 5.5), whenever the user tries to
      optimize a partition (or partitions), we will display the following info to the user:
      
      "Table does not support optimize on partitions.
      All partitions will be rebuilt and analyzed."
      
      Earlier, partitions were not analyzed. Now all partitions will be analyzed.
      
      If the user wants to optimize the whole table, we will display the
      previous info to the user, i.e.
      
      "Table does not support optimize, doing recreate + analyze instead"
      
      For 5.6+ versions we will raise a new bug to support optimize() options
      in InnoDB.
      
      b6d33629
  8. 05 Nov, 2012 1 commit
  9. 01 Nov, 2012 1 commit
  10. 30 Oct, 2012 2 commits
    • Anirudh Mangipudi's avatar
      BUG#11754894: MYISAMCHK ERROR HAS INCORRECT REFERENCE · 40b8b951
      Anirudh Mangipudi authored
                    TO 'MYISAM_SORT_BUFFER_SIZE'
      Problem: 'myisam_sort_buffer_size' is a parameter used by the
      mysqld program only, whereas 'sort_buffer_size' is used by both the
      mysqld and myisamchk programs. But the error message printed
      when the myisamchk program is run with an insufficient buffer size is
      "myisam_sort_buffer_size is too small", which may mislead the user into
      thinking of the server parameter myisam_sort_buffer_size.
      SOLUTION: A parameter 'myisam_sort_buffer_size' is added as an
      alias for 'sort_buffer_size', and the 'sort_buffer_size' parameter
      is marked as deprecated. So myisamchk also has both parameters,
      with the same role.
      40b8b951
    • Shivji Kumar Jha's avatar
      BUG#14659685 - main.mysqlbinlog_row_myisam and · 7c7de142
      Shivji Kumar Jha authored
                     main.mysqlbinlog_row_innodb are skipped by mtr
      
      === Problem ===
      
      The following tests are wrongly placed in the main suite and as a
      result are not run with the proper binlog format combinations.
      Some are always skipped by mtr.
      1) mysqlbinlog_row_myisam
      2) mysqlbinlog_row_innodb
      3) mysqlbinlog_row.test
      4) mysqlbinlog_row_trans.test
      5) mysqlbinlog-cp932
      6) mysqlbinlog2
      7) mysqlbinlog_base64
      
      === Background ===
      
      mtr runs the tests placed in the main suite with binlog format=stmt.
      Those that need to be tested against binlog format=row or mixed,
      or against more than one binlog format, and require only one mysql server,
      are placed in the binlog suite. mtr runs tests in the binlog suite with
      all three binlog formats (stmt, row and mixed).
      
      === Fix ===
      
      
      1) Moved the tests listed in the problem section above to the binlog suite.
      2) Added the prefix "binlog_" to the name of each test case moved.
         Renamed the corresponding result files and option files accordingly.
      
      
      mysql-test/extra/binlog_tests/mysqlbinlog_row_engine.inc:
        include file for mysqlbinlog_row_myisam.test and
        mysqlbinlog_row_innodb.test which are being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog-cp932.result:
        result file for mysqlbinlog-cp932.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog2.result:
        result file for mysqlbinlog2.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog_base64.result:
        result file for mysqlbinlog_base64.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog_row.result:
        result file for mysqlbinlog_row.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_innodb.result:
        result file for mysqlbinlog_row_innodb.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_myisam.result:
        result file for mysqlbinlog_row_myisam.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/r/binlog_mysqlbinlog_row_trans.result:
        result file for mysqlbinlog_row_trans.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog-cp932-master.opt:
        option file for mysqlbinlog-cp932.test which is being moved to
        binlog suite.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog-cp932.test:
        the test requires binlog format=stmt or mixed. Since it was placed in
        the main suite earlier, it was only run with binlog format=stmt, and hence
        this test was never run with binlog format=mixed.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog2.test:
        the test requires binlog format=stmt or mixed. Since it was placed in
        the main suite earlier, it was only run with binlog format=stmt, and hence
        this test was never run with binlog format=mixed.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog_base64.test:
        the test requires binlog format=row. Since it was placed in the main
        suite earlier, it was only run with binlog format=stmt, and hence
        this test was always skipped by mtr.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog_row.test:
        the test requires binlog format=row. Since it was placed in the main
        suite earlier, it was only run with binlog format=stmt, and hence
        this test was always skipped by mtr.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_innodb.test:
        the test requires binlog format=row. Since it was placed in the main
        suite earlier, it was only run with binlog format=stmt, and hence
        this test was always skipped by mtr.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_myisam.test:
        the test requires binlog format=row. Since it was placed in the main
        suite earlier, it was only run with binlog format=stmt, and hence
        this test was always skipped by mtr.
      mysql-test/suite/binlog/t/binlog_mysqlbinlog_row_trans.test:
        the test requires binlog format=row. Since it was placed in the main
        suite earlier, it was only run with binlog format=stmt, and hence
        this test was always skipped by mtr.
      7c7de142
  11. 29 Oct, 2012 2 commits
  12. 22 Oct, 2012 1 commit
    • Marko Mäkelä's avatar
      Backport from 5.6: Bug#14769820 ASSERT FLEN == LEN · d13554b1
      Marko Mäkelä authored
      IN ALTER TABLE ... ADD UNIQUE KEY
      
      A bogus debug assertion failure occurred when reporting a duplicate
      key on a column prefix of a CHAR column.
      
      This is a regression from Bug#14729221 IN-PLACE ALTER TABLE REPORTS ''
      INSTEAD OF REAL DUPLICATE VALUE FOR PREFIX KEYS. The assertion is only
      present when UNIV_DEBUG is defined (which it is in debug builds
      starting from MySQL 5.5). It is a case of overasserting.
      
      Fix approved by Inaam Rana on IM.
      d13554b1
  13. 21 Oct, 2012 2 commits
  14. 19 Oct, 2012 1 commit
  15. 18 Oct, 2012 2 commits
    • Neeraj Bisht's avatar
      Bug#13726751 - 8 BYTE MEMORY LEAK IN DO_SAVE_BLOB · eef1a195
      Neeraj Bisht authored
      Problem:-
      When we execute a query that has a subquery with GROUP BY and ORDER BY and
      a BLOB column, the result is a memory leak.
      
      Analysis:-
      In the case of a subquery that has GROUP BY on a BLOB and ORDER BY on another
      field, where the BLOB is not a key, we allocate a tmp buffer to copy_field to
      take care of the BLOB value. This copy_field value can have copies of itself
      in two join objects, so while freeing this copy_field we have to take care
      that it is not deleted twice.
      The double deletion of tmp_table_param.copy_field is handled by two patches.
      
      One by Kostja:
      revid:sp1r-konstantin@mysql.com-20050627101056-55153
      Fix the broken test suite in -debug build.
      
      and the other by Oleksandr:
      revid:sp1r-bell@sanja.is.com.ua-20060118114857-19905
      Excluded posibility of tmp_table_param.copy_field double deletion (BUG#14851).
      
      Both of these patches were committed in different branches, and while
      merging they both got included, but there is no need for Kostja's patch,
      as Oleksandr's patch handles this.
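
      A generic illustration of the hazard (not the server code): when two owners
      can reach the same buffer, only one cleanup path may free it, and clearing
      the pointer makes a second cleanup harmless.

        #include <cstdio>

        struct CopyField { char *buf = nullptr; };

        void free_copy_field(CopyField &f) {
            delete[] f.buf;
            f.buf = nullptr;     // a second call is now a harmless no-op
        }

        int main() {
            CopyField shared;
            shared.buf = new char[16];
            free_copy_field(shared);   // first owner's cleanup
            free_copy_field(shared);   // second owner's cleanup: no double delete
            printf("done\n");
            return 0;
        }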
      
      
      sql/sql_select.cc:
        Bug#13726751: tmp_join cleanup is not necessary, as later in the code we take care of cleaning up the tmp_join copy_field.
      eef1a195
    • Marko Mäkelä's avatar
      Bug#14758405: ALTER TABLE: ADDING SERIAL NULL DATATYPE: ASSERTION: · 52ea1522
      Marko Mäkelä authored
      LEN <= SIZEOF(ULONGLONG)
      
      This bug was caught in the WL#6255 ALTER TABLE...ADD COLUMN in MySQL
      5.6, but there is a bug in all InnoDB versions that support
      auto-increment columns.
      
      row_search_autoinc_read_column(): When reading the maximum value of
      the auto-increment column, and the column only contains NULL values,
      return 0. This corresponds to the case when the table is empty in
      row_search_max_autoinc().
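
      A standalone sketch of the rule (illustrative only, not the InnoDB code):
      when every value in the auto-increment column is NULL, the maximum is
      taken to be 0, exactly as for an empty table.

        #include <cstdint>
        #include <cstdio>
        #include <optional>
        #include <vector>

        uint64_t max_autoinc(const std::vector<std::optional<uint64_t>> &col) {
            uint64_t max_val = 0;                 // the value for "no rows at all"
            for (const auto &v : col)
                if (v && *v > max_val)
                    max_val = *v;                 // NULLs are simply skipped
            return max_val;
        }

        int main() {
            printf("%llu\n", (unsigned long long)
                   max_autoinc({std::nullopt, std::nullopt}));   // prints 0
            return 0;
        }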
      
      rb:1415 approved by Sunny Bains
      52ea1522
  16. 17 Oct, 2012 4 commits
  17. 16 Oct, 2012 2 commits
    • Neeraj Bisht's avatar
      Bug#11745891 - LAST_INSERT(ID) DOES NOT SUPPORT BIGINT UNSIGNED · bdb4104c
      Neeraj Bisht authored
      Problem:-
      Using last_insert_id() on an auto_incremented BIGINT UNSIGNED column does
      not work for values which are greater than the maximum signed BIGINT.
      
      Analysis:-
      last_insert_id() returns the first auto_incremented value for a column,
      and an auto_incremented value can only be positive.
      
      In our code, when we initialize a last_insert_id object, we treat it as
      a signed BIGINT, so when the auto_incremented value becomes greater than
      the maximum signed BIGINT, last_insert_id() gives a negative result.
      
      Solution:
      When fetching the value from last_insert_id, we set the
      unsigned_flag so that it takes only an unsigned BIGINT value.
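
      A minimal standalone illustration of the signed/unsigned distinction
      (example value only): the same 64-bit pattern prints as negative when
      interpreted as signed and as the intended value when interpreted as
      unsigned.

        #include <cstdio>
        #include <cstdint>

        int main() {
            // An auto-increment value above the signed BIGINT maximum.
            uint64_t autoinc = 9223372036854775810ULL;

            // On a typical two's-complement platform the signed view is negative.
            printf("as signed:   %lld\n", (long long) autoinc);
            printf("as unsigned: %llu\n", (unsigned long long) autoinc);
            return 0;
        }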
      
      sql/item_func.cc:
        Here the unsigned value was being converted to a signed value.
      sql/item_func.h:
        last_insert_id() gives an auto_incremented value, which can only be
        positive, so define it as an unsigned longlong and set the
        unsigned_flag to 1.
      bdb4104c
    • Marko Mäkelä's avatar
      Bug#14729221 IN-PLACE ALTER TABLE REPORTS '' INSTEAD OF · aec62476
      Marko Mäkelä authored
      REAL DUPLICATE VALUE FOR PREFIX KEYS
      
      innobase_rec_to_mysql(): Invoke dict_index_get_nth_col_or_prefix_pos()
      instead of dict_index_get_nth_col_pos() to find the column.
      aec62476
  18. 15 Oct, 2012 1 commit
    • Krunal Bauskar krunal.bauskar@oracle.com's avatar
      · 9bfc910f
      bug#14704286
      SECONDARY INDEX UPDATES MAKE CONSISTENT READS DO O(N^2) UNDO PAGE
      LOOKUPS (honoring kill query while accessing sec_index)
      
      If a secondary index is being used for SELECT query evaluation and this
      query is operating with a consistent read snapshot, it might take a long time
      for the secondary index to return control to MySQL, as MVCC would kick in.
      
      If the user issues "kill query <id>" while the query is actively accessing
      the secondary index, it will not be honored, as there is no hook to check
      for this condition. Added a hook for this check.
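
      A standalone sketch of the hook (hypothetical names, not the InnoDB code):
      a long index scan checks a kill flag periodically so that KILL QUERY is
      noticed promptly.

        #include <atomic>
        #include <cstdio>

        std::atomic<bool> query_killed{false};

        // Returns false if the scan was interrupted by KILL QUERY.
        bool scan_secondary_index(int n_recs) {
            for (int i = 0; i < n_recs; ++i) {
                if (query_killed.load())     // the added check
                    return false;
                // ... look up older row versions for the consistent read
                //     snapshot (the MVCC work that makes this slow) ...
            }
            return true;
        }

        int main() {
            query_killed = true;             // simulate "kill query <id>"
            printf("completed: %d\n", scan_secondary_index(1000000));
            return 0;
        }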
      
      -----
      In parallel, the case of the secondary index taking too long to evaluate for
      a consistent read snapshot is being examined for performance improvement in
      WL#6540.
      9bfc910f
  19. 12 Oct, 2012 2 commits
    • Marc Alff's avatar
      Bug#14629232 SECURITY VULNERABILITY WITH SHOW PROFILE · fc1fbe15
      Marc Alff authored
      This fix resolves a security vulnerability of SHOW PROFILE.
      
      See the bug report for details.
      fc1fbe15
    • Nuno Carvalho's avatar
      BUG#14629727: USER_VAR_EVENT IS MISSING RANGE CHECKS · f1d3b0f1
      Nuno Carvalho authored
      This bug had two problems:
       P1) Reads out of bounds;
       P2) Writes out of bounds.
      
      PROBLEM P1
      ----------
      User_var_log_event unmarshalling from the binlog was not performing range
      checks when using the name_len and val_len variables to walk the event
      buffer.
      
      Added range checks to User_var_log_event unmarshalling to prevent
      unmarshalling errors.
      
      PROBLEM P2
      ----------
      The User_var_log_event value was allocated on the thread stack, which caused
      stack frame errors when the User_var_log_event value was bigger than the
      thread stack size.

      Now the value is allocated on the heap.
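
      A generic sketch of the two fixes (hypothetical buffer layout, not the
      actual binlog event format): validate the announced length against the
      remaining buffer before reading, and put the value on the heap rather than
      on the stack.

        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <vector>

        // Returns the value bytes, or an empty vector if the event is malformed.
        std::vector<unsigned char> read_value(const unsigned char *buf,
                                              size_t buf_len) {
            if (buf_len < 4)
                return {};               // no room even for the length field
            uint32_t val_len;
            memcpy(&val_len, buf, 4);
            if (val_len > buf_len - 4)
                return {};               // would read past the event buffer
            // Heap allocation (via std::vector) instead of a stack buffer, so a
            // large value cannot overflow the thread stack.
            return std::vector<unsigned char>(buf + 4, buf + 4 + val_len);
        }
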
      f1d3b0f1
  20. 10 Oct, 2012 1 commit
  21. 09 Oct, 2012 3 commits
  22. 08 Oct, 2012 1 commit
    • Marko Mäkelä's avatar
      Bug#14731482 UPDATE OR DELETE CORRUPTS A RECORD WITH A LONG PRIMARY KEY · b0662086
      Marko Mäkelä authored
      We did not allocate enough bits for index->trx_id_offset, causing an
      UPDATE or DELETE of a table with a PRIMARY KEY longer than 1024 bytes
      to corrupt the PRIMARY KEY.
      
      dict_index_t: Allocate enough bits.
      
      dict_index_build_internal_clust(): Check for overflow of
      index->trx_id_offset. Trip a debug assertion when overflow occurs.
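
      An illustrative standalone example of the kind of truncation a too-narrow
      bit-field causes (the field widths here are made up for the demonstration):

        #include <cstdio>

        struct narrow_index  { unsigned trx_id_offset : 10; };  // holds 0..1023
        struct widened_index { unsigned trx_id_offset : 12; };  // wide enough here

        int main() {
            narrow_index  n{};
            widened_index w{};
            n.trx_id_offset = 1200;   // offset within a >1024-byte PRIMARY KEY
            w.trx_id_offset = 1200;
            // The narrow field silently stores 1200 % 1024 = 176.
            printf("narrow: %u, widened: %u\n",
                   (unsigned) n.trx_id_offset, (unsigned) w.trx_id_offset);
            return 0;
        }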
      
      rb:1380 approved by Jimmy Yang
      b0662086
  23. 01 Oct, 2012 2 commits
  24. 28 Sep, 2012 1 commit
    • Annamalai Gurusami's avatar
      Bug #13249921 ASSERT !BPAGE->FILE_PAGE_WAS_FREED, USUALLY IN · b59a64e2
      Annamalai Gurusami authored
      TRANSACTION ROLLBACK
      
      Description:  During the rollback operation, a blob page
      is removed earlier than desired.  Consider the following scenario:
      
      1. create table t1(a int primary key,b blob) engine=innodb;
      2. insert into t1 values (1,repeat('b',9000));
      3. begin;
      4. update t1 set b=concat(b,'b');
      5. update t1 set a=a+1;
      6. insert into t1 values (1,repeat('b',9000));
      7. rollback;
      
      The update operation in line 5 produces 2 undo log records. The first
      undo record (TRX_UNDO_DEL_MARK_REC) goes to trx->update_undo and the
      second undo record (TRX_UNDO_INSERT_REC) goes to trx->insert_undo.
      During rollback, they are executed out of order.
      
      When the undo record TRX_UNDO_DEL_MARK_REC is applied/executed,
      the blob ownership is also reset.  Because of this, the blob page
      is released earlier than desired.  This blob page should have been
      freed only as part of applying/executing the undo record
      TRX_UNDO_INSERT_REC.
      
      This problem can be avoided by executing the undo records in
      order.  This patch makes InnoDB execute the undo records
      in order.
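
      A standalone sketch of "in order" (illustrative only, not the InnoDB
      rollback code): at every rollback step, pick the most recent undo record
      across both undo logs instead of draining one log before the other.

        #include <cstdio>
        #include <vector>

        struct UndoRec { unsigned undo_no; const char *kind; };

        // Each log is ordered by undo_no; always undo the newest record next.
        void rollback(std::vector<UndoRec> insert_undo,
                      std::vector<UndoRec> update_undo) {
            while (!insert_undo.empty() || !update_undo.empty()) {
                bool take_insert =
                    !insert_undo.empty() &&
                    (update_undo.empty() ||
                     insert_undo.back().undo_no > update_undo.back().undo_no);
                std::vector<UndoRec> &log = take_insert ? insert_undo : update_undo;
                printf("apply undo no %u (%s)\n",
                       log.back().undo_no, log.back().kind);
                log.pop_back();
            }
        }

        int main() {
            // The DEL_MARK record was written before the INSERT record,
            // so it must be undone after it.
            rollback({{2, "TRX_UNDO_INSERT_REC"}},
                     {{1, "TRX_UNDO_DEL_MARK_REC"}});
            return 0;
        }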
      
      rb://1125 approved by Marko.
      b59a64e2