1. 01 Oct, 2012 2 commits
  2. 28 Sep, 2012 1 commit
    • Annamalai Gurusami's avatar
      Bug #13249921 ASSERT !BPAGE->FILE_PAGE_WAS_FREED, USUALLY IN · c9e3a834
      Annamalai Gurusami authored
      TRANSACTION ROLLBACK
      
      Description:  During the rollback operation, a blob page 
      is removed earlier than desired.  Consider the following scenario:
      
      1. create table t1(a int primary key,b blob) engine=innodb;
      2. insert into t1 values (1,repeat('b',9000));
      3. begin;
      4. update t1 set b=concat(b,'b');
      5. update t1 set a=a+1;
      6. insert into t1 values (1,repeat('b',9000));
      7. rollback;
      
      The update operation in step 5 produces 2 undo log records. The first
      undo record (TRX_UNDO_DEL_MARK_REC) goes to trx->update_undo and the
      second undo record (TRX_UNDO_INSERT_REC) goes to trx->insert_undo.
      During rollback, they are executed out of order.
      
      When the undo record TRX_UNDO_DEL_MARK_REC is applied/executed,
      the blob ownership is also reset.  Because of this the blob page
      is released earlier than desired.  This blob page should have been
      freed only as part of applying/executing the undo record
      TRX_UNDO_INSERT_REC.
      
      This problem can be avoided by executing the undo records in
      order.  This patch makes InnoDB execute the undo records in
      order.
      
      rb://1125 approved by Marko.
      c9e3a834
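
      A minimal sketch (plain C++ containers, not InnoDB's undo log structures)
      of the idea behind the fix: instead of draining one undo log before the
      other, records from both the insert and update undo logs are rolled back
      in strict descending order of their undo numbers, so the delete-mark
      record is not applied before the insert record that still owns the BLOB
      page. The undo numbers below are hypothetical.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct UndoRec {
          unsigned undo_no;   // hypothetical sequence number, assigned in execution order
          const char *type;
        };

        int main() {
          // The two records produced by the update in step 5 of the scenario above.
          std::vector<UndoRec> update_undo = {{3, "TRX_UNDO_DEL_MARK_REC"}};
          std::vector<UndoRec> insert_undo = {{4, "TRX_UNDO_INSERT_REC"}};

          // Merge both logs and roll back in strict reverse order of undo_no.
          std::vector<UndoRec> all;
          all.insert(all.end(), update_undo.begin(), update_undo.end());
          all.insert(all.end(), insert_undo.begin(), insert_undo.end());
          std::sort(all.begin(), all.end(),
                    [](const UndoRec &a, const UndoRec &b) { return a.undo_no > b.undo_no; });

          for (const UndoRec &rec : all)
            std::printf("rollback undo_no=%u %s\n", rec.undo_no, rec.type);
        }
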
  3. 26 Sep, 2012 2 commits
    • unknown's avatar
      No commit message · d1006884
      unknown authored
      No commit message
      d1006884
    • Akhila Maddukuri's avatar
      Description: · 2bfcfcb4
      Akhila Maddukuri authored
      --------
      After compiling from source, during make test I got the following error:
      
      test main.loaddata failed with error
      CURRENT_TEST: main.loaddata
      mysqltest: At line 592: query 'LOAD DATA INFILE 'tmpp.txt' INTO TABLE t1
      CHARACTER SET ucs2
      (@b) SET a=REVERSE(@b)' failed: 1115: Unknown character set: 'ucs2'
      
      I noticed other tests are skipped because of no ucs2
      main.mix2_myisam_ucs2                    [ skipped ]  Test requires: 'have_ucs2'
      
      Should main.loaddata be skipped if there is no ucs2?
      
      How To Repeat:
      ----------
      Run make test on compiled source that doesn't have ucs2
      
      Suggested fix:
      -------------
      The failing piece of the test should be moved from mysql-test/t/loaddata.test to
      mysql-test/t/ctype_ucs.test.
      2bfcfcb4
  4. 25 Sep, 2012 5 commits
    • Tor Didriksen's avatar
      Backport · b7169e68
      Tor Didriksen authored
      Bug #11764313 57135: CRASH IN ITEM_FUNC_CASE::FIND_ITEM WITH CASE WHEN
      Bug #11764818 57692: Crash in item_func_in::val_int() with ZEROFILL
      b7169e68
    • unknown's avatar
      No commit message · 4631687d
      unknown authored
      No commit message
      4631687d
    • unknown's avatar
      No commit message · 153d2468
      unknown authored
      No commit message
      153d2468
    • Jon Olav Hauglid's avatar
      Bug#14621627 THREAD CACHE IS UNFAIR · cbe38f3a
      Jon Olav Hauglid authored
      When a client connects to a MySQL server, first a THD object is created.
      If there are any idle server threads waiting, the THD object is then added
      to a list and a server thread is woken up. This thread then retrieves the 
      THD object from the list and starts executing.
      
      The problem was that this list of THD objects waiting for a server thread
      was not working in a FIFO fashion, but rather LIFO. This is unfair, as it
      means that the last THD added (= the last client connected) will be
      assigned a server thread first.

      Note, however, that for this to be a problem, several clients must be able
      to connect and have THD objects constructed before any server thread
      manages to be woken up. This is not a very likely scenario.
      
      This patch fixes the problem by changing the THD list to work FIFO
      rather than LIFO.
      
      This is the 5.1/5.5 version of the patch.
      cbe38f3a
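
      A minimal sketch (a std::deque instead of the server's real thread-cache
      list, with a simplified THD) of the behavioural change described above:
      waiting connections are handed to cached server threads first-in-first-out
      rather than last-in-first-out.

        #include <cstdio>
        #include <deque>
        #include <initializer_list>
        #include <string>

        struct THD { std::string client; };

        int main() {
          std::deque<THD *> waiting;   // connections waiting for a cached server thread

          THD a{"client-1"}, b{"client-2"}, c{"client-3"};
          for (THD *thd : {&a, &b, &c})
            waiting.push_back(thd);    // each new connection is queued at the tail

          // The old behaviour handed out waiting.back() (LIFO); the fix takes
          // the connection that has waited longest (FIFO).
          while (!waiting.empty()) {
            THD *next = waiting.front();
            waiting.pop_front();
            std::printf("server thread picks up %s\n", next->client.c_str());
          }
          return 0;
        }
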
    • Raghav Kapoor's avatar
      BUG#13864642: DROP/CREATE USER BEHAVING ODDLY · d82962d5
      Raghav Kapoor authored
      BACKGROUND:
      In certain situations DROP USER fails to remove all privileges
      belonging to the user being dropped from the in-memory structures.
      The current workaround is to run DROP USER twice in the scenario
      below, or to run FLUSH PRIVILEGES after the DROP USER.

      ANALYSIS:
      In MySQL, when we grant stored routine privileges to a user,
      they are stored in their respective hash.
      When doing DROP USER, all the stored routine privilege entries
      associated with that user have to be deleted from their
      respective hashes.
      The root cause of this bug is that some entries are not getting
      deleted from the hash.
      The problem is that the code that deletes entries from the hash
      tries to do so while iterating over it, without taking enough
      measures to address the fact that such deletion can reshuffle
      elements in the hash. If the user/administrator creates the same
      user again, the error 'Error 1396 ER_CANNOT_USER' is returned by
      MySQL. This prompts the user to either run FLUSH PRIVILEGES or run
      DROP USER again. This behaviour is not desirable, as it is a
      workaround and does not solve the problem mentioned above.
      
      FIX:
      This bug is fixed by introducing a dynamic array to store the
      pointers to all stored routine privilege objects that either have
      to be deleted or updated. This is done in 3 steps.
      Step 1: Fetch the element from the hash and check whether it is
      to be deleted or updated.
      Step 2: Store the pointer to that privilege object in the dynamic array.
      Step 3: Traverse the dynamic array to perform the appropriate action,
      either delete or update.
      This is a much cleaner way to delete or update the privilege entries
      associated with a user, and it solves the problem mentioned above.
      The code has also been refactored a bit by introducing an enum
      instead of the hard-coded numbers used for the respective dynamic
      arrays and hashes in the handle_grant_struct() function.
      d82962d5
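
      A minimal sketch (std::unordered_map instead of the server's HASH type,
      and keys instead of object pointers) of the two-phase approach described
      above: first scan the hash and collect what must go, then delete outside
      the iteration, so reshuffling during deletion cannot cause entries to be
      skipped.

        #include <cstdio>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct RoutinePriv { std::string user; std::string routine; };

        int main() {
          std::unordered_map<std::string, RoutinePriv> priv_hash = {
              {"p1", {"bob", "p1"}}, {"p2", {"bob", "p2"}}, {"f1", {"alice", "f1"}}};

          const std::string dropped_user = "bob";

          // Step 1 + 2: scan the hash and remember which entries belong to the
          // dropped user (keys here; the real patch stores object pointers).
          std::vector<std::string> to_delete;
          for (const auto &entry : priv_hash)
            if (entry.second.user == dropped_user)
              to_delete.push_back(entry.first);

          // Step 3: perform the deletions after the scan is finished.
          for (const std::string &key : to_delete)
            priv_hash.erase(key);

          std::printf("remaining privilege entries: %zu\n", priv_hash.size());
        }
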
  5. 23 Sep, 2012 1 commit
  6. 22 Sep, 2012 1 commit
    • Rohit Kalhans's avatar
      BUG#14548159: NUMEROUS CASES OF INCORRECT IDENTIFIER · 2d7fa159
      Rohit Kalhans authored
      QUOTING IN REPLICATION 
      
      Problem: Misquoted or unquoted identifiers may lead to incorrect
      statements being logged to the binary log.

      Fix: We use specialized functions to append quoted identifiers to
      the statements generated by the server.
      2d7fa159
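
      A minimal sketch (a standalone helper, not the server's actual quoting
      functions) of the kind of specialized identifier quoting the fix relies
      on: the name is always wrapped in backticks and embedded backticks are
      doubled, so the statement written to the binary log parses back to the
      same identifier.

        #include <cstdio>
        #include <string>

        static std::string quote_identifier(const std::string &name) {
          std::string out = "`";
          for (char c : name) {
            if (c == '`') out += "``";   // escape an embedded quote by doubling it
            else out += c;
          }
          out += "`";
          return out;
        }

        int main() {
          std::string db = "odd`db", table = "t1";
          std::string stmt = "DROP TABLE " + quote_identifier(db) + "." + quote_identifier(table);
          std::printf("%s\n", stmt.c_str());   // DROP TABLE `odd``db`.`t1`
        }
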
  7. 21 Sep, 2012 1 commit
    • Nirbhay Choubey's avatar
      Bug#14645196 MYSQL CLIENT'S USE COMMAND FAILS · 221ba6c4
      Nirbhay Choubey authored
      WHEN DBNAME CONTAINS MULTIPLE QUOTES
      
      MySQL client's USE command might fail if the
      database name contains multiple quotes (backticks).
      
      The reason behind the failure is the method
      that the client uses to remove/escape the quotes
      while parsing the USE command's option (dbname),
      where the option parsing might terminate as soon as
      a matching quote is found.

      Also, C APIs like mysql_select_db() expect a
      normalized dbname. In certain cases, the client
      might fail to normalize the dbname the way the
      server does, and hence mysql_select_db() would fail.

      Fixed by getting the normalized dbname (indirectly)
      from the server: the "USE dbname" is sent directly
      as a query to the server, followed by a "SELECT DATABASE()".
      These steps are only performed if the number of quotes
      in the dbname is greater than 2. Once the normalized
      dbname is received, the original db is restored.
      221ba6c4
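
      A minimal sketch (a hypothetical helper, not the mysql client's real
      option parser) of the decision described above: only when the database
      name given to USE contains more than two quote characters does the client
      stop normalizing locally and fetch the normalized name from the server.

        #include <algorithm>
        #include <cstdio>
        #include <initializer_list>
        #include <string>

        // Hypothetical check: fall back to the server round-trip described in
        // the commit message when the name holds more than two backticks.
        static bool needs_server_normalization(const std::string &dbname) {
          return std::count(dbname.begin(), dbname.end(), '`') > 2;
        }

        int main() {
          for (const char *db : {"`test`", "`weird``name`"}) {
            if (needs_server_normalization(db))
              std::printf("%s: send \"USE %s\" and then \"SELECT DATABASE()\"\n", db, db);
            else
              std::printf("%s: normalize locally\n", db);
          }
          return 0;
        }
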
  8. 20 Sep, 2012 1 commit
  9. 19 Sep, 2012 1 commit
    • Marko Mäkelä's avatar
      Bug#14636528 INNODB CHANGE BUFFERING IS NOT ENTIRELY CRASH-SAFE · b3e0fa54
      Marko Mäkelä authored
      Delete-mark change buffer records when resorting to a pessimistic
      delete from the change buffer B-tree. Skip delete-marked records in
      the change buffer merge and when estimating whether an operation can
      be buffered. Without this fix, we could try to apply the same buffered
      changes multiple times if the server was killed at the right moment.
      
      In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
      delete-marked (already processed) records.
      
      ibuf_delete_rec(): Add a crash point before optimistic delete. If the
      optimistic delete fails, flag the record as processed before
      mtr_commit().
      
      ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
      processed) records.
      
      Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
      btr_cur_set_deleted_flag_for_ibuf() and add a parameter.
      
      rb:1307 approved by Jimmy Yang
      b3e0fa54
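
      A minimal sketch (plain structs, not the change buffer B-tree) of why the
      delete-mark flag makes merging idempotent: a record that has already been
      applied is flagged as processed, so a repeated merge pass, e.g. after a
      crash at an unlucky moment, skips it instead of applying the same buffered
      change twice.

        #include <cstdio>
        #include <vector>

        struct IbufRec {
          int page_no;
          bool delete_marked;   // "already processed" flag, as in the fix
        };

        static void merge_pass(std::vector<IbufRec> &buf) {
          for (IbufRec &rec : buf) {
            if (rec.delete_marked) continue;   // skip already-processed records
            std::printf("apply buffered change to page %d\n", rec.page_no);
            rec.delete_marked = true;          // flag as processed once applied
          }
        }

        int main() {
          std::vector<IbufRec> buf = {{10, false}, {11, false}};
          merge_pass(buf);   // first merge applies both changes
          merge_pass(buf);   // a repeated pass (e.g. after recovery) applies nothing
        }
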
  10. 17 Sep, 2012 4 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to working copy. · 45d56fc0
      Marko Mäkelä authored
      45d56fc0
    • Harin Vadodaria's avatar
      Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST · b03ed386
      Harin Vadodaria authored
                    INC_HOST_ERRORS() IS CALLED.
      
      Issue       : Sequence of calling inc_host_errors()
                    and reset_host_errors() required some
                    changes in order to maintain correct
                    connection error count.
      
      Solution    : Call to reset_host_errors() is shifted
                    to a location after which no calls to
                    inc_host_errors() are made.
      b03ed386
    • Marko Mäkelä's avatar
      Bug#12701488 ASSERT PAGE_ZIP_VALIDATE, UNIV_ZIP_DEBUG · cff9c64b
      Marko Mäkelä authored
      page_zip_validate(), page_zip_validate_low(): Add a parameter for the
      B-tree index.
      
      page_zip_validate_low(): If the page contents do not match, check
      that the record link chains match. Furthermore, if dict_index_t is
      passed, check that the records match. (This reduces coverage a bit: if
      index=NULL, we will ignore differences in record contents, that is,
      the page payload.)
      
      rb:1264 approved by Inaam Rana
      cff9c64b
    • Sujatha Sivakumar's avatar
      Bug#11750014:ASSERTION TRX_DATA->EMPTY() IN BINLOG_CLOSE_CONNECTION · cf642d27
      Sujatha Sivakumar authored
      Problem:
      =======
      
      trx_data->empty() assert happens at `binlog_close_connection'
      
      Analysis:
      ========
      
      The trx_data->empty() function checks that there are no pending
      events and that the transaction cache is empty. It returns "true"
      if no pending events are present and the cache is empty; otherwise
      it returns false. The `binlog_close_connection' call expects the
      above function to return true, and if the return value is false
      the assert is raised.

      This bug was reproducible in a disk-full scenario: an insert
      operation creates a new pending event, and flushing this pending
      event fails because the disk is full. Due to this failure the
      server goes down and invokes `binlog_close_connection' for a clean
      closure. Since the pending event still remains, the assert is
      raised. This assert is raised only with non-transactional
      databases.
      
      
      Fix:
      ===
      
      In a disk-full scenario, when the insertion fails, the
      transaction is rolled back and `binlog_end_trans` is called
      to flush the pending events. But the flush operation fails,
      as the disk is full, and the function simply returns `1'
      without taking any action to delete the pending event.

      This leaves the event in place until the connection is closed.
      A `delete pending' statement has been added to do the required
      clean-up.
      
      sql/log.cc:
        Added "delete pending" statement to clean pending event
      cf642d27
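
      A minimal sketch (simplified types, not sql/log.cc) of the clean-up the
      patch adds: when flushing the pending event fails because the disk is
      full, the event is deleted rather than left behind for the
      trx_data->empty() assertion at connection close.

        #include <cstdio>

        struct RowsEvent { int rows; };

        struct TrxData {
          RowsEvent *pending = nullptr;
          bool empty() const { return pending == nullptr; }
        };

        static int flush_pending(TrxData *trx, bool disk_full) {
          if (disk_full) {
            delete trx->pending;   // the added "delete pending" clean-up
            trx->pending = nullptr;
            return 1;              // the flush still reports failure
          }
          /* ... write the event to the binary log ... */
          delete trx->pending;
          trx->pending = nullptr;
          return 0;
        }

        int main() {
          TrxData trx;
          trx.pending = new RowsEvent{1};
          flush_pending(&trx, /*disk_full=*/true);
          std::printf("trx_data->empty() at close: %s\n", trx.empty() ? "true" : "false");
        }
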
  11. 12 Sep, 2012 2 commits
  12. 11 Sep, 2012 1 commit
  13. 10 Sep, 2012 2 commits
    • Andrei Elkin's avatar
      merge bug14597605 to the main repo. · f876fac1
      Andrei Elkin authored
      f876fac1
    • Andrei Elkin's avatar
      Bug#14597605 Issue with Null-value user on slave · 3896f010
      Andrei Elkin authored
      An "orthographic" typo in User_var::set_deferred() was made in fixes for
      bug@14275000. While editing the signature of the initial patch to remove
      the only argument, the assigned value of the argument remained in the body ... 
      to be successfully compiled (!) thanks to names coincidence:
      the arg to User_var method and its member.
      
      Fixed with correcting the typo.
      3896f010
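
      A minimal sketch (a hypothetical class, not the real User_var_log_event
      code) of how such a typo can compile cleanly: once the parameter is
      removed from the signature, a leftover assignment can still name a member
      with the same identifier, turning the setter into a silent no-op.

        // Hypothetical illustration of a name coincidence between an argument
        // and a member; not the actual server code.
        struct User_var {
          bool deferred;

          // Before: void set_deferred(bool val) { deferred = val; }
          // After the edit the argument is gone, but an assignment like the
          // one below still compiles if it happens to name the member itself,
          // silently turning the call into a self-assignment no-op.
          void set_deferred() { deferred = deferred; }
        };

        int main() {
          User_var v{false};
          v.set_deferred();   // does nothing, which is the bug being fixed
          return v.deferred ? 1 : 0;
        }
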
  14. 07 Sep, 2012 1 commit
  15. 05 Sep, 2012 1 commit
  16. 03 Sep, 2012 1 commit
  17. 31 Aug, 2012 2 commits
    • Annamalai Gurusami's avatar
      Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN · e5817934
      Annamalai Gurusami authored
      THOUGH IT IS NOT.
      
      The following error message is misleading because it claims 
      that the BLOB space is not counted.  
      
      "ERROR 1118 (42000): Row size too large. The maximum row size for 
      the used table type, not counting BLOBs, is 8126. You have to 
      change some columns to TEXT or BLOBs"
      
      When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
      the BLOB prefix is stored inline along with the row.  So
      the above error message is changed as follows, depending on
      the row format used:
      
      For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
      message is as follows:
      
      "ERROR 42000: Row size too large (> 8126). Changing some
      columns to TEXT or BLOB may help. In current row format, 
      BLOB prefix of 0 bytes is stored inline."
      
      For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
      message is as follows:
      
      "ERROR 42000: Row size too large (> 8126). Changing some
      columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
      ROW_FORMAT=COMPRESSED may help. In current row
      format, BLOB prefix of 768 bytes is stored inline."
      
      rb://1252 approved by Marko Makela
      e5817934
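
      A minimal sketch (hard-coded values, not InnoDB's actual row-size check)
      of how the reworded message depends on the row format: the inline BLOB
      prefix, and therefore the advice offered, differs between the older and
      newer formats.

        #include <cstdio>
        #include <string>

        static std::string row_too_big_message(const std::string &row_format) {
          const bool old_format = (row_format == "COMPACT" || row_format == "REDUNDANT");
          const int inline_prefix = old_format ? 768 : 0;   // bytes stored inline
          std::string msg = "Row size too large (> 8126). Changing some columns to "
                            "TEXT or BLOB ";
          if (old_format)
            msg += "or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED ";
          msg += "may help. In current row format, BLOB prefix of " +
                 std::to_string(inline_prefix) + " bytes is stored inline.";
          return msg;
        }

        int main() {
          std::printf("%s\n", row_too_big_message("COMPACT").c_str());
          std::printf("%s\n", row_too_big_message("DYNAMIC").c_str());
        }
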
    • unknown's avatar
      No commit message · 237b124b
      unknown authored
      No commit message
      237b124b
  18. 30 Aug, 2012 2 commits
    • Marko Mäkelä's avatar
      Bug#14554000 CRASH IN PAGE_REC_GET_NTH_CONST(NTH=0) DURING COMPRESSED · d608e1ab
      Marko Mäkelä authored
      PAGE SPLIT
      
      page_rec_get_nth_const(): Map nth==0 to the page infimum.
      
      btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
      should never be positioned on the page infimum.
      
      btr_index_page_validate(): Add test instrumentation for checking the
      return values of page_rec_get_nth_const() during CHECK TABLE, and for
      checking that the page directory slot 0 always contains only one
      record, the predefined page infimum record.
      
      page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
      assertions guarding against accessing the page slot 0.
      
      page_copy_rec_list_start(): Clarify a comment about ret_pos==0.
      
      rb:1248 approved by Jimmy Yang
      d608e1ab
    • Marko Mäkelä's avatar
      Bug#14547952: DEBUG BUILD FAILS ASSERTION IN RECORDS_IN_RANGE() · e8a59559
      Marko Mäkelä authored
      ha_innodb::records_in_range(): Remove a debug assertion
      that prohibits an open range (full table).
      
      The patch by Jorgen Loland only removed the assertion from the
      built-in InnoDB, not from the InnoDB Plugin.
      e8a59559
  19. 28 Aug, 2012 1 commit
  20. 21 Aug, 2012 1 commit
    • Marko Mäkelä's avatar
      Fix regression from Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG · 280057f6
      Marko Mäkelä authored
      HEURISTICS FOR COMPRESSED PAGE SIZE
      
      The fix of Bug#12845774 was supposed to skip known-to-fail
      btr_cur_optimistic_insert() calls. There was only one such call, in
      btr_cur_pessimistic_update(). All other callers of
      btr_cur_pessimistic_insert() would release and reacquire the B-tree
      page latch before attempting the pessimistic insert. This would allow
      other threads to restructure the B-tree, allowing (and requiring) the
      insert to succeed as an optimistic (single-page) operation.
      
      Failure to attempt an optimistic insert before a pessimistic one would
      trigger an attempt to split an empty page.
      
      rb:1234 approved by Sunny Bains
      280057f6
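
      A minimal sketch (a toy page-fit check, not the InnoDB B-tree code) of the
      calling pattern the commit describes: attempt the optimistic, single-page
      insert first, and fall back to the pessimistic path, which may split
      pages, only when the optimistic attempt fails.

        #include <cstdio>

        static bool optimistic_insert(int free_bytes, int rec_size) {
          return rec_size <= free_bytes;   // the record fits on the current page
        }

        static void pessimistic_insert() {
          std::printf("split the page, then insert\n");
        }

        static void insert_record(int free_bytes, int rec_size) {
          if (optimistic_insert(free_bytes, rec_size)) {
            std::printf("optimistic (single-page) insert succeeded\n");
            return;
          }
          // Skipping the optimistic attempt here is what could lead to trying
          // to split an empty page, as described above.
          pessimistic_insert();
        }

        int main() {
          insert_record(/*free_bytes=*/128, /*rec_size=*/64);
          insert_record(/*free_bytes=*/128, /*rec_size=*/4096);
        }
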
  21. 20 Aug, 2012 2 commits
  22. 17 Aug, 2012 2 commits
  23. 16 Aug, 2012 3 commits
    • Marko Mäkelä's avatar
      Bug#12595091 POSSIBLY INVALID ASSERTION IN BTR_CUR_PESSIMISTIC_UPDATE() · 5888f213
      Marko Mäkelä authored
      Facebook got a case where the page compresses really well, so that
      btr_cur_optimistic_update() returns DB_UNDERFLOW, but when a record
      gets updated, the compression rate changes radically, so that
      btr_cur_insert_if_possible() cannot insert in place despite
      reorganizing/recompressing the page, leading to the assertion failure.
      
      rb:1220 approved by Sunny Bains
      5888f213
    • Marko Mäkelä's avatar
      Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG HEURISTICS FOR · 76b52a2c
      Marko Mäkelä authored
      COMPRESSED PAGE SIZE
      
      This was submitted as MySQL Bug 61456 and a patch provided by
      Facebook. This patch follows the same idea, but instead of adding a
      parameter to btr_cur_pessimistic_insert(), we simply remove the
      btr_cur_optimistic_insert() call there and add it to the only caller
      that needs it.
      
      btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().
      
      btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert()
      before invoking btr_cur_pessimistic_insert().
      
      btr_cur_pessimistic_update(): Clarify in a comment why it is not
      necessary to invoke btr_cur_optimistic_insert().
      
      btr_root_raise_and_insert(): Assert that the root page is not empty.
      This could happen if a pessimistic insert (involving a split or merge)
      is performed without first attempting an optimistic (intra-page) insert.
      
      rb:1219 approved by Sunny Bains
      76b52a2c
    • Marko Mäkelä's avatar
      Bug#13523839 ASSERTION FAILURES ON COMPRESSED INNODB TABLES · 12e3c0f9
      Marko Mäkelä authored
      btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may
      fail after reorganizing the page.
      
      btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages,
      because compression may fail after reorganization.
      
      page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the
      ret_pos, which may also be the page infimum.
      
      rb:1221
      12e3c0f9