1. 19 Sep, 2012 1 commit
    • Bug#14636528 INNODB CHANGE BUFFERING IS NOT ENTIRELY CRASH-SAFE · b3e0fa54
      Marko Mäkelä authored
      Delete-mark change buffer records when resorting to a pessimistic
      delete from the change buffer B-tree. Skip delete-marked records in
      the change buffer merge and when estimating whether an operation can
      be buffered. Without this fix, we could try to apply the same buffered
      changes multiple times if the server was killed at the right moment.
      
      In MySQL 5.5 and later: ibuf_get_volume_buffered_count_func(): Ignore
      delete-marked (already processed) records.
      
      ibuf_delete_rec(): Add a crash point before the optimistic delete. If
      the optimistic delete fails, flag the record as processed before
      mtr_commit().
      
      ibuf_merge_or_delete_for_page(): Ignore delete-marked (already
      processed) records.
      
      Backport to 5.1: Rename btr_cur_del_unmark_for_ibuf() to
      btr_cur_set_deleted_flag_for_ibuf() and add a parameter.
      
      rb:1307 approved by Jimmy Yang
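      As an illustration of the idea, here is a minimal, hypothetical
      sketch (not the InnoDB code; all names are invented) of why
      marking a buffered change as processed before a multi-step
      removal keeps crash-and-reapply idempotent:

        // Simplified model: a change buffer whose merge skips
        // entries that are already delete-marked.
        #include <iostream>
        #include <vector>

        struct BufferedChange {
          int  page_no;        // page the buffered change applies to
          bool delete_marked;  // true once the change has been applied
        };

        void merge_changes(std::vector<BufferedChange>& buf) {
          for (BufferedChange& c : buf) {
            if (c.delete_marked) continue;  // already processed, skip
            std::cout << "applying change to page " << c.page_no << '\n';
            c.delete_marked = true;  // flag as processed first; the
            // physical removal from the tree may happen later and
            // non-atomically, so a crash here cannot cause a re-apply.
          }
        }

        int main() {
          std::vector<BufferedChange> buf = {{1, false}, {2, false}};
          merge_changes(buf);  // applies both changes
          merge_changes(buf);  // a repeated pass (e.g. after a crash) is a no-op
        }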
  2. 17 Sep, 2012 4 commits
    • Merge mysql-5.1 to working copy. · 45d56fc0
      Marko Mäkelä authored
    • Bug#11753779: MAX_CONNECT_ERRORS WORKS ONLY WHEN 1ST · b03ed386
      Harin Vadodaria authored
                    INC_HOST_ERRORS() IS CALLED.
      
      Issue       : The sequence of calls to inc_host_errors()
                    and reset_host_errors() required some
                    changes in order to maintain a correct
                    connection error count.
      
      Solution    : The call to reset_host_errors() is moved
                    to a location after which no calls to
                    inc_host_errors() are made.
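      As an illustration of the ordering constraint, a minimal sketch
      with hypothetical names (not the server's connection code):

        #include <iostream>

        static int host_errors = 0;
        void inc_host_errors()   { ++host_errors; }
        void reset_host_errors() { host_errors = 0; }

        // The reset must come only after the last point that can still
        // increment the counter, or failed attempts are under-counted.
        bool handle_connection(bool auth_ok) {
          if (!auth_ok) {        // last place that may bump the counter
            inc_host_errors();
            return false;
          }
          reset_host_errors();   // no further increments can follow here
          return true;
        }

        int main() {
          handle_connection(false);
          handle_connection(false);
          std::cout << "host_errors = " << host_errors << '\n';  // prints 2
        }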
    • Bug#12701488 ASSERT PAGE_ZIP_VALIDATE, UNIV_ZIP_DEBUG · cff9c64b
      Marko Mäkelä authored
      page_zip_validate(), page_zip_validate_low(): Add a parameter for the
      B-tree index.
      
      page_zip_validate_low(): If the page contents do not match, check
      that the record link chains match. Furthermore, if a dict_index_t is
      passed, check that the records match. (This reduces coverage a bit: if
      index=NULL, we ignore differences in record contents, that is,
      the page payload.)
      
      rb:1264 approved by Inaam Rana
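      As a rough illustration of the coverage trade-off, a hypothetical
      sketch (invented types; not the page_zip code): with an index
      description the record contents are compared, without one only the
      link structure is compared.

        #include <cstddef>
        #include <string>
        #include <vector>

        struct Rec { std::string payload; std::size_t next; };  // next = link chain

        bool pages_equivalent(const std::vector<Rec>& a,
                              const std::vector<Rec>& b,
                              bool have_index /* false = structure only */) {
          if (a.size() != b.size()) return false;
          for (std::size_t i = 0; i < a.size(); ++i) {
            if (a[i].next != b[i].next) return false;  // link chains must match
            if (have_index && a[i].payload != b[i].payload)
              return false;                            // full record contents
          }
          return true;  // without an index, payload differences are ignored
        }

        int main() {
          std::vector<Rec> x = {{"k1", 1}, {"k2", 0}};
          std::vector<Rec> y = {{"k1", 1}, {"XX", 0}};
          return (pages_equivalent(x, y, false) &&
                  !pages_equivalent(x, y, true)) ? 0 : 1;
        }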
    • Bug#11750014:ASSERTION TRX_DATA->EMPTY() IN BINLOG_CLOSE_CONNECTION · cf642d27
      Sujatha Sivakumar authored
      Problem:
      =======
      
      trx_data->empty() assert happens at `binlog_close_connection'
      
      Analysis:
      ========
      
      The trx_data->empty() function checks that there are no pending
      events and that the transaction cache is empty. It returns
      "true" if no pending events are present and the cache is empty,
      and "false" otherwise. `binlog_close_connection' expects the
      function to return true; if the return value is false, the
      assert is raised.
      
      This bug was reproducible in a disk-full scenario: an insert
      operation creates a new pending event, and flushing this pending
      event fails. Due to this failure the server goes down and
      invokes `binlog_close_connection' for a clean closure. Since the
      pending event still remains, the assert is raised. The assert is
      raised only when non-transactional tables are involved.
      
      
      Fix:
      ===
      
      In the disk-full scenario, when the insertion fails the
      transaction is rolled back and `binlog_end_trans` is called to
      flush the pending event. But the flush operation fails because
      the disk is full, and the function simply returns `1' without
      taking any action to delete the pending event.
      
      This leaves the event in place until the connection is closed.
      A `delete pending' statement has been added to do the required
      clean-up.
      
      sql/log.cc:
        Added a `delete pending' statement to clean up the pending event
  3. 12 Sep, 2012 2 commits
  4. 10 Sep, 2012 2 commits
    • merge bug14597605 to the main repo. · f876fac1
      Andrei Elkin authored
    • Bug#14597605 Issue with Null-value user on slave · 3896f010
      Andrei Elkin authored
      An "orthographic" typo in User_var::set_deferred() was made in fixes for
      bug@14275000. While editing the signature of the initial patch to remove
      the only argument, the assigned value of the argument remained in the body ... 
      to be successfully compiled (!) thanks to names coincidence:
      the arg to User_var method and its member.
      
      Fixed with correcting the typo.
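      A minimal sketch of the pitfall (hypothetical class, not the actual
      User_var code): when a parameter and a member share a name, removing
      the parameter but leaving the assignment still compiles, silently
      turning it into a self-assignment.

        #include <iostream>

        struct UserVar {
          int val = 0;
          // Before: void set_deferred(int val) { this->val = val; }
          // After the argument was removed, "val = val" still compiles:
          // both sides now refer to the member, so nothing is assigned.
          void set_deferred() { val = val; }
        };

        int main() {
          UserVar v;
          v.set_deferred();
          std::cout << v.val << '\n';  // stays 0: the intended value was lost
        }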
  5. 07 Sep, 2012 1 commit
  6. 05 Sep, 2012 1 commit
  7. 03 Sep, 2012 1 commit
  8. 31 Aug, 2012 2 commits
    • Bug #13453036 ERROR CODE 1118: ROW SIZE TOO LARGE - EVEN · e5817934
      Annamalai Gurusami authored
      THOUGH IT IS NOT.
      
      The following error message is misleading because it claims 
      that the BLOB space is not counted.  
      
      "ERROR 1118 (42000): Row size too large. The maximum row size for 
      the used table type, not counting BLOBs, is 8126. You have to 
      change some columns to TEXT or BLOBs"
      
      When ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT is used,
      the BLOB prefix is stored inline along with the row.  So 
      the above error message is changed as follows, depending on
      the row format used:
      
      For ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC, the error
      message is as follows:
      
      "ERROR 42000: Row size too large (> 8126). Changing some
      columns to TEXT or BLOB may help. In current row format, 
      BLOB prefix of 0 bytes is stored inline."
      
      For ROW_FORMAT=COMPACT or ROW_FORMAT=REDUNDANT, the error
      message is as follows:
      
      "ERROR 42000: Row size too large (> 8126). Changing some
      columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or 
      ROW_FORMAT=COMPRESSED may help. In current row
      format, BLOB prefix of 768 bytes is stored inline."
      
      rb://1252 approved by Marko Makela
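      A hypothetical sketch of how such a message could be selected by
      row format (invented function; the actual server builds the text
      elsewhere):

        #include <iostream>
        #include <string>

        enum class RowFormat { REDUNDANT, COMPACT, DYNAMIC, COMPRESSED };

        std::string row_too_big_message(RowFormat fmt, unsigned long max_size) {
          const bool prefix_inline =
              fmt == RowFormat::REDUNDANT || fmt == RowFormat::COMPACT;
          std::string msg = "Row size too large (> " + std::to_string(max_size) +
                            "). Changing some columns to TEXT or BLOB";
          if (prefix_inline)
            msg += " or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED";
          msg += " may help. In current row format, BLOB prefix of ";
          msg += prefix_inline ? "768" : "0";
          msg += " bytes is stored inline.";
          return msg;
        }

        int main() {
          std::cout << row_too_big_message(RowFormat::COMPACT, 8126) << '\n';
          std::cout << row_too_big_message(RowFormat::DYNAMIC, 8126) << '\n';
        }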
    • No commit message · 237b124b
      unknown authored
  9. 30 Aug, 2012 2 commits
    • Bug#14554000 CRASH IN PAGE_REC_GET_NTH_CONST(NTH=0) DURING COMPRESSED · d608e1ab
      Marko Mäkelä authored
      PAGE SPLIT
      
      page_rec_get_nth_const(): Map nth==0 to the page infimum.
      
      btr_compress(adjust=TRUE): Add a debug assertion for nth>0. The cursor
      should never be positioned on the page infimum.
      
      btr_index_page_validate(): Add test instrumentation for checking the
      return values of page_rec_get_nth_const() during CHECK TABLE, and for
      checking that the page directory slot 0 always contains only one
      record, the predefined page infimum record.
      
      page_cur_delete_rec(), page_delete_rec_list_end(): Add debug
      assertions guarding against accessing the page slot 0.
      
      page_copy_rec_list_start(): Clarify a comment about ret_pos==0.
      
      rb:1248 approved by Jimmy Yang
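      As a rough illustration of the nth==0 mapping, a hypothetical
      sketch (invented page layout, not the InnoDB page code):

        #include <cassert>
        #include <cstddef>
        #include <string>
        #include <vector>

        struct Page {
          std::string infimum = "<infimum>";    // predefined minimum record
          std::vector<std::string> user_recs;   // user records, reached via nth >= 1
        };

        const std::string& rec_get_nth(const Page& p, std::size_t nth) {
          if (nth == 0) return p.infimum;  // map nth==0 to the page infimum
          return p.user_recs.at(nth - 1);
        }

        int main() {
          Page p;
          p.user_recs = {"a", "b"};
          assert(rec_get_nth(p, 0) == "<infimum>");
          assert(rec_get_nth(p, 2) == "b");
        }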
    • Bug#14547952: DEBUG BUILD FAILS ASSERTION IN RECORDS_IN_RANGE() · e8a59559
      Marko Mäkelä authored
      ha_innodb::records_in_range(): Remove a debug assertion
      that prohibits an open range (full table).
      
      The patch by Jorgen Loland only removed the assertion from the
      built-in InnoDB, not from the InnoDB Plugin.
  10. 28 Aug, 2012 1 commit
  11. 21 Aug, 2012 1 commit
    • Fix regression from Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG · 280057f6
      Marko Mäkelä authored
      HEURISTICS FOR COMPRESSED PAGE SIZE
      
      The fix of Bug#12845774 was supposed to skip known-to-fail
      btr_cur_optimistic_insert() calls. There was only one such call, in
      btr_cur_pessimistic_update(). All other callers of
      btr_cur_pessimistic_insert() would release and reacquire the B-tree
      page latch before attempting the pessimistic insert. This would allow
      other threads to restructure the B-tree, allowing (and requiring) the
      insert to succeed as an optimistic (single-page) operation.
      
      Failure to attempt an optimistic insert before a pessimistic one would
      trigger an attempt to split an empty page.
      
      rb:1234 approved by Sunny Bains
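      A minimal sketch of the optimistic-then-pessimistic pattern this
      fix restores (hypothetical stand-in functions, not the B-tree
      code):

        #include <iostream>

        // Stand-ins: the real calls operate on B-tree cursors and pages.
        bool optimistic_insert(int key)  { return key % 2 == 0; }  // pretend: fits in page
        bool pessimistic_insert(int key) { std::cout << "split for " << key << '\n'; return true; }

        bool insert(int key) {
          if (optimistic_insert(key))  // must be attempted first; skipping it
            return true;               // can lead to splitting a page that is
          return pessimistic_insert(key);  // not actually full (or even empty)
        }

        int main() {
          insert(2);  // optimistic path succeeds
          insert(3);  // falls back to the pessimistic path
        }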
  12. 20 Aug, 2012 2 commits
  13. 17 Aug, 2012 2 commits
  14. 16 Aug, 2012 3 commits
    • Bug#12595091 POSSIBLY INVALID ASSERTION IN BTR_CUR_PESSIMISTIC_UPDATE() · 5888f213
      Marko Mäkelä authored
      Facebook got a case where the page compresses really well, so that
      btr_cur_optimistic_update() returns DB_UNDERFLOW, but when a record
      gets updated, the compression rate changes radically, so that
      btr_cur_insert_if_possible() cannot insert in place despite
      reorganizing/recompressing the page, leading to an assertion failure.
      
      rb:1220 approved by Sunny Bains
    • Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG HEURISTICS FOR · 76b52a2c
      Marko Mäkelä authored
      COMPRESSED PAGE SIZE
      
      This was submitted as MySQL Bug 61456, with a patch provided by
      Facebook. This patch follows the same idea, but instead of adding a
      parameter to btr_cur_pessimistic_insert(), we simply remove the
      btr_cur_optimistic_insert() call there and add it to the only caller
      that needs it.
      
      btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().
      
      btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert()
      before invoking btr_cur_pessimistic_insert().
      
      btr_cur_pessimistic_update(): Clarify in a comment why it is not
      necessary to invoke btr_cur_optimistic_insert().
      
      btr_root_raise_and_insert(): Assert that the root page is not empty.
      This could happen if a pessimistic insert (involving a split or merge)
      is performed without first attempting an optimistic (intra-page) insert.
      
      rb:1219 approved by Sunny Bains
    • Bug#13523839 ASSERTION FAILURES ON COMPRESSED INNODB TABLES · 12e3c0f9
      Marko Mäkelä authored
      btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may
      fail after reorganizing the page.
      
      btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages,
      because compression may fail after reorganization.
      
      page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the
      ret_pos, which may also be the page infimum.
      
      rb:1221
  15. 15 Aug, 2012 1 commit
    • Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY · 637bfd43
      Mattias Jonsson authored
      The buffer for the current read row from each partition
      (m_ordered_rec_buffer) used for sorted reads was
      allocated on open and freed when the ha_partition handler
      was closed or destroyed.
      
      For tables with many partitions and big records this could
      take up too much valuable memory.
      
      The solution is to allocate the memory only when it is needed
      and to free it when it is no longer needed, i.e. allocate it in
      index_init and free it in index_end (and, to handle failures,
      also free it on reset, close etc.).
      
      Also, only the needed memory is allocated, according to
      partition pruning.
      
      Manually tested that it does not use as much memory and
      releases it after queries.
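      A minimal sketch of the allocation policy (hypothetical class,
      not ha_partition): buffers are created in index_init only for
      the partitions left after pruning, and released in index_end.

        #include <cstddef>
        #include <map>
        #include <set>
        #include <vector>

        class OrderedScanBuffers {
          std::map<int, std::vector<unsigned char>> bufs_;  // partition id -> row buffer
         public:
          void index_init(const std::set<int>& used_partitions, std::size_t rec_len) {
            for (int p : used_partitions)   // only partitions that survive pruning
              bufs_[p].assign(rec_len, 0);
          }
          void index_end() { bufs_.clear(); }  // give the memory back after the scan
          std::size_t allocated() const {
            std::size_t n = 0;
            for (const auto& kv : bufs_) n += kv.second.size();
            return n;
          }
        };

        int main() {
          OrderedScanBuffers b;
          b.index_init({0, 3}, 8192);  // 2 of, say, 1024 partitions actually used
          b.index_end();               // memory released between queries
          return b.allocated() == 0 ? 0 : 1;
        }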
  16. 14 Aug, 2012 1 commit
    • Bug#13596613:SHOW SLAVE STATUS GIVES WRONG OUTPUT WITH · 9395988c
      Sujatha Sivakumar authored
      MASTER-MASTER AND USING SET USE
      
      Problem:
      =======
      In a master-master set-up, a master can show a wrong
      'SHOW SLAVE STATUS' output.
      
      Requirements:
      - master-master
      - log_slave_updates
      
      This is caused when using SET user-variables and then using
      them to perform writes. From then on, the master that
      performed the insert will have a wrong SHOW SLAVE STATUS,
      and it will never get updated until a write happens on the
      other master. On "Master A" the "exec_master_log_pos" is not
      getting updated.
      
      Analysis:
      ========
      The slave receives a "User_var" event from the master and,
      after applying the event, when the "log_slave_updates"
      option is enabled, tries to write this applied event into
      its own binary log. When writing this event the slave
      should use the "originating server-id", but in the above
      case the server always logs the "user var events" using its
      global server-id. Due to this, in "master-master"
      replication, when the event comes back to the originating
      server the "User_var_event" does not get skipped.
      "User_var_events" are context-based events and are always
      followed by a query event which marks their end of group.
      Due to the above-mentioned problem with "User_var_event"
      logging, the "User_var_event" never gets skipped whereas
      its corresponding "query_event" does get skipped. Hence the
      "User_var" event always waits for the next "query event"
      and the "Exec_master_log_position" does not get updated
      properly.
      
      Fix:
      ===
      The `MYSQL_BIN_LOG::write' function is used to write events
      into the binary log. Within this function a new
      "User_var_log_event" object is created and used to write the
      "User_var" event into the binlog. The "User var" event is
      inherited from "Log_event", which has several overloaded
      constructors. When a "THD" object is present, the
      "Log_event(thd,...)" constructor should be used to initialise
      the object; only in the absence of a valid "THD" object should
      the minimal "Log_event()" constructor be used. In the problem
      described above, the default minimal constructor was always
      used, which is incorrect. This minimal constructor is replaced
      with "Log_event(thd,...)".
      
      sql/log_event.h:
        Replaced the default constructor with another constructor
        that takes a "THD" object as an argument.
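      A minimal sketch of why the constructor choice matters
      (hypothetical classes, not the real Log_event hierarchy): without
      the session context the event silently carries the global server
      id instead of the originating one.

        #include <iostream>

        static const unsigned GLOBAL_SERVER_ID = 2;  // this server's own id

        struct Session { unsigned originating_server_id; };

        struct UserVarEvent {
          unsigned server_id;
          UserVarEvent() : server_id(GLOBAL_SERVER_ID) {}  // minimal constructor
          explicit UserVarEvent(const Session& s)          // constructor with context
              : server_id(s.originating_server_id) {}
        };

        int main() {
          Session s{/*originating_server_id=*/1};  // event originally from server 1
          UserVarEvent wrong;     // what the buggy code path effectively did
          UserVarEvent right(s);  // what the fix does
          std::cout << wrong.server_id << ' ' << right.server_id << '\n';  // "2 1"
        }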
  17. 11 Aug, 2012 1 commit
  18. 09 Aug, 2012 4 commits
  19. 08 Aug, 2012 1 commit
    • BUG#11757312: MYSQLBINLOG DOES NOT ACCEPT INPUT FROM STDIN · 17baf358
      Rohit Kalhans authored
      WHEN STDIN IS A PIPE
                  
      Problem: mysqlbinlog does not accept input from STDIN when
      STDIN is a pipe. This prevents users from passing the input
      file through a shell pipe.
      
      Background: The my_seek() function does not check whether the
      file descriptor passed to it refers to a regular (seekable)
      file. The check_header() function in mysqlbinlog calls
      my_b_seek() unconditionally, and the call fails when the
      underlying file is a PIPE.
      
      Resolution: We resolve this problem by checking whether the
      underlying file is a regular file, using my_fstat(), before
      calling my_b_seek(). If the underlying file is not seekable
      we skip the call to my_b_seek() in check_header().
      
      client/mysqlbinlog.cc:
        Added a check to avoid the my_b_seek() call if the
        underlying file is a PIPE.
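      A minimal sketch of the guard, using plain POSIX calls rather
      than the my_fstat()/my_b_seek() wrappers (hypothetical helper
      name):

        #include <cstdio>
        #include <sys/stat.h>
        #include <unistd.h>

        // Seek only if the descriptor refers to a regular (seekable) file;
        // for a pipe the caller must keep reading sequentially instead.
        bool seek_if_regular(int fd, off_t pos) {
          struct stat st;
          if (fstat(fd, &st) != 0) return false;
          if (!S_ISREG(st.st_mode)) return false;  // pipe, socket, tty, ...
          return lseek(fd, pos, SEEK_SET) == pos;
        }

        int main() {
          // With "cat log.bin | prog", stdin is a pipe and this prints "not seekable".
          std::printf("%s\n", seek_if_regular(STDIN_FILENO, 0) ? "seeked"
                                                               : "not seekable");
        }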
  20. 07 Aug, 2012 2 commits
    • Bug#13928675 MYSQL CLIENT COPYRIGHT NOTICE MUST · 2b0142ad
      Nirbhay Choubey authored
                   SHOW 2012 INSTEAD OF 2011
      
      * Added a new macro to hold the current year:
        COPYRIGHT_NOTICE_CURRENT_YEAR
      * Modified the ORACLE_WELCOME_COPYRIGHT_NOTICE macro
        to take the initial year as a parameter and pick the
        current year from the above-mentioned macro.
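      A hypothetical sketch of the macro scheme (the macro names are
      from this change, but the notice wording and years shown here are
      illustrative):

        #include <cstdio>

        #define COPYRIGHT_NOTICE_CURRENT_YEAR "2012"
        #define ORACLE_WELCOME_COPYRIGHT_NOTICE(first_year) \
          "Copyright (c) " first_year ", " COPYRIGHT_NOTICE_CURRENT_YEAR \
          ", Oracle and/or its affiliates. All rights reserved."

        int main() {
          // Only COPYRIGHT_NOTICE_CURRENT_YEAR needs updating each year.
          std::puts(ORACLE_WELCOME_COPYRIGHT_NOTICE("2000"));
        }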
    • Bug#14068244: INCOMPATIBILITY BETWEEN LIBMYSQLCLIENT/LIBMYSQLCLIENT_R · 279eb98b
      Harin Vadodaria authored
                    AND LIBCRYPTO
      
      Problem: libmysqlclient_r exports symbols from the yaSSL library
               which conflict with OpenSSL symbols. The issue concerns
               symbols that are used by the cURL library and are defined
               in TaoCrypt, which provides only dummy implementations of
               these functions. Because of this, a program that uses
               libcurl library functions and is compiled against both
               libmysqlclient_r and libcurl hits a segmentation fault
               during execution.
      
      Solution: MySQL should not be exporting such symbols. These
                functions are not used by MySQL code at all, so we avoid
                compiling them in the first place.
  21. 05 Aug, 2012 1 commit
  22. 31 Jul, 2012 1 commit
  23. 27 Jul, 2012 2 commits
  24. 26 Jul, 2012 1 commit
    • BUG#13868860 - LIMIT '5' IS EXECUTED WITHOUT ERROR WHEN '5' · 5887cff7
      Praveenkumar Hulakund authored
                     IS PLACE HOLDER AND USE SERVER-SIDE 
      
      Analysis:
      LIMIT accepts only nonnegative integer constant values.
      
      http://dev.mysql.com/doc/refman/5.6/en/select.html
      
      So parsing the string value '5' for LIMIT in a SELECT fails.
      
      Within a prepared statement, however, LIMIT parameters can be
      specified using '?' markers, and the value for the parameter
      can be supplied while executing the prepared statement.
      
      Passing string, float or double values for LIMIT works well
      from the CLI, because while setting the value for the
      parameters from the variable list (added using SET), a value
      intended for a LIMIT parameter is converted to an integer
      value.
      
      But when a prepared statement is executed from other
      interfaces such as Connector/J or C applications, the
      parameter values are sent to the server with the execute
      command. Each item carries a value and a data TYPE, and while
      setting parameter values from this list, every parameter is
      set with the data type as passed. The logic to convert the
      value to an integer when it is for a LIMIT parameter was
      missing here. Because of this, the string '5' was set for
      LIMIT, and the same was logged into the binlog file too.
      
      Fix:
      Executing a prepared statement with a LIMIT parameter worked
      fine from the CLI, as the value set for the parameter is
      converted to an integer, but it failed from other interfaces
      such as Connector/J or C applications where this conversion
      was missing.
      
      So, as a fix, a check was added while setting the value for
      the parameters: if the parameter is a LIMIT value, it is
      converted to an integer value.
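      A minimal sketch of the added conversion (hypothetical types, not
      the server's Item/parameter classes): whatever typed value a
      client sends for a LIMIT parameter is coerced to an integer.

        #include <iostream>
        #include <string>
        #include <variant>

        using ParamValue = std::variant<long long, double, std::string>;

        long long to_limit_value(const ParamValue& v) {
          if (const auto* i = std::get_if<long long>(&v)) return *i;
          if (const auto* d = std::get_if<double>(&v))    return static_cast<long long>(*d);
          // The missing piece in the bug: string parameters sent by connectors
          // were not converted, so "5" reached LIMIT (and the binlog) as a string.
          return std::stoll(std::get<std::string>(v));
        }

        int main() {
          std::cout << to_limit_value(ParamValue{std::string("5")}) << '\n';  // 5
          std::cout << to_limit_value(ParamValue{2.0}) << '\n';               // 2
        }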