1. 05 Feb, 2013 4 commits
  2. 04 Feb, 2013 2 commits
  3. 01 Feb, 2013 3 commits
  4. 31 Jan, 2013 6 commits
    • Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY · 4743ba00
      Gleb Shchepa authored
      Manual up-merge from 5.1 to 5.5.
    • Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY · 2993c299
      Gleb Shchepa authored
      Some queries with nested "SELECT ... FROM DUAL" subqueries
      failed with an assertion on debug builds.
      Non-debug builds were not affected.
      
      There were a few different issues with similar assertion
      failures on different queries:
      
      1. The first problem was related to the incomplete propagation
      of the "non-constant" item status from underlying subquery
      items to the outer item tree: in some cases non-constants were
      interpreted as constants and evaluated at the preparation stage
      (val_int() calls within fix_fields() etc).
      
      The problem was that the default implementation of
      Item_ref::const_item(), inherited from the Item parent class,
      didn't take into account the "const_item" status of the
      referenced item tree -- it used the insufficient
      "used_tables() == 0" check instead. This worked in most cases
      since our "non-constant" functions like RAND() and SLEEP() set
      the RAND_TABLE_BIT in the used table map, so they aren't
      treated as constant from Item_ref's "point of view". However, the
      "SELECT ... FROM DUAL" subquery may have an empty map of used
      tables, but at the same time subqueries are never "constant" at
      the context analysis stage (preparation, view creation etc).
      So, the non-constantness of such subqueries was missed.
      
      Fix: the Item_ref::const_item() function has been overridden to
      take into account both the (*ref)->const_item() status and the
      tricky Item_ref::used_tables() return values, since the
      (*ref)->const_item() call alone is not enough there.
      
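      As an illustration only -- a toy model with made-up types, not the
      server's Item hierarchy -- the constness propagation described above
      looks roughly like this:

        // Toy model: a reference node must ask the referenced node itself,
        // because an empty "used tables" map does not imply constness.
        #include <cstdio>

        struct Node {
          virtual ~Node() = default;
          virtual unsigned long long used_tables() const { return 0; }
          // Default rule from the description: "no used tables" == constant.
          virtual bool const_item() const { return used_tables() == 0; }
        };

        struct Subquery : Node {
          // Empty table map, yet never constant during context analysis.
          bool const_item() const override { return false; }
        };

        struct Ref : Node {
          explicit Ref(Node *r) : ref(r) {}
          Node *ref;
          unsigned long long used_tables() const override {
            return ref->used_tables();
          }
          // The fix, in toy form: combine the referenced node's own answer
          // with the table-map check instead of using the map alone.
          bool const_item() const override {
            return ref->const_item() && used_tables() == 0;
          }
        };

        int main() {
          Subquery sq;
          Ref r(&sq);
          std::printf("const? %d\n", r.const_item());  // prints 0
          return 0;
        }
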
      2. In some cases, instead of calling const_item(), we check the
      value of the Item::with_subselect field to recognize items
      with nested subqueries. However, the Item_ref class didn't
      propagate this value from the referenced item tree.
      
      Fix: Item::has_subquery() and Item_ref::has_subquery()
      functions have been backported from 5.6. All direct
      references to the with_subselect fields of nested items have
      been replaced with the has_subquery() function call.
      
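      A minimal sketch of that propagation, again with toy types rather
      than the actual 5.6 classes:

        // Toy sketch: a wrapper item forwards the "contains a subquery"
        // flag of the item it references instead of reporting its own.
        #include <cstdio>

        struct Item {
          bool with_subselect = false;
          virtual ~Item() = default;
          virtual bool has_subquery() const { return with_subselect; }
        };

        struct ItemRef : Item {
          explicit ItemRef(const Item *r) : ref(r) {}
          const Item *ref;
          bool has_subquery() const override { return ref->has_subquery(); }
        };

        int main() {
          Item inner;                   // pretend this one wraps a subquery
          inner.with_subselect = true;
          ItemRef wrapper(&inner);
          std::printf("%d\n", wrapper.has_subquery());  // prints 1
          return 0;
        }
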
      3. The Item_func_regex class didn't propagate with_subselect
      either, since it overrides the Item_func::fix_fields()
      function with an insufficient fix_fields() implementation.
      
      Fix: the Item_func_regex::fix_fields() function has been
      modified to gather "constant" statuses from inner items.
      
      4. The Item_func_isnull::update_used_tables() function has
      a special branch for the underlying item where the maybe_null
      value is false: in this case it marks the Item_func_isnull
      as a "const_item" and sets the cached_value to false.
      However, Item_func_isnull::val_int() was not in sync with
      update_used_tables(): it took into account neither
      const_item_cache nor cached_value for the case of the
      "args[0]->maybe_null == false" optimization.
      Since such an Item_func_isnull has "const_item() == true",
      it's ok to call Item_func_isnull::val_int() etc from outer
      items at the preparation stage. In this case the server tried to
      call Item_func_isnull::args[0]->isnull(), and if the args[0]
      item contained a nested not-nullable subquery, it failed
      with an assertion.
      
      Fix: take the value of Item_func_isnull::const_item_cache into
      account in the val_int() function.
      
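      A toy sketch of the fix (made-up types, not the server code): val_int()
      honors the value cached by update_used_tables() instead of evaluating
      an argument that may be a subquery:

        // Toy sketch: once "the argument can never be NULL" has been cached,
        // val_int() must not touch the argument at all.
        #include <cstdio>

        struct Arg {
          bool maybe_null = false;
          bool is_null() const { return false; }  // imagine: runs a subquery
        };

        struct IsNullFunc {
          const Arg *arg;
          bool const_item_cache = false;
          long long cached_value = 0;

          explicit IsNullFunc(const Arg *a) : arg(a) {}

          void update_used_tables() {
            if (!arg->maybe_null) {   // the special branch described above
              const_item_cache = true;
              cached_value = 0;       // IS NULL is known to be false
            }
          }

          long long val_int() const {
            if (const_item_cache)            // the fix: honor the cache
              return cached_value;
            return arg->is_null() ? 1 : 0;   // evaluate only when needed
          }
        };

        int main() {
          Arg a;
          IsNullFunc f(&a);
          f.update_used_tables();
          std::printf("%lld\n", f.val_int());  // prints 0 without evaluating
          return 0;
        }
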
      5. The auxiliary Item_is_not_null_test class has a similar
      optimization in the update_used_tables() function as the
      Item_func_isnull class has, and the same issue in the val_int()
      function.
      In addition to that, Item_is_not_null_test::update_used_tables()
      doesn't update the const_item_cache value, so the "maybe_null"
      optimization is useless there. Thus, we missed some optimizations
      of cases like these (before and after the fix):
        <  <is_not_null_test>(a),
        ---
        >  <cache>(<is_not_null_test>(a)),
      or
        < having (<is_not_null_test>(a) and <is_not_null_test>(a))
        ---
        > having 1
      etc.
      
      Fix: update Item_is_not_null_test::const_item_cache in
      update_used_tables() and take it into account in val_int().
    • merge to mysql-5.5 from mysql-5.1 · f57cba0c
      Yasufumi Kinoshita authored
    • Bug #16220051 : INNODB_BUG12400341 FAILS ON VALGRIND WITH TOO MANY ACTIVE CONCURRENT TRANSACTION · 5656b9dd
      Yasufumi Kinoshita authored
      innodb_bug12400341.test is disabled for the Valgrind daily test.
      It might be affected by undo slots left over from the previous test,
      because of the slower execution under Valgrind.
    • Bug#14096619: UNABLE TO RESTORE DATABASE DUMP · 2b4a942a
      Chaithra Gopalareddy authored
      Backport of fix for Bug#13581962
    • Bug#14096619: UNABLE TO RESTORE DATABASE DUMP · 082ac987
      Chaithra Gopalareddy authored
      Backport of Bug#13581962
  5. 30 Jan, 2013 6 commits
    • Bug#14521864: MYSQL 5.1 TO 5.5 BUGS PARTITIONING · f693203e
      Mattias Jonsson authored
      Due to an internal change in the server code between 5.1 and 5.5
      (wl#2649), the hash function used in KEY partitioning changed
      for numeric and date/time columns (from binary hash calculation
      to character-based hash calculation).

      Also, enum/set columns changed from latin1 ci based hash calculation
      to binary hash between 5.1 and 5.5 (bug#11759782).
      
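      Purely as an illustration -- this is not the server's actual
      partitioning hash, just a toy showing why switching from binary to
      character-based hashing can move rows between KEY partitions:

        // Toy: hash the raw bytes of an int vs. its character form; the two
        // results generally map a value to different partitions.
        #include <cstddef>
        #include <cstdio>
        #include <string>

        static unsigned long long fnv1a(const unsigned char *p, std::size_t len) {
          unsigned long long h = 1469598103934665603ULL;
          for (std::size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
          return h;
        }

        int main() {
          int value = 12345;
          unsigned num_parts = 4;

          // "binary" style: hash the raw bytes of the numeric value.
          unsigned long long h_bin =
              fnv1a(reinterpret_cast<unsigned char *>(&value), sizeof(value));

          // "character" style: hash the string form of the same value.
          std::string s = std::to_string(value);
          unsigned long long h_chr =
              fnv1a(reinterpret_cast<const unsigned char *>(s.data()), s.size());

          std::printf("binary hash    -> partition %llu\n", h_bin % num_parts);
          std::printf("character hash -> partition %llu\n", h_chr % num_parts);
          return 0;
        }
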
      These changes make KEY [sub]partitioned tables on any of
      the affected column types incompatible with 5.5 and above,
      since the calculation of the partition id differs.
      
      Also since InnoDB asserts that a deleted row was previously
      read (positioned), the server asserts on delete of a row that
      is in the wrong partition.
      
      The solution for this situation is:
      
      1) The partitioning engine will check that a delete/update goes to the
      partition the row was read from and give an error otherwise, listing
      the row's partitioning fields. This will avoid asserts in InnoDB and
      also alert the user that there is a misplaced row. A detailed error
      message will be given, including an entry in the error log with the
      table name, partition and row content (PK if it exists, otherwise
      all partitioning columns).
      
      
      2) A new optional syntax for KEY () partitioning in 5.5 is allowed:
      [SUB]PARTITION BY KEY [ALGORITHM = N] (list_of_cols)
      where N = 1 uses the same hashing as 5.1 (numeric/date/time fields use
      binary hashing, ENUM/SET uses charset hashing) and N = 2 uses the same
      hashing as 5.5 (numeric/date/time fields use charset hashing,
      ENUM/SET uses binary hashing). If not set on CREATE/ALTER it
      defaults to 2.
      
      This new syntax should probably be ignored by NDB.
      
      
      3) Since there is a demand for avoiding scanning through the full
      table, during upgrade the ALTER TABLE t PARTITION BY ... command is
      considered a no-op (only .frm change) if everything except ALGORITHM
      is the same and ALGORITHM was not set before, which allows manually
      upgrading such a table with something like:
      ALTER TABLE t PARTITION BY KEY ALGORITHM = 1 () or
      ALTER TABLE t PARTITION BY KEY ALGORITHM = 2 ()
      
      
      4) Enhanced partitioning with CHECK/REPAIR to also check for/repair
      misplaced rows. (Also works for ALTER TABLE t CHECK/REPAIR PARTITION)
      
      CHECK FOR UPGRADE:
      If the .frm version is < 5.5.3
      and uses KEY [sub]partitioning
      and an affected column type
      then it will fail with a message:
      KEY () partitioning changed, please run:
      ALTER TABLE `test`.`t1`  PARTITION BY KEY ALGORITHM = 1 (a)
      PARTITIONS 12
      (i.e. current partitioning clause, with the addition of
      ALGORITHM = 1)
      
      CHECK without FOR UPGRADE:
      if MEDIUM (default) or EXTENDED options are given:
      Scan all rows and verify that each row is in the correct partition.
      Fail for the first misplaced row.
      
      REPAIR:
      if default or EXTENDED (i.e. not QUICK/USE_FRM):
      Scan all rows and move every misplaced row into its correct
      partition.
      
      
      5) Updated mysqlcheck (called by mysql_upgrade) to handle the
      new output from CHECK FOR UPGRADE, to run the ALTER statement
      instead of running REPAIR.
      
      This will allow mysql_upgrade (or CHECK TABLE t FOR UPGRADE) to upgrade
      a KEY [sub]partitioned table that has any affected field type
      and a .frm version < 5.5.3 to ALGORITHM = 1 without rebuild.
      
      
      Also notice that if the .frm has a version >= 5.5.3 and ALGORITHM
      is not set, it is not possible to know whether it consists of rows from
      5.1 or 5.5! In these cases I suggest that the user does:
      (optional)
      LOCK TABLE t WRITE;
      SHOW CREATE TABLE t;
      (verify that it has no ALGORITHM = N, and to be safe, I would suggest
      backing up the .frm file, to be used if one needs to change to another
      ALGORITHM = N, without needing to rebuild/repair)
      ALTER TABLE t <old partitioning clause, but with ALGORITHM = N>;
      which should set the ALGORITHM to N (if the table has rows from
      5.1 I would suggest N = 1, otherwise N = 2)
      CHECK TABLE t;
      (here one could use the backed up .frm instead and change to a new N
      and run CHECK again and see if it passes)
      and if there are misplaced rows:
      REPAIR TABLE t;
      (optional)
      UNLOCK TABLES;
    • No commit message · 08b0d549
      mysql-builder@oracle.com authored
    • No commit message · d37076cd
      mysql-builder@oracle.com authored
    • Bug#14756795 SELECT FROM NEW INNODB I_S TABLES CRASHES SERVER WITH --SKIP-INNODB · dd5beeac
      Aditya A authored
      
      Description
      -----------
      
      If the server is started with skip-innodb or InnoDB otherwise fails to
      start, any one of these queries will crash the server:
      
      For 5.5:
      SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE_LRU;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
      
      In 5.6+, the following queries will also crash the server:
      
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_COLUMNS;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FIELDS;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_FOREIGN_COLS;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESTATS;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES;
      SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;
      
      FIX
      ----
      
      When InnoDB is not active we must prevent it from processing
      these tables, so we return a warning saying that InnoDB is not
      active.
      
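      A toy sketch of the guard (made-up types and names, not the actual
      handler or information-schema plugin code):

        // Toy: an I_S "fill" callback returns an empty result plus a warning
        // instead of touching engine state that was never initialized.
        #include <cstdio>
        #include <string>
        #include <vector>

        struct Session {
          std::vector<std::string> warnings;
          void push_warning(const std::string &msg) { warnings.push_back(msg); }
        };

        static bool innodb_started = false;  // stand-in for "is InnoDB active"

        static int fill_innodb_is_table(Session &session,
                                        std::vector<std::string> &rows) {
          if (!innodb_started) {
            session.push_warning("InnoDB is not active");
            return 0;  // empty result instead of a crash
          }
          rows.push_back("...rows built from engine internals...");
          return 0;
        }

        int main() {
          Session s;
          std::vector<std::string> rows;
          fill_innodb_is_table(s, rows);
          std::printf("rows=%zu warnings=%zu\n", rows.size(), s.warnings.size());
          return 0;
        }
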
      Approved by marko (http://rb.no.oracle.com/rb/r/1891)
    • - BUG#1608883: KILLING A QUERY INSIDE INNODB CAUSES IT TO EVENTUALLY CRASH WITH AN ASSERTION · 7485b6c6
      Krunal Bauskar krunal.bauskar@oracle.com authored
      Null merge from mysql-5.1
    • - BUG#1608883: KILLING A QUERY INSIDE INNODB CAUSES IT TO EVENTUALLY CRASH WITH AN ASSERTION · 1853fd95
      Krunal Bauskar krunal.bauskar@oracle.com authored
      
        Correcting the build failure that was caused by changes
        checked in to the revision mentioned below.
        (Changes: DEBUG_SYNC_C should be disabled for innodb_plugin under
         the Windows environment. Note: only for innodb_plugin.)
      
        revno: 3915
        revision-id: krunal.bauskar@oracle.com-20130114051951-ang92lkirop37431
        parent: nisha.gopalakrishnan@oracle.com-20130112054337-gk5pmzf30d2imuw7
        committer: Krunal Bauskar krunal.bauskar@oracle.com
        branch nick: mysql-5.1
        timestamp: Mon 2013-01-14 10:49:51 +0530
      
  6. 29 Jan, 2013 2 commits
  7. 28 Jan, 2013 5 commits
  8. 26 Jan, 2013 1 commit
    • Bug#16056813-MEMORY LEAK ON FILTERED SLAVE · 66177b9c
      Venkatesh Duggirala authored
      Due to not resetting a member (last_added) of the
      Deferred_log_events class inside its cleanup function
      (Deferred_log_events::rewind), there is a memory
      leak on filtered slaves.
      
      Fix:
      Resetting last_added to NULL in rewind() function.
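
      A toy sketch of the pattern (not the replication code itself): the
      cleanup routine must clear the cached "last added" pointer along with
      the container it belongs to:

        #include <vector>

        struct Event { int data; };

        class DeferredEvents {
         public:
          void add(Event *ev) { array_.push_back(ev); last_added_ = ev; }
          // The fix, in toy form: reset last_added_ together with the array,
          // so no stale reference survives the cleanup.
          void rewind() {
            for (Event *ev : array_) delete ev;
            array_.clear();
            last_added_ = nullptr;
          }
          ~DeferredEvents() { rewind(); }
         private:
          std::vector<Event *> array_;
          Event *last_added_ = nullptr;
        };

        int main() {
          DeferredEvents q;
          q.add(new Event{1});
          q.rewind();  // frees the event and clears last_added_
          return 0;
        }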
  9. 24 Jan, 2013 5 commits
    • No commit message · 8818be2a
      mysql-builder@oracle.com authored
    • BUG#11908153 CRASH AND/OR VALGRIND ERRORS IN FIELD_BLOB::GET_KEY_IMAGE · 5674d559
      Venkata Sidagam authored
      Backporting bug patch from 5.5 to 5.1.
      This fix is applicable to BUG#14362617 as well
    • Bug #11752803 SERVER CRASHES IF MAX_CONNECTIONS DECREASED BELOW CERTAIN LEVEL · 3ce55920
      Venkata Sidagam authored
      Merging from 5.1 to 5.5
    • Bug #11752803 SERVER CRASHES IF MAX_CONNECTIONS DECREASED BELOW CERTAIN LEVEL · d0181929
      Venkata Sidagam authored
            
      Problem description: mysqld crashes when we update the max_connections
      variable to a lesser value than the number of currently open connections.

      Analysis: The "alarm_queue.max_elements" size will be decided at
      server start time and will get modified if we change the max_connections
      value. In the current scenario the value of "alarm_queue.max_elements"
      is decremented when max_connections is set to 2. When updating the
      "alarm_queue.max_elements" value we are not updating the "max_used_alarms"
      value. Hence, instead of getting the warning "thr_alarm queue is full",
      the server ends up asserting at the time of inserting new
      elements into the queue.
            
      Fix: the fix is to dynamically increase the size of the alarm_queue.
      In order to do that, queue_insert_safe() should be used instead of
      queue_insert().
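
      A toy sketch of the idea (not the mysys queue API): a plain insert
      asserts when the queue is full, while a "safe" insert grows the
      storage first:

        #include <cassert>
        #include <cstddef>
        #include <vector>

        class AlarmQueue {
         public:
          explicit AlarmQueue(std::size_t max_elements)
              : max_elements_(max_elements) {}

          // Mirrors the failure mode: inserting past the capacity asserts.
          void insert(int alarm) {
            assert(data_.size() < max_elements_ && "thr_alarm queue is full");
            data_.push_back(alarm);
          }

          // Mirrors the fix: grow the capacity on demand instead of asserting.
          void insert_safe(int alarm) {
            if (data_.size() >= max_elements_) max_elements_ *= 2;
            data_.push_back(alarm);
          }

         private:
          std::size_t max_elements_;
          std::vector<int> data_;
        };

        int main() {
          AlarmQueue q(1);
          q.insert_safe(1);
          q.insert_safe(2);  // would have asserted with insert()
          return 0;
        }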
    • BUG#14798572: REMOVE UNUSED VARIABLE BINLOG_CAN_BE_CORRUPTED FROM MYSQL_BINLOG_SEND · c156de4e
      Venkatesh Duggirala authored
      
      As part of Bug #11747416 (A DISK FULL MAKES BINARY LOG CORRUPT),
      reading of the variable "binlog_can_be_corrupted" was removed.
      In the existing code the value of this variable is only set,
      never read, and it also causes compiler warnings.
      So the variable is completely redundant and should be removed.
  10. 23 Jan, 2013 3 commits
    • Merge mysql-5.1 to mysql-5.5. · b6095e5d
      Yasufumi Kinoshita authored
    • Bug #16089381 : POSSIBLE NUMBER UNDERFLOW AROUND CALLING PAGE_ZIP_EMPTY_SIZE() · 6083ae52
      Yasufumi Kinoshita authored
      Some callers of page_zip_empty_size() ignored the possibility of it returning 0, which could cause an underflow.
      
      rb#1837 approved by Marko
    • Bug #11827369: ASSERTION FAILED: !THD->LEX->CONTEXT_ANALYSIS_ONLY · e53345f0
      Gleb Shchepa authored
      Some queries with nested "SELECT ... FROM DUAL" subqueries
      failed with an assertion on debug builds.
      Non-debug builds were not affected.
      
      There were a few different issues with similar assertion
      failures on different queries:
      
      1. The first problem was related to the incomplete propagation
      of the "non-constant" item status from underlying subquery
      items to the outer item tree: in some cases non-constants were
      interpreted as constants and evaluated at the preparation stage
      (val_int() calls within fix_fields() etc).
      
      The problem was that the default implementation of
      Item_ref::const_item(), inherited from the Item parent class,
      didn't take into account the "const_item" status of the
      referenced item tree -- it used the insufficient
      "used_tables() == 0" check instead. This worked in most cases
      since our "non-constant" functions like RAND() and SLEEP() set
      the RAND_TABLE_BIT in the used table map, so they aren't
      treated as constant from Item_ref's "point of view". However, the
      "SELECT ... FROM DUAL" subquery may have an empty map of used
      tables, but at the same time subqueries are never "constant" at
      the context analysis stage (preparation, view creation etc).
      So, the non-constantness of such subqueries was missed.
      
      Fix: the Item_ref::const_item() function has been overridden to
      take into account both the (*ref)->const_item() status and the
      tricky Item_ref::used_tables() return values, since the
      (*ref)->const_item() call alone is not enough there.
      
      2. In some cases, instead of calling const_item(), we check the
      value of the Item::with_subselect field to recognize items
      with nested subqueries. However, the Item_ref class didn't
      propagate this value from the referenced item tree.
      
      Fix: Item::has_subquery() and Item_ref::has_subquery()
      functions have been backported from 5.6. All direct
      references to the with_subselect fields of nested items have
      been replaced with the has_subquery() function call.
      
      3. The Item_func_regex class didn't propagate with_subselect
      either, since it overrides the Item_func::fix_fields()
      function with an insufficient fix_fields() implementation.
      
      Fix: the Item_func_regex::fix_fields() function has been
      modified to gather "constant" statuses from inner items.
      
      4. The Item_func_isnull::update_used_tables() function has
      a special branch for the underlying item where the maybe_null
      value is false: in this case it marks the Item_func_isnull
      as a "const_item" and sets the cached_value to false.
      However, Item_func_isnull::val_int() was not in sync with
      update_used_tables(): it took into account neither
      const_item_cache nor cached_value for the case of the
      "args[0]->maybe_null == false" optimization.
      Since such an Item_func_isnull has "const_item() == true",
      it's ok to call Item_func_isnull::val_int() etc from outer
      items at the preparation stage. In this case the server tried to
      call Item_func_isnull::args[0]->isnull(), and if the args[0]
      item contained a nested not-nullable subquery, it failed
      with an assertion.
      
      Fix: take the value of Item_func_isnull::const_item_cache into
      account in the val_int() function.
      
      5. The auxiliary Item_is_not_null_test class has a similar
      optimization in the update_used_tables() function as the
      Item_func_isnull class has, and the same issue in the val_int()
      function.
      In addition to that, Item_is_not_null_test::update_used_tables()
      doesn't update the const_item_cache value, so the "maybe_null"
      optimization is useless there. Thus, we missed some optimizations
      of cases like these (before and after the fix):
        <  <is_not_null_test>(a),
        ---
        >  <cache>(<is_not_null_test>(a)),
      or
        < having (<is_not_null_test>(a) and <is_not_null_test>(a))
        ---
        > having 1
      etc.
      
      Fix: update Item_is_not_null_test::const_item_cache in
      update_used_tables() and take it into account in val_int().
  11. 21 Jan, 2013 2 commits
    • Merge mysql-5.1 to mysql-5.5. · fa4c629c
      Marko Mäkelä authored
    • Bug#16067973 DROP TABLE SLOW WHEN IT DECOMPRESS COMPRESSED-ONLY PAGES · f130ccc1
      Marko Mäkelä authored
      buf_page_get_gen(): Do not attempt to decompress a compressed-only
      page when mode == BUF_PEEK_IF_IN_POOL. This mode is only being used by
      btr_search_drop_page_hash_when_freed(). There cannot be any adaptive
      hash index pointing to a page that does not exist in uncompressed
      format in the buffer pool.
      
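      A toy sketch of the behaviour (made-up types, not the actual buffer
      pool code): a lookup done only to "peek" must not pay for
      decompressing a compressed-only page; it simply reports the page as
      absent:

        #include <cstdio>
        #include <map>
        #include <string>

        enum FetchMode { GET_TOY, PEEK_IF_IN_POOL_TOY };

        struct Page { bool has_uncompressed; std::string zdata; };

        // Stand-in for the expensive decompression step.
        static void decompress(Page &p) { p.has_uncompressed = true; }

        static Page *get_page(std::map<int, Page> &pool, int id, FetchMode mode) {
          auto it = pool.find(id);
          if (it == pool.end()) return nullptr;
          if (!it->second.has_uncompressed) {
            if (mode == PEEK_IF_IN_POOL_TOY) return nullptr;  // the fix: skip it
            decompress(it->second);  // normal reads still pay the cost
          }
          return &it->second;
        }

        int main() {
          std::map<int, Page> pool{{1, {false, "zzz"}}, {2, {true, ""}}};
          std::printf("peek page 1 -> %s\n",
                      get_page(pool, 1, PEEK_IF_IN_POOL_TOY) ? "found" : "skipped");
          std::printf("get  page 1 -> %s\n",
                      get_page(pool, 1, GET_TOY) ? "found" : "skipped");
          return 0;
        }
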
      innodb_buffer_pool_evict_update(): New function for debug builds, to handle
      SET GLOBAL innodb_buffer_pool_evict='uncompressed'
      by evicting all uncompressed page frames of compressed tablespaces
      from the buffer pool.
      
      rb#1873 approved by Jimmy Yang
  12. 19 Jan, 2013 1 commit