1. 06 Oct, 2010 1 commit
    • Bug #52131: SET and ENUM stored endian-dependent in binary log · 5868ae75
      Mats Kindahl authored
      Replicating SET and ENUM fields from a big-endian to a little-
      endian machine (or vice versa) fails under row-based
      replication when the fields are represented using more than
      1 byte (SET fields with more than 8 members or ENUM fields
      with more than 256 constants).
      
      The reason is that there are no pack() or unpack() functions
      for Field_set or Field_enum, which makes them rely on
      Field::pack and Field::unpack. These functions pack data as
      strings, but since Field_set and Field_enum use integral types
      for their representation, the fields are stored incorrectly on
      big-endian machines.
      
      This patch adds Field_enum::pack and Field_enum::unpack
      functions that store the integral value correctly in the binary
      log even on big-endian machines. Since Field_set inherits from
      Field_enum, it will use the same functions for packing and
      unpacking the field.
      
      sql/field.cc:
        Removing some obsolete debug printouts and adding Field_enum::pack
        and Field_enum::unpack functions.
      sql/field.h:
        Adding helper functions for packing and unpacking 16- and
        24-bit integral types.
        
        Field_short::pack and Field_short::unpack now use these functions.
      sql/rpl_record.cc:
        Removing some obsolete debug printouts and adding some
        more useful ones.
  2. 02 Oct, 2010 1 commit
  3. 28 Sep, 2010 3 commits
  4. 27 Sep, 2010 1 commit
  5. 28 Sep, 2010 1 commit
  6. 24 Sep, 2010 9 commits
  7. 23 Sep, 2010 1 commit
  8. 22 Sep, 2010 2 commits
  9. 21 Sep, 2010 4 commits
  10. 19 Sep, 2010 1 commit
  11. 17 Sep, 2010 7 commits
    • Bug#52419: x86 assembly based atomic CAS causes test failures · 580338dd
      Davi Arnaut authored
      The problem was that the x86 assembly based atomic CAS
      (compare and swap) implementation could copy the wrong
      value to the ebx register, where the cmpxchg8b instruction
      expects to see part of the "comparand" value. Since the
      original value of the ebx register is saved on the stack
      (that is, the push instruction changes the stack pointer),
      a wrong offset could be used if the compiler decides to
      place the source of the comparand value on the stack.
      
      The solution is to copy the comparand value directly from
      memory. Since the comparand value is 64-bits wide, it is
      copied in two steps over to the ebx and ecx registers.
      
      include/atomic/x86-gcc.h:
        For reference, an excerpt from a faulty binary follows.
        
        It is a disassembly of my_atomic-t, compiled at -O3 with
        ICC 11.0. Most of the code deals with preparations for
        an atomic cmpxchg8b operation. This instruction compares
        the value in edx:eax with the destination operand. If the
        values are equal, the value in ecx:ebx is stored in the
        destination, otherwise the value in the destination operand
        is copied into edx:eax.
        
        In this case, my_atomic_add64 is implemented as a compare
        and exchange. The addition is done over temporary storage
        and loaded into the destination if the original term value
        is still valid.
        
          volatile int64 a64;
          int64 b=0x1000200030004000LL;
          a64=0;
              mov    0xfffffda8(%ebx),%eax
              xor    %ebp,%ebp
              mov    %ebp,(%eax)
              mov    %ebp,0x4(%eax)
          my_atomic_add64(&a64, b);
              mov    0xfffffda8(%ebx),%ebp      # Load address of a64
              mov    0x0(%ebp),%edx             # Copy value
              mov    0x4(%ebp),%ecx
              mov    %edx,0xc(%esp)             # Assign to tmp var in the stack
              mov    %ecx,0x10(%esp)
              add    $0x30004000,%edx           # Sum values
              adc    $0x10002000,%ecx
              mov    %edx,0x8(%esp)             # Save part of result for later
              mov    0x0(%ebp),%esi             # Copy value of a64 again
              mov    0x4(%ebp),%edi
              mov    0xc(%esp),%eax             # Load the value of a64 used
              mov    0x10(%esp),%edx            # for comparison
              mov    %esi,(%esp)
              mov    %edi,0x4(%esp)
              push   %ebx                       # Push %ebx into stack. Changes esp.
              mov    0x8(%esp),%ebx             # Wrong restore of the result.
              lock cmpxchg8b 0x0(%ebp)
              sete   %cl
              pop    %ebx
    • Alfranio Correia · 6c6af49f
    • Bug#50557 checksum table crashes server when used in performance_schema · 788cf466
      Marc Alff authored
      CHECKSUM TABLE for performance schema tables could cause uninitialized
      memory reads.
      
      The root cause is a design flaw in the implementation of
      mysql_checksum_table(), which does not honor null fields.
      
      However, fixing this bug in CHECKSUM TABLE is risky, as it can cause the
      checksum value to change.
      
      This fix implements a workaround: systematically reset field
      values even for null fields, so that the field's memory
      representation is always initialized with a known value.
    • local merge · 80004750
      Marc Alff authored
    • Alfranio Correia · a3fa396e
    • Alfranio Correia
    • Bug#56832 perfschema.server_init test output not consistent · bffb4c1f
      Marc Alff authored
      Before this fix, the test output for perfschema.server_init would
      vary between executions, because some of the objects tested were
      not guaranteed to exist in all configurations / code paths.
      
      This fix removes these weak tests.
      
      Also, comments referring to abandoned code have been cleaned up.
  12. 16 Sep, 2010 8 commits
    • 5.1-bugteam->5.5-merge · a8fa4d44
      Sergey Glukhov authored
    • Bug#50402 Optimizer producing wrong results when using Index Merge on InnoDB · 648386d0
      Sergey Glukhov authored
      The subselect executes twice, at the JOIN::optimize stage
      and at the JOIN::execute stage. At the optimize stage, the
      InnoDB prebuilt struct, which is used for the retrieval of
      column values, is initialized in ha_innobase::index_read()
      while prebuilt->sql_stat_start is true. After
      QUICK_ROR_INTERSECT_SELECT finishes its job, it restores the
      read_set/write_set bitmaps to their initial values and
      deactivates one of the handlers used by
      QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup
      (this is the case when we reuse the original handler as one
       of the handlers required by the QUICK_ROR_INTERSECT_SELECT
       object). On the second subselect execution, the inactive
      handler is activated in QUICK_RANGE_SELECT::reset by
      file->ha_index_init(). In ha_index_init, the InnoDB prebuilt
      struct is reinitialized with inappropriate read_set/write_set
      bitmaps. Further reinitialization in ha_innobase::index_read()
      does not happen, because prebuilt->sql_stat_start is false.
      This leads to partial retrieval of the required field values,
      and we get a mix of field values from different records in
      the record buffer.
      The fix is to reset the read_set/write_set bitmaps, as these
      values are required for proper initialization of the internal
      InnoDB struct which is used for the retrieval of column values
      (see build_template(), ha_innodb.cc)
      
      
      mysql-test/include/index_merge_ror_cpk.inc:
        test case
      mysql-test/r/index_merge_innodb.result:
        test case
      mysql-test/r/index_merge_myisam.result:
        test case
      sql/opt_range.cc:
        if a ROR merge scan is used, we need to reset the
        read_set/write_set bitmaps, as these values are
        required for proper initialization of the internal
        InnoDB struct which is used for the retrieval of
        column values (see build_template(), ha_innodb.cc)
    • Merge from 5.1-bugteam · 72c5af0e
      Magne Mahre authored
    • Bug #54606 innodb fast alter table + pack_keys=0 prevents adding new indexes · fc8fba96
      Magne Mahre authored
      
      A fast alter table requires that the existing (old) table
      and indices are unchanged (i.e only new indices can be
      added).  To verify this, the layout and flags of the old
      table/indices are compared for equality with the new.
      
      The PACK_KEYS option is a no-op in InnoDB, but the flag
      exists, and is used in the table compare.  We need to
      check this (table) option flag before deciding whether an 
      index should be packed or not.  If the table has
      explicitly set PACK_KEYS to 0, the created indices should
      not be marked as packed/packable. 
    • Dmitry Shulga · 5bebedfe
    • Fixed bug#42503 - "Lost connection" errors when using compression protocol · 0c5890ab
      Dmitry Shulga authored
      
      The loss of connection was caused by a malformed packet
      sent by the server when the query cache was in use.
      When storing data in the query cache, the query cache
      memory allocation algorithm had a tendency to reduce
      the number of memory blocks necessary to store a result
      set, up to finally storing the entire result set in a single
      block. With a significant result set, this memory block
      could turn out to be quite large - 30, 40 MB and beyond.
      When such a result set was sent to the client, the entire
      memory block was compressed and written to the network as a
      single network packet. However, the length of a network
      packet is limited to 0xFFFFFF (16MB), since the packet
      format only allows 3 bytes for the packet length. As a
      result, a malformed, overly large packet with a truncated
      length would be sent to the client and break the
      client/server protocol.
      
      The solution is, when sending result sets from the query
      cache, to ensure that the data is chopped into
      network packets of size <= 16MB, so that there
      is no corruption of packet length. This solution,
      however, has a shortcoming: since the result set
      is still stored in the query cache as a single block,
      at the time of sending, we've lost boundaries of individual
      logical packets (one logical packet = one row of the result
      set) and thus can end up sending a truncated logical
      packet in a compressed network packet.
      
      As a result, on the client we may require more memory than
      max_allowed_packet to keep both the truncated last logical
      packet and the compressed next packet.
      This never (or in practice never) happens without compression,
      since without compression it's very unlikely that
      a) a truncated logical packet would remain on the client
      when it's time to read the next packet
      b) a subsequent logical packet that is being read would be
      so large that size-of-new-packet + size-of-old-packet-tail >
      max_allowed_packet.
      To remedy this issue, we send data in 1MB sized packets,
      that's below the current client default of 16MB for
      max_allowed_packet, but large enough to ensure there is no
      unnecessary overhead from too many syscalls per result set.
      
      
      sql/net_serv.cc:
        net_realloc() modified: consider already used memory
        when comparing packet buffer lengths
      sql/sql_cache.cc:
        modified Query_cache::send_result_to_client: send the result
        to the client in chunks limited to 1 megabyte.
    • Mikael Ronstrom · c539bc4e
    • Mikael Ronstrom
  13. 15 Sep, 2010 1 commit
    • Bug#56761 Segfault on CHECKSUM TABLE performance_schema.EVENTS_WAITS_HISTORY EXTENDED · c5ec3b3b
      Marc Alff authored
      Before this fix, the server could crash inside a memcpy when reading data
      from the EVENTS_WAITS_CURRENT / HISTORY / HISTORY_LONG  tables.
      
      The root cause is that the length used in a memcpy could be corrupted,
      when another thread writes data in the wait record being read.
      Reading unsafe data is ok, per design choice, and the code does sanitize
      the data in general, but did not sanitize the length given to memcpy.
      
      The fix is to also sanitize the schema name / object name / file name
      length when extracting the data to produce a row.